### What changes were proposed in this pull request?
Since a `Set` can't check whether a NaN value is contained in it, the generated code needs an explicit NaN check. With codegen, only when the value set contains NaN do we need to check whether the input value is NaN; otherwise we just need to check whether the `Set` contains the value.
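A minimal sketch of the idea with a hypothetical helper (the real change lives in `InSet`'s generated code):
```scala
// Hedged sketch, not the actual generated code: with primitive doubles,
// NaN == NaN is false, so membership needs an explicit isNaN branch;
// but only when the literal set is known to contain NaN.
def inSet(value: Double, set: Set[Double], setContainsNaN: Boolean): Boolean =
  if (setContainsNaN && java.lang.Double.isNaN(value)) true
  else set.contains(value)
```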
### Why are the changes needed?
Improve the generated code's performance by checking for NaN only when the `Set` contains NaN.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Existing UTs.
Closes #34097 from AngersZhuuuu/SPARK-36838.
Authored-by: Angerszhuuuu <angers.zhu@gmail.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
This PR aims to upgrade `kubernetes-client` dependency from 5.7.3 to 5.8.0 for Apache Spark 3.3.
### Why are the changes needed?
This will add K8s Model v1.22.1 for the developers who are using `ExternalClusterManager` in K8s environments.
- https://github.com/fabric8io/kubernetes-client/releases/tag/v5.8.0
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Pass the CIs.
Closes #34109 from dongjoon-hyun/SPARK-36859.
Authored-by: Dongjoon Hyun <dongjoon@apache.org>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
### What changes were proposed in this pull request?
Handle incorrect parsing of negative ANSI typed interval literals
[SPARK-36851](https://issues.apache.org/jira/browse/SPARK-36851)
### Why are the changes needed?
Incorrect result (the expected output is `-1-0`):
```
spark-sql> select interval -'1' year;
1-0
```
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Added a UT test case.
Closes #34107 from Peng-Lei/SPARK-36851.
Authored-by: PengLei <peng.8lei@gmail.com>
Signed-off-by: Gengliang Wang <gengliang@apache.org>
### What changes were proposed in this pull request?
Fix an issue where Maven may get stuck in an infinite loop when building Spark with the Hadoop 2.7 profile.
### Why are the changes needed?
After re-enabling `createDependencyReducedPom` for `maven-shade-plugin`, the Spark build stopped working for the Hadoop 2.7 profile and gets stuck in an infinite loop, likely due to a Maven shade plugin bug similar to https://issues.apache.org/jira/browse/MSHADE-148. This seems to be caused by the fact that, under the `hadoop-2.7` profile, the variables `hadoop-client-runtime.artifact` and `hadoop-client-api.artifact` are both `hadoop-client`, which triggers the issue.
As a workaround, this changes `hadoop-client-runtime.artifact` to be `hadoop-yarn-api` when using `hadoop-2.7`. Since `hadoop-yarn-api` is a dependency of `hadoop-client`, this essentially moves the former to the same level as the latter. It should have no effect as both are dependencies of Spark.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
N/A
Closes #34100 from sunchao/SPARK-36835-followup.
Authored-by: Chao Sun <sunchao@apple.com>
Signed-off-by: Gengliang Wang <gengliang@apache.org>
### What changes were proposed in this pull request?
Modify `OffHeapColumnVector.reserveInternal` to handle ANSI intervals - `DayTimeIntervalType` and `YearMonthIntervalType`.
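A hedged sketch of the idea (the real code is Java in `OffHeapColumnVector`; the helper below is illustrative): ANSI intervals are backed by primitive ints and longs, so they can reuse the existing 4-byte and 8-byte reservation paths.
```scala
import org.apache.spark.sql.types._

// Illustrative helper only: year-month intervals are int-backed (4 bytes per
// value), day-time intervals are long-backed (8 bytes per value).
def bytesPerValue(dt: DataType): Long = dt match {
  case IntegerType | _: YearMonthIntervalType => 4L
  case LongType | _: DayTimeIntervalType => 8L
  case other => throw new RuntimeException(s"Unhandled $other")
}
```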
### Why are the changes needed?
The changes fix the issue demonstrated by the example below:
```scala
scala> spark.conf.set("spark.sql.columnVector.offheap.enabled", true)
scala> spark.read.parquet("/Users/maximgekk/tmp/parquet_offheap").show()
21/09/25 22:09:03 ERROR Executor: Exception in task 0.0 in stage 3.0 (TID 3)
java.lang.RuntimeException: Unhandled YearMonthIntervalType(0,1)
at org.apache.spark.sql.execution.vectorized.OffHeapColumnVector.reserveInternal(OffHeapColumnVector.java:562)
```
SPARK-36854 shows how the parquet files in `/Users/maximgekk/tmp/parquet_offheap` were prepared.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
By running the modified test suite:
```
$ build/sbt "sql/test:testOnly *ParquetIOSuite"
```
Closes #34106 from MaxGekk/ansi-interval-OffHeapColumnVector.
Authored-by: Max Gekk <max.gekk@gmail.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
In the PR, I propose to add new tests to check:
1. Creating a table with ANSI intervals using parquet and the In-Memory catalog.
2. Parquet encoding of ANSI interval columns.
### Why are the changes needed?
To improve test coverage.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
By running the modified test suites:
```
$ build/sbt "sql/test:testOnly *ParquetEncodingSuite"
$ build/sbt "sql/test:testOnly *ParquetV1QuerySuite"
$ build/sbt "sql/test:testOnly *ParquetV2QuerySuite"
```
Closes #34105 from MaxGekk/interval-parquet-tests.
Authored-by: Max Gekk <max.gekk@gmail.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
This PR proposes to run daily build for Hadoop 2 profile in GitHub Actions.
This can be considered for backports to reduce conflicts.
### Why are the changes needed?
In order to improve test coverage and catch bugs, e.g. https://github.com/apache/spark/pull/34064
### Does this PR introduce _any_ user-facing change?
No, dev-only.
### How was this patch tested?
Being tested in my own fork.
Closes #34091 from HyukjinKwon/SPARK-36839.
Authored-by: Hyukjin Kwon <gurwls223@apache.org>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
Explicitly specifies error codes when ignoring type hint errors.
### Why are the changes needed?
We use a lot of `type: ignore` annotation to ignore type hint errors in pandas-on-Spark.
We should explicitly specify the error codes (e.g. `# type: ignore[attr-defined]` instead of a bare `# type: ignore`) to make it clear what kind of error is being ignored; the type hint checker can then check more cases.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Existing tests.
Closes #34102 from ueshin/issues/SPARK-36847/type_ignore.
Authored-by: Takuya UESHIN <ueshin@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
Don't open a pod watch when it isn't needed.
### Why are the changes needed?
In spark-submit, we currently open a pod watch for every Spark submission. If `WAIT_FOR_APP_COMPLETION` is false, we then immediately ignore the result of the watch and break out of it.
When submitting spark applications at scale, this is a source of operational pain, since opening the watch relies on opening a websocket, which tends to run into subtle networking issues around negotiating the websocket connection.
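A hedged sketch of the intended control flow (the names are illustrative, not the actual `Client` code):
```scala
// Only pay for the websocket-backed pod watch when the submitter actually
// waits for completion; otherwise just log and return.
if (waitForAppCompletion) {
  Utils.tryWithResource(client.pods().withName(driverPodName).watch(watcher)) { _ =>
    watcher.watchOrStop(submissionId)
  }
} else {
  logInfo(s"Deployed Spark application with submission ID $submissionId")
}
```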
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Standard tests
Closes #34095 from slothspot/spark-35174.
Authored-by: Dmytro Melnychenko <dmytro.i.am@gmail.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
### What changes were proposed in this pull request?
This PR proposes to improve simplifications of `EqualTo/EqualNullSafe` binary comparators when one side is a boolean literal.
For example: `EqualTo(predicate, TrueLiteral) => predicate`, `EqualNullSafe(predicate, TrueLiteral) => predicate if !predicate.nullable`
This PR helps push down filters and reduce unnecessary IO.
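A hedged sketch of the simplifications as Catalyst rewrite rules (illustrative; the exact rule code may differ):
```scala
import org.apache.spark.sql.catalyst.expressions.{EqualNullSafe, EqualTo}
import org.apache.spark.sql.catalyst.expressions.Literal.TrueLiteral

// FalseLiteral cases are analogous, wrapping the predicate in Not(...).
expr.transformUp {
  case EqualTo(p, TrueLiteral) => p
  case EqualTo(TrueLiteral, p) => p
  case EqualNullSafe(p, TrueLiteral) if !p.nullable => p
  case EqualNullSafe(TrueLiteral, p) if !p.nullable => p
}
```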
### Why are the changes needed?
The following query does not push down the filter in the current implementation
```
SELECT * FROM t WHERE (a AND b) = true
```
although the following equivalent query pushes down the filter as expected.
```
SELECT * FROM t WHERE (a AND b)
```
That is because the first query creates `EqualTo(And(a, b), TrueLiteral)`, which is simply not in a form that we can push down. However, we should be able to simplify it to `And(a, b)`.
It is fair for Spark SQL users to expect `(a AND b) = true` to perform the same as `(a AND b)`.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Added unit tests
```
build/sbt "testOnly *BooleanSimplificationSuite -- -z SPARK-36721"
```
Closes #34055 from kazuyukitanimura/SPARK-36721.
Authored-by: Kazuyuki Tanimura <ktanimura@apple.com>
Signed-off-by: Liang-Chi Hsieh <viirya@gmail.com>
### What changes were proposed in this pull request?
Improve the performance and memory usage of cleaning up stage UI data. The new code copies the essential fields (stage id, attempt id, completion time) into an array and, based on it, determines which stage data and `RDDOperationGraphWrapper` entries need to be cleaned up.
### Why are the changes needed?
Fix the memory usage issue described in https://issues.apache.org/jira/browse/SPARK-36827
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Add a new unit test for the InMemoryStore.
Also, run a simple benchmark with
```
// Assumes the enclosing test suite provides `conf` and `store`, and a running
// clock `time` (a var of type Long), as in AppStatusListenerSuite.
val testConf = conf.clone()
  .set(MAX_RETAINED_STAGES, 1000)
val listener = new AppStatusListener(store, testConf, true)
val stages = (1 to 5000).map { i =>
  val s = new StageInfo(i, 0, s"stage$i", 4, Nil, Nil, "details1",
    resourceProfileId = ResourceProfile.DEFAULT_RESOURCE_PROFILE_ID)
  s.submissionTime = Some(i.toLong)
  s
}
listener.onJobStart(SparkListenerJobStart(4, time, Nil, null))
val start = System.nanoTime()
stages.foreach { s =>
  time += 1
  s.submissionTime = Some(time)
  listener.onStageSubmitted(SparkListenerStageSubmitted(s, new Properties()))
  s.completionTime = Some(time)
  listener.onStageCompleted(SparkListenerStageCompleted(s))
}
println(System.nanoTime() - start)
```
Before changes:
InMemoryStore: 1.2s
After changes:
InMemoryStore: 0.23s
Closes #34092 from gengliangwang/cleanStage.
Authored-by: Gengliang Wang <gengliang@apache.org>
Signed-off-by: Gengliang Wang <gengliang@apache.org>
### What changes were proposed in this pull request?
The optimizer rule `CollapseProject` used to inline expressions with correlated scalar subqueries, which can lead to inefficient and invalid plans. This issue was fixed in #33958. This PR adds an additional test to verify the behavior.
### Why are the changes needed?
To make sure CollapseProject works with correlated scalar subqueries.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Unit test.
Closes #33903 from allisonwang-db/spark-36656-collapse-project-subquery.
Authored-by: allisonwang-db <allison.wang@databricks.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
### What changes were proposed in this pull request?
`InSet` should handle NaN:
```
InSet(Literal(Double.NaN), Set(Double.NaN, 1d)) should return true, but currently returns false.
```
### Why are the changes needed?
InSet should handle NaN
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Added UT
Closes #34033 from AngersZhuuuu/SPARK-36792.
Authored-by: Angerszhuuuu <angers.zhu@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
### What changes were proposed in this pull request?
Allow saving and loading of ANSI intervals - `YearMonthIntervalType` and `DayTimeIntervalType` to/from the Parquet datasource. After the changes, Spark saves ANSI intervals as primitive physical Parquet types:
- year-month intervals as `INT32`
- day-time intervals as `INT64`
without any modifications. To load the values back as intervals, Spark puts the info about the interval types into the extra key `org.apache.spark.sql.parquet.row.metadata`:
```
$ java -jar parquet-tools-1.12.0.jar meta ./part-...-c000.snappy.parquet
creator: parquet-mr version 1.12.1 (build 2a5c06c58fa987f85aa22170be14d927d5ff6e7d)
extra: org.apache.spark.version = 3.3.0
extra: org.apache.spark.sql.parquet.row.metadata = {"type":"struct","fields":[...,{"name":"i","type":"interval year to month","nullable":false,"metadata":{}}]}
file schema: spark_schema
--------------------------------------------------------------------------------
...
i: REQUIRED INT32 R:0 D:0
```
**Note:** The given PR focuses on support of ANSI intervals in the Parquet datasource when written or read as a column of a `Dataset`.
### Why are the changes needed?
To improve user experience with Spark SQL. At the moment, users can create ANSI intervals "inside" Spark or parallelize Java collections of `Period`/`Duration` objects, but cannot save the intervals to any built-in datasource. After the changes, users can save datasets/dataframes with year-month/day-time intervals and load them back later with Apache Spark.
For example:
```scala
scala> sql("select date'today' - date'2021-01-01' as diff").write.parquet("/Users/maximgekk/tmp/parquet_interval")
scala> val readback = spark.read.parquet("/Users/maximgekk/tmp/parquet_interval")
readback: org.apache.spark.sql.DataFrame = [diff: interval day]
scala> readback.printSchema
root
|-- diff: interval day (nullable = true)
scala> readback.show
+------------------+
| diff|
+------------------+
|INTERVAL '264' DAY|
+------------------+
```
### Does this PR introduce _any_ user-facing change?
In some sense, yes. Before the changes, users get an error while saving ANSI intervals as dataframe columns to parquet files; after the changes, the operation completes successfully.
### How was this patch tested?
1. By running the existing test suites:
```
$ build/sbt "test:testOnly *ParquetFileFormatV2Suite"
$ build/sbt "test:testOnly *FileBasedDataSourceSuite"
$ build/sbt "sql/test:testOnly *JsonV2Suite"
```
2. Added new tests:
```
$ build/sbt "sql/test:testOnly *ParquetIOSuite"
$ build/sbt "sql/test:testOnly *ParquetSchemaSuite"
```
Closes #34057 from MaxGekk/ansi-interval-save-parquet.
Authored-by: Max Gekk <max.gekk@gmail.com>
Signed-off-by: Max Gekk <max.gekk@gmail.com>
### What changes were proposed in this pull request?
This is a followup of https://github.com/apache/spark/pull/32816 to simplify and improve the code:
1. Add a `SkewJoinChildWrapper` to wrap the skew join children, so that `EnsureRequirements` rule will skip them and save time
2. Remove `SkewJoinAwareCost` and keep using `SimpleCost`. We can put `numSkews` in the first 32 bits.
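A hedged sketch of packing `numSkews` into the upper 32 bits of a single `Long` cost (illustrative; the exact encoding in `SimpleCost` may differ):
```scala
// Costs compare numSkews first, then the base cost, via one Long value.
def packCost(numSkews: Int, baseCost: Int): Long =
  (numSkews.toLong << 32) | (baseCost.toLong & 0xFFFFFFFFL)
```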
### Why are the changes needed?
Code simplification and improvement.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Existing tests.
Closes #34080 from cloud-fan/follow.
Authored-by: Wenchen Fan <wenchen@databricks.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
### What changes were proposed in this pull request?
Enable `createDependencyReducedPom` for Spark's Maven shade plugin so that the effective pom won't contain shaded artifacts such as `org.eclipse.jetty`.
### Why are the changes needed?
At the moment, the effective pom leaks transitive dependencies to downstream apps for those shaded artifacts, which potentially will cause issues.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
I manually tested and the `core/dependency-reduced-pom.xml` no longer contains dependencies such as `jetty-XX`.
Closes #34085 from sunchao/SPARK-36835.
Authored-by: Chao Sun <sunchao@apple.com>
Signed-off-by: Gengliang Wang <gengliang@apache.org>
### What changes were proposed in this pull request?
This reverts commit 866df69c62.
### Why are the changes needed?
After the change, environment variables were no longer substituted in user classpath entries. Please find an example in SPARK-35672.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Existing tests.
Closes #34082 from peter-toth/SPARK-35672-revert.
Authored-by: Peter Toth <peter.toth@gmail.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
This PR adds the pyspark API `df.withMetadata(columnName, metadata)`. The scala API is added in this PR https://github.com/apache/spark/pull/33853.
### What changes were proposed in this pull request?
To make it easy to use/modify the semantic annotation, we want a shorter API to update the metadata of a dataframe. Currently we have `df.withColumn("col1", col("col1").alias("col1", metadata=metadata))` to update the metadata without changing the column name, which is too verbose. We want a syntax sugar API `df.withMetadata("col1", metadata=metadata)` to achieve the same functionality.
### Why are the changes needed?
A bit of background on the frequency of updates: we are working on inferring semantic data types, using them in AutoML, and storing the semantic annotation in the metadata. So in many cases, we will suggest that the user update the metadata to correct a wrong inference or add an annotation for a weak inference.
### Does this PR introduce _any_ user-facing change?
Yes.
A syntax sugar API `df.withMetadata("col1", metadata=metadata)` to achieve the same functionality as `df.withColumn("col1", col("col1").alias("col1", metadata=metadata))`.
### How was this patch tested?
Doctest.
Closes #34021 from liangz1/withMetadataPython.
Authored-by: Liang Zhang <liang.zhang@databricks.com>
Signed-off-by: Weichen Xu <weichen.xu@databricks.com>
### What changes were proposed in this pull request?
This PR fixes a JavaDoc style error introduced in SPARK-36760 (#34073).
Due to this error, the build on GA fails and the following error message appears.
```
[error] * internal -> external data conversion.
```
### Why are the changes needed?
To recover GA.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Should be done by GA itself.
Closes #34078 from sarutak/fix-doc-error.
Authored-by: Kousuke Saruta <sarutak@oss.nttdata.com>
Signed-off-by: Kousuke Saruta <sarutak@oss.nttdata.com>
### What changes were proposed in this pull request?
Update the Javadoc.
### Why are the changes needed?
To highlight the difference between the new interface `SupportsPushDownV2Filters` and the old one, `SupportsPushDownFilters`.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Tests not needed; doc-only change.
Closes #34073 from huaxingao/followup.
Authored-by: Huaxin Gao <huaxin_gao@apple.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
### What changes were proposed in this pull request?
This is a follow-up of https://github.com/apache/spark/pull/34043. This PR proposes to handle only shuffle blocks in the separate thread pool and leave other blocks with the same behavior as before.
### Why are the changes needed?
To avoid any potential overhead.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Pass existing tests.
Closes #34076 from Ngone51/spark-36782-follow-up.
Authored-by: yi.wu <yi.wu@databricks.com>
Signed-off-by: Gengliang Wang <gengliang@apache.org>
### What changes were proposed in this pull request?
This PR fixes the 'options' description on `UnresolvedRelation`. The comment was added in https://github.com/apache/spark/pull/29535 but is no longer valid because V1 also uses `options` (and merges them with the table properties), per https://github.com/apache/spark/pull/29712.
This PR can go through from `master` to `branch-3.1`.
### Why are the changes needed?
To make `UnresolvedRelation.options`'s description clearer.
### Does this PR introduce _any_ user-facing change?
No, dev-only.
### How was this patch tested?
Scala linter by `dev/linter-scala`.
Closes #34075 from HyukjinKwon/minor-comment-unresolved-releation.
Authored-by: Hyukjin Kwon <gurwls223@apache.org>
Signed-off-by: Huaxin Gao <huaxin_gao@apple.com>
### What changes were proposed in this pull request?
Adds validation that the SQLSTATEs in the error class JSON are a subset of those provided in the README.
### Why are the changes needed?
Validation of error class JSON
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Unit test
Closes #33627 from karenfeng/check-sqlstates.
Authored-by: Karen Feng <karen.feng@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
This PR proposes improving test coverage for the pandas-on-Spark Series & Index code base, which lives in `series.py` and `indexes/*.py`.
This PR did the following to improve coverage:
- Add unittest for untested code
- Fix unittest which is not tested properly
- Remove unused code
**NOTE**: This PR does not only include test-only updates; for example, it also adds new warnings for `__xor__`, `__and__`, and `__or__`.
### Why are the changes needed?
To make the project healthier by improving coverage.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Unittest.
Closes #33844 from itholic/SPARK-36506.
Authored-by: itholic <haejoon.lee@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
Delegate potentially blocking call to `mapOutputTracker.updateMapOutput` from within `UpdateBlockInfo` from `dispatcher-BlockManagerMaster` to the threadpool to avoid blocking the endpoint. This code path is only accessed for `ShuffleIndexBlockId`, other blocks are still executed on the `dispatcher-BlockManagerMaster` itself.
Change `updateBlockInfo` to return `Future[Boolean]` instead of `Boolean`. Response will be sent to RPC caller upon successful completion of the future.
Introduce a unit test that forces `MapOutputTracker` to make a broadcast as part of `MapOutputTracker.serializeOutputStatuses` when running decommission tests.
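A hedged, self-contained sketch of the pattern (the names are illustrative, not the actual `BlockManagerMasterEndpoint` code):
```scala
import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}

// Stand-in for the endpoint's thread pool.
val askThreadPool: ExecutionContext =
  ExecutionContext.fromExecutor(Executors.newFixedThreadPool(4))

// Shuffle index blocks take the async path; other blocks stay on the
// dispatcher thread. The RPC reply is sent when the future completes.
def updateBlockInfo(isShuffleIndexBlock: Boolean)(update: () => Boolean): Future[Boolean] =
  if (isShuffleIndexBlock) Future(update())(askThreadPool)
  else Future.successful(update())
```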
### Why are the changes needed?
[SPARK-36782](https://issues.apache.org/jira/browse/SPARK-36782) describes a deadlock occurring if the `dispatcher-BlockManagerMaster` is allowed to block while waiting for write access to data structures.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Unit test as introduced in this PR.
---
Ping eejbyfeldt for notice.
Closes #34043 from f-thiele/SPARK-36782.
Lead-authored-by: Fabian A.J. Thiele <fabian.thiele@posteo.de>
Co-authored-by: Emil Ejbyfeldt <eejbyfeldt@liveintent.com>
Co-authored-by: Fabian A.J. Thiele <fthiele@liveintent.com>
Signed-off-by: Gengliang Wang <gengliang@apache.org>
### What changes were proposed in this pull request?
This PR adds a check in the optimizer rule `CollapseProject` to avoid combining Project with Aggregate when the project list contains one or more correlated scalar subqueries that reference the output of the aggregate. Combining Project with Aggregate can lead to an invalid plan after correlated subquery rewrite. This is because correlated scalar subqueries' references are used as join conditions, which cannot host aggregate expressions.
For example
```sql
select (select sum(c2) from t where c1 = cast(s as int)) from (select sum(c2) s from t)
```
```
== Optimized Logical Plan ==
Aggregate [sum(c2)#10L AS scalarsubquery(s)#11L]  <--- Aggregate has neither grouping nor aggregate expressions.
+- Project [sum(c2)#10L]
   +- Join LeftOuter, (c1#2 = cast(sum(c2#3) as int))  <--- Aggregate expression in join condition
      :- LocalRelation [c2#3]
      +- Aggregate [c1#2], [sum(c2#3) AS sum(c2)#10L, c1#2]
         +- LocalRelation [c1#2, c2#3]

java.lang.UnsupportedOperationException: Cannot generate code for expression: sum(input[0, int, false])
```
Currently, we only allow a correlated scalar subquery in Aggregate if it is also in the grouping expressions.
079a9c5292/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/subquery.scala (L661-L666)
### Why are the changes needed?
To fix an existing optimizer issue.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Unit test.
Closes #33990 from allisonwang-db/spark-36747-collapse-agg.
Authored-by: allisonwang-db <allison.wang@databricks.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
### What changes were proposed in this pull request?
The PR fixes SPARK-36791 by replacing JHS_POST with JHS_HOST.
### Why are the changes needed?
There is a spelling mistake in the running-on-yarn.md file where JHS_POST should be JHS_HOST.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Not needed for docs.
Closes #34031 from jiaoqingbo/jiaoqingbo.
Authored-by: jiaoqb <jiaoqb@asiainfo.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
Add secant and cosecant as R functions.
### Why are the changes needed?
[SEC and CSC have been added](https://github.com/apache/spark/pull/33988) thus these functions need R support.
### Does this PR introduce _any_ user-facing change?
Yes, users can now call those functions as R functions.
### How was this patch tested?
Unit tests added.
Closes #34067 from yutoacts/SPARK-36824.
Authored-by: Yuto Akutsu <yuto.akutsu@oss.nttdata.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
This PR adds the support of understanding `numpy.typing` package that's added from NumPy 1.21.
### Why are the changes needed?
For user-friendly return type specification in type hints for function apply APIs in pandas API on Spark.
### Does this PR introduce _any_ user-facing change?
Yes, this PR will enable users to specify return type as `numpy.typing.NDArray[...]` to internally specify pandas UDF's return type.
For example,
```python
import numpy as np
import numpy.typing as ntp
import pandas as pd
import pyspark.pandas as ps

pdf = pd.DataFrame(
    {"a": [1, 2, 3, 4, 5, 6, 7, 8, 9], "b": [[e] for e in [4, 5, 6, 3, 2, 1, 0, 0, 0]]},
    index=np.random.rand(9),
)
psdf = ps.from_pandas(pdf)

def func(x) -> ps.DataFrame[float, [int, ntp.NDArray[int]]]:
    return x

psdf.pandas_on_spark.apply_batch(func)
```
### How was this patch tested?
Unittest and e2e tests were added.
Closes #34028 from HyukjinKwon/SPARK-36708.
Authored-by: Hyukjin Kwon <gurwls223@apache.org>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
For query
```
select array_except(array(cast('nan' as double), 1d), array(cast('nan' as double)))
```
This returns `[NaN, 1d]`, but it should return `[1d]`.
The issue is caused by `OpenHashSet`, which can't handle `Double.NaN` and `Float.NaN` either.
This PR fixes the issue based on https://github.com/apache/spark/pull/33955
### Why are the changes needed?
Fix the bug described above.
### Does this PR introduce _any_ user-facing change?
Yes. `ArrayExcept` now handles equal `NaN` values, so the query above returns `[1d]`.
### How was this patch tested?
Added UT
Closes #33994 from AngersZhuuuu/SPARK-36753.
Authored-by: Angerszhuuuu <angers.zhu@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
### What changes were proposed in this pull request?
This PR fixes an issue when reading of a Parquet file written with legacy mode would fail due to incorrect Parquet LIST to ArrayType conversion.
The issue arises when using schema evolution and utilising the parquet-mr reader. 2-level LIST annotated types could be parsed incorrectly as 3-level LIST annotated types because their underlying element type does not match the full inferred Catalyst schema.
### Why are the changes needed?
It appears to be a long-standing issue with the legacy mode due to the imprecise check in ParquetRowConverter that was trying to determine Parquet backward compatibility using Catalyst schema: `DataType.equalsIgnoreCompatibleNullability(guessedElementType, elementType)` in https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala#L606.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Added a new test case in ParquetInteroperabilitySuite.scala.
Closes #34044 from sadikovi/parquet-legacy-write-mode-list-issue.
Authored-by: Ivan Sadikov <ivan.sadikov@databricks.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
### What changes were proposed in this pull request?
Use the full join condition instead of `nonEquiCond` for the fallback plan in the `JoinSelection.ExtractEquiJoinKeys` pattern.
### Why are the changes needed?
At `JoinSelection`, with `ExtractEquiJoinKeys`, we use `nonEquiCond` as the join condition of the fallback plan. That's wrong, since there must also exist some equi-join condition.
```
Seq(joins.BroadcastNestedLoopJoinExec(
planLater(left), planLater(right), buildSide, joinType, nonEquiCond))
```
But it should not manifest as a bug, since we always use SMJ as the default join strategy for `ExtractEquiJoinKeys`.
### Does this PR introduce _any_ user-facing change?
no
### How was this patch tested?
It's not a bug for now, but we should fix it in case this code path is used in the future.
Closes #34065 from ulysses-you/join-condition.
Authored-by: ulysses-you <ulyssesyou18@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
Co-Authored-By: DB Tsai <d_tsai@apple.com>
Co-Authored-By: Huaxin Gao <huaxin_gao@apple.com>
### What changes were proposed in this pull request?
This is the 2nd PR for V2 Filter support. This PR does the following:
- Add interface SupportsPushDownV2Filters
Future work:
- refactor `OrcFilters`, `ParquetFilters`, `JacksonParser`, `UnivocityParser` so both V1 file source and V2 file source can use them
- For V2 file sources: implement v2 filter -> parquet/orc filter. csv and Json don't have real filters, but we also need to change the current code to go v2 filter -> `JacksonParser`/`UnivocityParser`
- For V1 file source, keep what we currently have: v1 filter -> parquet/orc filter
- We don't need v1filter.toV2 and v2filter.toV1 since we have two separate paths
The reasons that we have reached the above conclusion:
- The major motivation to implement V2Filter is to eliminate the unnecessary conversion between Catalyst types and Scala types when using Filters.
- We provide this `SupportsPushDownV2Filters` in this PR so V2 data sources (e.g. Iceberg) can implement it and use V2 Filters
- There is lots of work to implement v2 filters in the V2 file sources, for the following reasons:
possible approaches for implementing V2Filter:
1. keep what we have for file source v1: v1 filter -> parquet/orc filter
file source v2 we will implement v2 filter -> parquet/orc filter
We don't need v1->v2 and v2->v1
problem with this approach: there is lots of code duplication
2. We will implement v2 filter -> parquet/orc filter
file source v1: v1 filter -> v2 filter -> parquet/orc filter
We will need V1 -> V2
This is the approach I am using in https://github.com/apache/spark/pull/33973
In that PR, I have
v2 orc: v2 filter -> orc filter
V1 orc: v1 -> v2 -> orc filter
v2 csv: v2->v1, new UnivocityParser
v1 csv: new UnivocityParser
v2 Json: v2->v1, new JacksonParser
v1 Json: new JacksonParser
csv and Json don't have real filters, they just use filter references, so it should be OK to use either v1 or v2. It's easier to use v1 because no change is needed.
I haven't finished parquet yet. The PR doesn't have the parquet V2Filter implementation, but I plan to have
v2 parquet: v2 filter -> parquet filter
v1 parquet: v1 -> v2 -> parquet filter
Problem with this approach:
1. It's not easy to implement V1->V2 because V2 filters have `LiteralValue` and need type info. We already lost the type information when we converted the Expression filter to a v1 filter.
2. parquet is OK
Use Timestamp as example, parquet filter takes long for timestamp
v2 parquet: v2 filter -> parquet filter
timestamp
Expression (Long) -> v2 filter (LiteralValue Long)-> parquet filter (Long)
V1 parquet: v1 -> v2 -> parquet filter
timestamp
Expression (Long) -> v1 filter (timestamp) -> v2 filter (LiteralValue Long)-> parquet filter (Long)
but we have a problem for orc because the orc filter takes java Timestamp
v2 orc: v2 filter -> orc filter
timestamp
Expression (Long) -> v2 filter (LiteralValue Long) -> orc filter (Timestamp)
V1 orc: v1 -> v2 -> orc filter
Expression (Long) -> v1 filter (timestamp) -> v2 filter (LiteralValue Long) -> orc filter (Timestamp)
This defeats the purpose of implementing v2 filters.
3. keep what we have for file source v1: v1 filter -> parquet/orc filter
file source v2: v2 filter -> v1 filter -> parquet/orc filter
We will need V2 -> V1
we have a similar problem as approach 2.
So the conclusion is: approach 1 (keep what we have for file source v1: v1 filter -> parquet/orc filter; for file source v2, implement v2 filter -> parquet/orc filter) is better, but there is lots of code duplication. We will need to refactor `OrcFilters`, `ParquetFilters`, `JacksonParser` and `UnivocityParser` so both the V1 and V2 file sources can use them.
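A hedged sketch of how a V2 source might adopt the new interface (method names follow my reading of the PR; the source itself is hypothetical):
```scala
import org.apache.spark.sql.connector.expressions.filter.Predicate
import org.apache.spark.sql.connector.read.{Scan, ScanBuilder, SupportsPushDownV2Filters}

class MyScanBuilder extends ScanBuilder with SupportsPushDownV2Filters {
  private var pushed: Array[Predicate] = Array.empty

  override def pushPredicates(predicates: Array[Predicate]): Array[Predicate] = {
    // Keep the predicates the source can evaluate natively; hand the rest
    // back to Spark for post-scan filtering.
    val (supported, unsupported) = predicates.partition(_.name() == "=")
    pushed = supported
    unsupported
  }

  override def pushedPredicates(): Array[Predicate] = pushed

  override def build(): Scan = ??? // source-specific scan construction
}
```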
### Why are the changes needed?
Use V2Filters to eliminate the unnecessary conversion between Catalyst types and Scala types.
### Does this PR introduce _any_ user-facing change?
no
### How was this patch tested?
Added new UTs.
Closes #34001 from huaxingao/v2filter.
Lead-authored-by: Huaxin Gao <huaxin_gao@apple.com>
Co-authored-by: Wenchen Fan <cloud0fan@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
### What changes were proposed in this pull request?
Disable tests related to LZ4 in `FileSourceCodecSuite`, `FileSuite` and `ParquetCompressionCodecPrecedenceSuite` when using `hadoop-2.7` profile.
### Why are the changes needed?
At the moment, parquet-mr uses the LZ4 compression codec provided by Hadoop, and only since HADOOP-17292 (in 3.3.1/3.4.0) has the latter included `lz4-java` to remove the restriction that the codec can only be run with the native library. As a consequence, the tests fail when using the `hadoop-2.7` profile.
### Does this PR introduce _any_ user-facing change?
No, it's just test.
### How was this patch tested?
Existing tests.
Closes #34064 from sunchao/SPARK-36820.
Authored-by: Chao Sun <sunchao@apple.com>
Signed-off-by: Liang-Chi Hsieh <viirya@gmail.com>
### What changes were proposed in this pull request?
Fix `pop` of Categorical Series to be consistent with the latest pandas (1.3.2) behavior.
### Why are the changes needed?
As reported in https://github.com/databricks/koalas/issues/2198, pandas API on Spark behaves differently from pandas for `pop` on a Categorical Series.
### Does this PR introduce _any_ user-facing change?
Yes, results of `pop` of Categorical Series change.
#### From
```py
>>> psser = ps.Series(["a", "b", "c", "a"], dtype="category")
>>> psser
0 a
1 b
2 c
3 a
dtype: category
Categories (3, object): ['a', 'b', 'c']
>>> psser.pop(0)
0
>>> psser
1 b
2 c
3 a
dtype: category
Categories (3, object): ['a', 'b', 'c']
>>> psser.pop(3)
0
>>> psser
1 b
2 c
dtype: category
Categories (3, object): ['a', 'b', 'c']
```
#### To
```py
>>> psser = ps.Series(["a", "b", "c", "a"], dtype="category")
>>> psser
0 a
1 b
2 c
3 a
dtype: category
Categories (3, object): ['a', 'b', 'c']
>>> psser.pop(0)
'a'
>>> psser
1 b
2 c
3 a
dtype: category
Categories (3, object): ['a', 'b', 'c']
>>> psser.pop(3)
'a'
>>> psser
1 b
2 c
dtype: category
Categories (3, object): ['a', 'b', 'c']
```
### How was this patch tested?
Unit tests.
Closes #34052 from xinrong-databricks/cat_pop.
Authored-by: Xinrong Meng <xinrong.meng@databricks.com>
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
### What changes were proposed in this pull request?
Users often use Ctrl+C to stop a SparkContext that is still starting up, e.g. while registering with YARN in client mode when resources are tight.
At that point, the SparkContext has not yet registered its shutdown hook, so `sc.stop()` is not invoked when the application exits.
We should register the shutdown hook earlier when starting a SparkContext.
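A hedged sketch of the registration (close to, but not necessarily identical to, the actual `SparkContext` code):
```scala
// Registering the hook before the blocking parts of startup means Ctrl+C
// during YARN registration still triggers sc.stop().
_shutdownHookRef = ShutdownHookManager.addShutdownHook(
  ShutdownHookManager.SPARK_CONTEXT_SHUTDOWN_PRIORITY) { () =>
  logInfo("Invoking stop() from shutdown hook")
  try {
    stop()
  } catch {
    case e: Throwable => logWarning("Ignoring exception while stopping SparkContext", e)
  }
}
```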
### Why are the changes needed?
Make sure we invoke `sc.stop()` when a starting SparkContext application is killed.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Not needed.
Closes #33869 from AngersZhuuuu/SPARK-36615.
Authored-by: Angerszhuuuu <angers.zhu@gmail.com>
Signed-off-by: Mridul Muralidharan <mridul<at>gmail.com>
### What changes were proposed in this pull request?
Remove `com.github.rdblue:brotli-codec:0.1.1` dependency.
### Why are the changes needed?
As Stephen Coy pointed out in the dev list, we should not have `com.github.rdblue:brotli-codec:0.1.1` dependency which is not available on Maven Central. This is to avoid possible artifact changes on `Jitpack.io`.
Also, the dependency is for tests only. I suggest that we remove it now to unblock the 3.2.0 release ASAP.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
GA tests.
Closes #34059 from gengliangwang/removeDeps.
Authored-by: Gengliang Wang <gengliang@apache.org>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
### What changes were proposed in this pull request?
Improve `filter` of single-indexed DataFrame by replacing a long Project with Filter or Join.
### Why are the changes needed?
When the given `items` have too many elements, a long Project is introduced.
We may replace that with `Column.isin` or joining depending on the length of `items` for better performance.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Unit tests.
Closes #33998 from xinrong-databricks/impr_filter.
Authored-by: Xinrong Meng <xinrong.meng@databricks.com>
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
### What changes were proposed in this pull request?
Change the `ColumnarBatch` class to be non-final.
### Why are the changes needed?
To support better vectorized reading in multiple data sources, `ColumnarBatch` needs to be extendable. For example, to support row-level delete (https://github.com/apache/iceberg/issues/3141) in Iceberg's vectorized read, we need to filter out deleted rows in a batch, which requires `ColumnarBatch` to be extendable.
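A hedged illustration of what extendability enables (the subclass below is hypothetical):
```scala
import org.apache.spark.sql.vectorized.{ColumnarBatch, ColumnVector}

// Hypothetical subclass: once ColumnarBatch is non-final, a reader can carry
// extra state, e.g. a mask of deleted row positions, alongside the batch.
class DeleteAwareColumnarBatch(
    columns: Array[ColumnVector],
    numRows: Int,
    val deletedPositions: java.util.BitSet)
  extends ColumnarBatch(columns, numRows)
```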
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
No test needed.
Closes #34054 from flyrain/columnarbatch-extendable.
Authored-by: Yufei Gu <yufei_gu@apple.com>
Signed-off-by: DB Tsai <d_tsai@apple.com>
### What changes were proposed in this pull request?
In the PR, I propose to modify `StructType` to support merging of ANSI interval types with different fields.
### Why are the changes needed?
This will allow merging of schemas from different datasource files.
### Does this PR introduce _any_ user-facing change?
No, the ANSI interval types haven't been released yet.
### How was this patch tested?
Added new test to `StructTypeSuite`.
Closes #34049 from MaxGekk/merge-ansi-interval-types.
Authored-by: Max Gekk <max.gekk@gmail.com>
Signed-off-by: Max Gekk <max.gekk@gmail.com>
### What changes were proposed in this pull request?
Refactor `_select_rows_by_iterable` in `iLocIndexer` to use `Column.isin`.
### Why are the changes needed?
For better performance.
After a rough benchmark, a long projection performs worse than `Column.isin`, even when the length of the filtering conditions exceeds `compute.isin_limit`.
So we use `Column.isin` instead.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Existing tests.
Closes #33964 from xinrong-databricks/iloc_select.
Authored-by: Xinrong Meng <xinrong.meng@databricks.com>
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
### What changes were proposed in this pull request?
Support dropping rows of a single-indexed DataFrame.
Dropping rows and columns at the same time is supported in this PR as well.
### Why are the changes needed?
To increase pandas API coverage.
### Does this PR introduce _any_ user-facing change?
Yes, dropping rows of a single-indexed DataFrame is supported now.
```py
>>> df = ps.DataFrame(np.arange(12).reshape(3, 4), columns=['A', 'B', 'C', 'D'])
>>> df
A B C D
0 0 1 2 3
1 4 5 6 7
2 8 9 10 11
```
#### From
```py
>>> df.drop([0, 1])
Traceback (most recent call last):
...
KeyError: [(0,), (1,)]
>>> df.drop([0, 1], axis=0)
Traceback (most recent call last):
...
NotImplementedError: Drop currently only works for axis=1
>>> df.drop(1)
Traceback (most recent call last):
...
KeyError: [(1,)]
>>> df.drop(index=1)
Traceback (most recent call last):
...
TypeError: drop() got an unexpected keyword argument 'index'
>>> df.drop(index=[0, 1], columns='A')
Traceback (most recent call last):
...
TypeError: drop() got an unexpected keyword argument 'index'
```
#### To
```py
>>> df.drop([0, 1])
A B C D
2 8 9 10 11
>>> df.drop([0, 1], axis=0)
A B C D
2 8 9 10 11
>>> df.drop(1)
A B C D
0 0 1 2 3
2 8 9 10 11
>>> df.drop(index=1)
A B C D
0 0 1 2 3
2 8 9 10 11
>>> df.drop(index=[0, 1], columns='A')
B C D
2 9 10 11
```
### How was this patch tested?
Unit tests.
Closes #33929 from xinrong-databricks/frame_drop.
Authored-by: Xinrong Meng <xinrong.meng@databricks.com>
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
### What changes were proposed in this pull request?
This PR aims to upgrade R from 3.6.3 to 4.0.4 in K8s R Docker image.
### Why are the changes needed?
The `openjdk:11-jre-slim` image was upgraded to `Debian 11`.
```
$ docker run -it openjdk:11-jre-slim cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
NAME="Debian GNU/Linux"
VERSION_ID="11"
VERSION="11 (bullseye)"
VERSION_CODENAME=bullseye
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
```
It causes `R 3.5` installation failures in our K8s integration test environment.
- https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/47953/
```
The following packages have unmet dependencies:
r-base-core : Depends: libicu63 (>= 63.1-1~) but it is not installable
Depends: libreadline7 (>= 6.0) but it is not installable
E: Unable to correct problems, you have held broken packages.
The command '/bin/sh -c apt-get update && apt install -y gnupg && echo "deb http://cloud.r-project.org/bin/linux/debian buster-cran35/" >> /etc/apt/sources.list && apt-key adv --keyserver keyserver.ubuntu.com --recv-key 'E19F5F87128899B192B1A2C2AD5F960A256A04AF' && apt-get update &&
apt install -y -t buster-cran35 r-base r-base-dev && rm -rf
```
### Does this PR introduce _any_ user-facing change?
Yes, this will recover the installation.
### How was this patch tested?
Succeeded to build the SparkR docker image in the K8s integration test in Jenkins CI.
- https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder-K8s/47959/
```
Successfully built 32e1a0cd5ff8
Successfully tagged kubespark/spark-r:3.3.0-SNAPSHOT_6e4f7e2d-054d-4978-812f-4f32fc546b51
```
Closes #34048 from dongjoon-hyun/SPARK-36806.
Authored-by: Dongjoon Hyun <dongjoon@apache.org>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
### What changes were proposed in this pull request?
This PR upgrades Kafka from `2.8.0` to `2.8.1`.
### Why are the changes needed?
Kafka `2.8.1` was released a few hours ago, and it includes a bunch of bug fixes.
https://downloads.apache.org/kafka/2.8.1/RELEASE_NOTES.html
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
CIs.
Closes #34050 from sarutak/upgrade-kafka-2.8.1.
Authored-by: Kousuke Saruta <sarutak@oss.nttdata.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
### What changes were proposed in this pull request?
Add new built-in SQL functions: secant and cosecant, and add them as Scala and Python functions.
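A hedged usage sketch, assuming the functions land as `sec` and `csc` in SQL and in `org.apache.spark.sql.functions` (`spark` is a `SparkSession` and `df` a DataFrame with a double column `angle`):
```scala
import org.apache.spark.sql.functions.{col, csc, sec}

// SQL side: secant and cosecant of an angle in radians.
spark.sql("SELECT sec(0.5) AS s, csc(0.5) AS c").show()

// Scala API side.
df.select(sec(col("angle")), csc(col("angle"))).show()
```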
### Why are the changes needed?
Cotangent is already supported in Spark SQL, but secant and cosecant are missing, though I believe they can be used as much as cot.
Related Links: [SPARK-20751](https://github.com/apache/spark/pull/17999) [SPARK-36660](https://github.com/apache/spark/pull/33906)
### Does this PR introduce _any_ user-facing change?
Yes, users can now use these functions.
### How was this patch tested?
Unit tests
Closes #33988 from yutoacts/SPARK-36683.
Authored-by: Yuto Akutsu <yuto.akutsu@oss.nttdata.com>
Signed-off-by: Kousuke Saruta <sarutak@oss.nttdata.com>
### What changes were proposed in this pull request?
This PR groups exception messages in core/src/main/scala/org/apache/spark/api.
### Why are the changes needed?
It will largely help with the standardization of error messages and their maintenance.
### Does this PR introduce _any_ user-facing change?
No. Error messages remain unchanged.
### How was this patch tested?
No new tests - pass all original tests to make sure it doesn't break any existing behavior.
Closes #33536 from dgd-contributor/SPARK-36101.
Authored-by: dgd-contributor <dgd_contributor@viettel.com.vn>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>