### What changes were proposed in this pull request?
Add `remove_categories` to `CategoricalAccessor` and `CategoricalIndex`.
### Why are the changes needed?
We should implement `remove_categories` in `CategoricalAccessor` and `CategoricalIndex`.
### Does this PR introduce _any_ user-facing change?
Yes, users will be able to use `remove_categories`.
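A minimal usage sketch, assuming the accessor mirrors pandas' `remove_categories` semantics (the values are illustrative):
```py
import pandas as pd
import pyspark.pandas as ps

psser = ps.from_pandas(pd.Series(pd.Categorical(["a", "b", "c"], categories=["a", "b", "c"])))
psser.cat.remove_categories("b")                             # "b" values become missing, as in pandas
ps.CategoricalIndex(["a", "b", "c"]).remove_categories("b")  # same operation on the index
```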
### How was this patch tested?
Added some tests.
Closes #33474 from ueshin/issues/SPARK-36249/remove_categories.
Authored-by: Takuya UESHIN <ueshin@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
GHA probably doesn't have the same resources as Jenkins, so reduce the number of executors from 5 to 3 and give them a bit more time to come up.
### Why are the changes needed?
The test is timing out in GHA.
### Does this PR introduce _any_ user-facing change?
No, test only change.
### How was this patch tested?
Ran it through GHA to verify there is no OOM during WorkerDecommissionExtended.
Closes #33467 from holdenk/SPARK-36246-WorkerDecommissionExtendedSuite-flakes-in-GHA.
Lead-authored-by: Holden Karau <holden@pigscanfly.ca>
Co-authored-by: Holden Karau <hkarau@netflix.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
Add `add_categories` to `CategoricalAccessor` and `CategoricalIndex`.
### Why are the changes needed?
We should implement `add_categories` in `CategoricalAccessor` and `CategoricalIndex`.
### Does this PR introduce _any_ user-facing change?
Yes, users will be able to use `add_categories`.
### How was this patch tested?
Added some tests.
Closes #33470 from ueshin/issues/SPARK-36214/add_categories.
Authored-by: Takuya UESHIN <ueshin@databricks.com>
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
### What changes were proposed in this pull request?
This PR adds, to the PySpark documentation, the version in which the pandas API on Spark was added.
### Why are the changes needed?
To document the version added.
### Does this PR introduce _any_ user-facing change?
No to end users; Spark 3.2 is not released yet.
### How was this patch tested?
Linter and documentation build.
Closes #33473 from HyukjinKwon/SPARK-36253.
Authored-by: Hyukjin Kwon <gurwls223@apache.org>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
This PR adds optimization for scalar and lateral subqueries with OneRowRelation as leaf nodes. It inlines such subqueries before decorrelation to avoid rewriting them as left outer joins. It also introduces a flag to turn on/off this optimization: `spark.sql.optimizer.optimizeOneRowRelationSubquery` (default: True).
For example:
```sql
select (select c1) from t
```
Analyzed plan:
```
Project [scalar-subquery#17 [c1#18] AS scalarsubquery(c1)#22]
: +- Project [outer(c1#18)]
: +- OneRowRelation
+- LocalRelation [c1#18, c2#19]
```
Optimized plan before this PR:
```
Project [c1#18#25 AS scalarsubquery(c1)#22]
+- Join LeftOuter, (c1#24 <=> c1#18)
:- LocalRelation [c1#18]
+- Aggregate [c1#18], [c1#18 AS c1#18#25, c1#18 AS c1#24]
+- LocalRelation [c1#18]
```
Optimized plan after this PR:
```
LocalRelation [scalarsubquery(c1)#22]
```
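A hedged sketch of toggling the new flag from PySpark (assumes an active `spark` session and the table `t` from the example above):
```py
# The optimization is on by default; disable it to fall back to the
# left-outer-join rewrite shown above.
spark.conf.set("spark.sql.optimizer.optimizeOneRowRelationSubquery", "false")
spark.sql("SELECT (SELECT c1) FROM t").explain()
```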
### Why are the changes needed?
To optimize query plans.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Added new unit tests.
Closes #33284 from allisonwang-db/spark-36063-optimize-subquery-one-row-relation.
Authored-by: allisonwang-db <allison.wang@databricks.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
### What changes were proposed in this pull request?
This PR upgrades `zstd-jni` from `1.5.0-2` to `1.5.0-3`.
`1.5.0-3` was released a few days ago.
This release resolves an issue about buffer size calculation, which can affect usage in Spark.
https://github.com/luben/zstd-jni/releases/tag/v1.5.0-3
### Why are the changes needed?
It might be a corner case that the skipping length is greater than `2^31 - 1`, but it can still affect Spark.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
CI.
Closes #33464 from sarutak/upgrade-zstd-jni-1.5.0-3.
Authored-by: Kousuke Saruta <sarutak@oss.nttdata.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
### What changes were proposed in this pull request?
Reworks [PR](https://github.com/apache/spark/pull/33212) with the review suggestions.
This PR makes `spark.read.json()` behave the same as the Datasource API `spark.read.format("json").load("path")`: Spark should turn a non-nullable schema into a nullable one by default when using `spark.read.json()`.
Here is an example:
```scala
val schema = StructType(Seq(StructField("value",
  StructType(Seq(
    StructField("x", IntegerType, nullable = false),
    StructField("y", IntegerType, nullable = false)
  )),
  nullable = true
)))
val testDS = Seq("""{"value":{"x":1}}""").toDS
spark.read
  .schema(schema)
  .json(testDS)
  .printSchema()
spark.read
  .schema(schema)
  .format("json")
  .load("/tmp/json/t1")
  .printSchema()
// root
// |-- value: struct (nullable = true)
// | |-- x: integer (nullable = true)
// | |-- y: integer (nullable = true)
```
Before this PR:
```
// output of spark.read.json()
root
|-- value: struct (nullable = true)
| |-- x: integer (nullable = false)
| |-- y: integer (nullable = false)
```
After this PR:
```
// output of spark.read.json()
root
|-- value: struct (nullable = true)
| |-- x: integer (nullable = true)
| |-- y: integer (nullable = true)
```
- `spark.read.csv()` also has the same problem.
- The Datasource API `spark.read.format("json").load("path")` already applies this logic when resolving the relation:
c77acf0bbc/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala (L415-L421)
### Does this PR introduce _any_ user-facing change?
Yes, `spark.read.json()` and `spark.read.csv()` no longer respect the nullability of the user-given schema and always turn it into a nullable schema by default.
### How was this patch tested?
New test.
Closes #33436 from cfmcgrady/SPARK-35912-v3.
Authored-by: Fu Chen <cfmcgrady@gmail.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
Add categories setter to `CategoricalAccessor` and `CategoricalIndex`.
### Why are the changes needed?
We should implement categories setter in `CategoricalAccessor` and `CategoricalIndex`.
### Does this PR introduce _any_ user-facing change?
Yes, users will be able to use categories setter.
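A minimal sketch of the setter, assuming it mirrors pandas' assignment syntax (the values are illustrative):
```py
import pandas as pd
import pyspark.pandas as ps

psser = ps.from_pandas(pd.Series(pd.Categorical(["a", "b", "c"])))
psser.cat.categories = ["x", "y", "z"]   # rename the categories, pandas-style

psidx = ps.CategoricalIndex(["a", "b", "c"])
psidx.categories = ["x", "y", "z"]
```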
### How was this patch tested?
Added some tests.
Closes #33448 from ueshin/issues/SPARK-36188/categories_setter.
Authored-by: Takuya UESHIN <ueshin@databricks.com>
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
### What changes were proposed in this pull request?
This fixes a case sensitivity issue for desc table commands with partition specified.
### Why are the changes needed?
bugfix
### Does this PR introduce _any_ user-facing change?
yes, but it's a bugfix
### How was this patch tested?
new tests
#### before
```
+-- !query
+DESC EXTENDED t PARTITION (C='Us', D=1)
+-- !query schema
+struct<>
+-- !query output
+org.apache.spark.sql.AnalysisException
+Partition spec is invalid. The spec (C, D) must match the partition spec (c, d) defined in table '`default`.`t`'
+
```
#### after
https://github.com/apache/spark/pull/33424/files#diff-554189c49950974a948f99fa9b7436f615052511660c6a0ae3062fa8ca0a327cR328
Closes #33424 from yaooqinn/SPARK-36213.
Authored-by: Kent Yao <yao@apache.org>
Signed-off-by: Kent Yao <yao@apache.org>
### What changes were proposed in this pull request?
For non-datasource Hive tables, e.g. tables written outside of Spark (through Hive or Trino), we have certain optimizations in Spark where we use Spark ORC and Parquet datasources to read these tables ([Ref](fbf53dee37/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala (L128))) rather than using the Hive serde.
If such a table contains a `path` property, Spark will try to list this path property in addition to the table location when creating an `InMemoryFileIndex`. ([Ref](fbf53dee37/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala (L575))) This can lead to wrong data if `path` property points to a directory location or an error if `path` is not a location. A concrete example is provided in [SPARK-28266 (comment)](https://issues.apache.org/jira/browse/SPARK-28266?focusedCommentId=17380170&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17380170).
Since these tables were not written through Spark, Spark should not interpret this `path` property as it can be set by an external system with a different meaning.
### Why are the changes needed?
For better compatibility with Hive tables generated by other platforms (non-Spark)
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Added unit test
Closes #33328 from shardulm94/spark-28266.
Authored-by: Shardul Mahadik <smahadik@linkedin.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
### What changes were proposed in this pull request?
Sometimes, AQE skew join optimization can fail with an NPE. This is because AQE tries to get the shuffle block sizes, but some map outputs are missing due to executor loss or other reasons.
This PR fixes this bug by skipping skew join handling if some map outputs are missing in the `MapOutputTracker`.
### Why are the changes needed?
bug fix
### Does this PR introduce _any_ user-facing change?
no
### How was this patch tested?
a new UT
Closes #33445 from cloud-fan/bug.
Authored-by: Wenchen Fan <wenchen@databricks.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
### What changes were proposed in this pull request?
This PR changes `BaseScriptTransformationExec` so that `SparkScriptTransformationExec` supports ANSI interval types.
### Why are the changes needed?
`SparkScriptTransformationExec` supports `CalendarIntervalType`, so it's better to support ANSI interval types as well.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
New test.
Closes #33419 from sarutak/script-transformation-interval.
Authored-by: Kousuke Saruta <sarutak@oss.nttdata.com>
Signed-off-by: Max Gekk <max.gekk@gmail.com>
### What changes were proposed in this pull request?
This is a followup of https://github.com/apache/spark/pull/33222 .
https://github.com/apache/spark/pull/33222 made a mistake: `RemoveRedundantProjects` may lose the `LOGICAL_PLAN_TAG` tag even though the logical plan link is retained. This was actually caught by the test `LogicalPlanTagInSparkPlanSuite`, but it was not taken care of.
There is no problem so far, but losing information can always lead to potential bugs.
### Why are the changes needed?
fix a mistake
### Does this PR introduce _any_ user-facing change?
no
### How was this patch tested?
existing test
Closes #33442 from cloud-fan/minor.
Authored-by: Wenchen Fan <wenchen@databricks.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
### What changes were proposed in this pull request?
Adding support for accepting an initial state with flatMapGroupsWithState in batch mode.
### Why are the changes needed?
SPARK-35897 added support for accepting an initial state for streaming queries using flatMapGroupsWithState. The code flow is separate for batch and streaming and required a different PR.
### Does this PR introduce _any_ user-facing change?
Yes. As discussed above, `flatMapGroupsWithState` in batch mode can now accept an initial state; previously this would throw an `UnsupportedOperationException`.
### How was this patch tested?
Added relevant unit tests in FlatMapGroupsWithStateSuite and modified the tests in `JavaDatasetSuite`.
Closes #33336 from rahulsmahadev/flatMapGroupsWithStateBatch.
Authored-by: Rahul Mahadev <rahul.mahadev@databricks.com>
Signed-off-by: Tathagata Das <tathagata.das1565@gmail.com>
### What changes were proposed in this pull request?
Removes `FileFormatDataWriterMetricSuite`, which is a duplicate.
### Why are the changes needed?
`FileFormatDataWriterMetricSuite` should have been renamed to `InMemoryTableMetricSuite`, but it was wrongly copied instead.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Existing tests.
Closes #33453 from viirya/SPARK-36030-followup.
Authored-by: Liang-Chi Hsieh <viirya@gmail.com>
Signed-off-by: Liang-Chi Hsieh <viirya@gmail.com>
### What changes were proposed in this pull request?
This PR avoids using procedure syntax, which is deprecated in Scala 2.13.
https://github.com/apache/spark/runs/3120481756?check_suite_focus=true
```
[error] /home/runner/work/spark/spark/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/FileFormatDataWriterMetricSuite.scala:44:90: procedure syntax is deprecated: instead, add `: Unit =` to explicitly declare `testMetricOnDSv2`'s return type
[error] private def testMetricOnDSv2(func: String => Unit, checker: Map[Long, String] => Unit) {
[error] ^
[error] /home/runner/work/spark/spark/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/InMemoryTableMetricSuite.scala:44:90: procedure syntax is deprecated: instead, add `: Unit =` to explicitly declare `testMetricOnDSv2`'s return type
[error] private def testMetricOnDSv2(func: String => Unit, checker: Map[Long, String] => Unit) {
[error] ^
[warn] 100 warnings found
[error] two errors found
[error] (sql / Test / compileIncremental) Compilation failed
[error] Total time: 579 s (09:39), completed Jul 21, 2021 4:14:26 AM
```
### Why are the changes needed?
To make the build compatible with Scala 2.13 in Spark.
### Does this PR introduce _any_ user-facing change?
No, dev-only.
### How was this patch tested?
Manually tested:
```bash
./dev/change-scala-version.sh 2.13
./build/mvn -DskipTests -Phive-2.3 -Phive clean package -Pscala-2.13
```
Closes #33452 from HyukjinKwon/SPARK-36030.
Authored-by: Hyukjin Kwon <gurwls223@apache.org>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
We added the interface for DS v2 metrics in SPARK-34366, but only for the read path. This patch extends the metrics interface to the write path.
### Why are the changes needed?
Completes DS v2 metrics interface support on the write path.
### Does this PR introduce _any_ user-facing change?
No. For developers, yes, as this adds metrics support to the DS v2 write path.
### How was this patch tested?
Added test.
Closes #33239 from viirya/v2-write-metrics.
Authored-by: Liang-Chi Hsieh <viirya@gmail.com>
Signed-off-by: Liang-Chi Hsieh <viirya@gmail.com>
### What changes were proposed in this pull request?
Update transform's doc to match the latest code.
![image](https://user-images.githubusercontent.com/46485123/126175747-672cccbc-4e42-440f-8f1e-f00b6dc1be5f.png)
### Why are the changes needed?
Keep the documentation consistent with the code.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
No
Closes #33362 from AngersZhuuuu/SPARK-36153.
Lead-authored-by: Angerszhuuuu <angers.zhu@gmail.com>
Co-authored-by: AngersZhuuuu <angers.zhu@gmail.com>
Signed-off-by: Sean Owen <srowen@gmail.com>
### What changes were proposed in this pull request?
Spark 3.2.0 will use parquet-mr 1.12.0 (or higher), which contains the column encryption feature that can be called from Spark SQL. The aim of this PR is to document the use of Parquet encryption in Spark.
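A hedged PySpark sketch of the kind of usage such documentation covers, based on parquet-mr's `PropertiesDrivenCryptoFactory`; the KMS client class, key names, and output path are illustrative assumptions, not the exact content of this doc change:
```py
from pyspark.sql import SparkSession

# Hadoop configs under the spark.hadoop.* prefix reach parquet-mr's crypto factory.
spark = (SparkSession.builder
         .config("spark.hadoop.parquet.crypto.factory.class",
                 "org.apache.parquet.crypto.keytools.PropertiesDrivenCryptoFactory")
         .config("spark.hadoop.parquet.encryption.kms.client.class",
                 "com.example.MyKmsClient")  # hypothetical KMS client implementation
         .getOrCreate())

(spark.range(10).write
    .option("parquet.encryption.column.keys", "keyA:id")  # master key -> encrypted columns
    .option("parquet.encryption.footer.key", "keyB")
    .parquet("/tmp/encrypted_table"))
```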
### Why are the changes needed?
- To provide information on how to use Parquet column encryption
### Does this PR introduce _any_ user-facing change?
Yes, documents a new feature.
### How was this patch tested?
bundle exec jekyll build
Closes #32895 from ggershinsky/parquet-encryption-doc.
Authored-by: Gidon Gershinsky <ggershinsky@apple.com>
Signed-off-by: Sean Owen <srowen@gmail.com>
### What changes were proposed in this pull request?
1. add "closeStreams" to FileAppender and RollingFileAppender
2. set "closeStreams" to "true" in ExecutorRunner
### Why are the changes needed?
The executor will hang due to a full disk or other exceptions that happen while writing to the output stream; the root cause is that the "inputStream" is not closed after the error happens:
1. ExecutorRunner creates two file appenders for the pipe: one for stdout, one for stderr
2. FileAppender.appendStreamToFile exits the loop when an error occurs while writing to the outputStream
3. FileAppender closes the outputStream, but leaves the inputStream, which refers to the pipe's stdout and stderr, open
4. The executor will hang when printing the log message if the pipe is full (no one consumes the output)
5. From the driver side, you can see the task can never be completed
With this fix, step 4 will throw an exception; the driver can catch the exception and reschedule the failed task on other executors.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Add new tests for the "closeStreams" in FileAppenderSuite
Closes #33263 from jhu-chang/SPARK-35027.
Authored-by: Jie <gt.hu.chang@gmail.com>
Signed-off-by: Sean Owen <srowen@gmail.com>
### What changes were proposed in this pull request?
Add `as_ordered`/`as_unordered` to `CategoricalAccessor` and `CategoricalIndex`.
### Why are the changes needed?
We should implement `as_ordered`/`as_unordered` in `CategoricalAccessor` and `CategoricalIndex`, which are not implemented yet.
### Does this PR introduce _any_ user-facing change?
Yes, users will be able to use `as_ordered`/`as_unordered`.
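A minimal usage sketch, assuming the methods mirror pandas (the values are illustrative):
```py
import pandas as pd
import pyspark.pandas as ps

psser = ps.from_pandas(pd.Series(pd.Categorical(["a", "b", "c"])))
psser.cat.as_ordered()      # returns a Series with an ordered categorical dtype
psser.cat.as_unordered()    # and back to unordered
ps.CategoricalIndex(["a", "b", "c"]).as_ordered()
```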
### How was this patch tested?
Added some tests.
Closes #33400 from ueshin/issues/SPARK-36186/as_ordered_unordered.
Authored-by: Takuya UESHIN <ueshin@databricks.com>
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
### What changes were proposed in this pull request?
The current implementation of the `Sequence` expression does not support stepping by days for dates.
```
spark-sql> select sequence(date'2021-07-01', date'2021-07-10', interval '3' day);
Error in query: cannot resolve 'sequence(DATE '2021-07-01', DATE '2021-07-10', INTERVAL '3' DAY)' due to data type mismatch:
sequence uses the wrong parameter type. The parameter type must conform to:
1. The start and stop expressions must resolve to the same type.
2. If start and stop expressions resolve to the 'date' or 'timestamp' type
then the step expression must resolve to the 'interval' or
'interval year to month' or 'interval day to second' type,
otherwise to the same type as the start and stop expressions.
; line 1 pos 7;
'Project [unresolvedalias(sequence(2021-07-01, 2021-07-10, Some(INTERVAL '3' DAY), Some(Europe/Moscow)), None)]
+- OneRowRelation
```
### Why are the changes needed?
A `DayTimeInterval` with day granularity should be usable as a step for dates.
### Does this PR introduce _any_ user-facing change?
Yes.
The Sequence expression will support stepping dates by a `DayTimeInterval` with day granularity.
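A hedged sketch of the behavior after this change (assumes an active `spark` session; the expected values simply follow from stepping the start date by 3 days):
```py
spark.sql(
    "SELECT sequence(date'2021-07-01', date'2021-07-10', interval '3' day) AS dates"
).show(truncate=False)
# Expected: [2021-07-01, 2021-07-04, 2021-07-07, 2021-07-10]
```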
### How was this patch tested?
New tests.
Closes #33439 from beliefer/SPARK-36222.
Authored-by: gengjiaan <gengjiaan@360.cn>
Signed-off-by: Max Gekk <max.gekk@gmail.com>
### What changes were proposed in this pull request?
Preserve the insertion order of columns in Dataset.withColumns
### Why are the changes needed?
It is the expected behavior. We preserve insertion order in all other places.
### Does this PR introduce _any_ user-facing change?
No. Currently Dataset.withColumns is not actually used anywhere to insert more than one column. This change is to make sure it behaves as expected when it is used for that purpose in future.
### How was this patch tested?
Added test in DatasetSuite
Closes #33423 from koertkuipers/feat-withcolumns-preserve-order.
Authored-by: Koert Kuipers <koert@tresata.com>
Signed-off-by: Liang-Chi Hsieh <viirya@gmail.com>
### What changes were proposed in this pull request?
Forces the selectivity estimate for null-based filters to be in the range `[0,1]`.
### Why are the changes needed?
I noticed in a few TPC-DS query tests that the column statistic null count can be higher than the table statistic row count. In the current implementation, the selectivity estimate for `IsNotNull` is negative.
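A minimal sketch of the clamping idea, not Spark's internal code (the function name and formula placement are illustrative):
```py
def is_not_null_selectivity(null_count: int, row_count: int) -> float:
    # 1 - nullCount / rowCount goes negative when the column's null count exceeds
    # the (possibly stale) table row count; clamp the estimate into [0, 1].
    if row_count <= 0:
        return 1.0
    raw = 1.0 - null_count / row_count
    return min(1.0, max(0.0, raw))
```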
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Unit test
Closes #33286 from karenfeng/bound-selectivity-est.
Authored-by: Karen Feng <karen.feng@databricks.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
### What changes were proposed in this pull request?
This PR follows https://github.com/apache/spark/pull/33299 and implements `prettyName` for `MakeTimestampNTZ` and `MakeTimestampLTZ` based on the discussion shown below:
https://github.com/apache/spark/pull/33299/files#r668423810
### Why are the changes needed?
This PR fixes the incorrect alias use case.
### Does this PR introduce _any_ user-facing change?
No.
Modifications are transparent to users.
### How was this patch tested?
Jenkins test.
Closes #33430 from beliefer/SPARK-36046-followup.
Authored-by: gengjiaan <gengjiaan@360.cn>
Signed-off-by: Gengliang Wang <gengliang@apache.org>
### What changes were proposed in this pull request?
The Scala 2.13 daily job was added, but ideally we should deduplicate it. This PR targets deduplicating it by creating one more job (`configure-jobs`) that the main job depends on.
`configure-jobs` will set the branch, envs, etc. so that the main build runs properly.
### Why are the changes needed?
To make the maintenance easier
### Does this PR introduce _any_ user-facing change?
No, dev-only.
### How was this patch tested?
See
- https://github.com/HyukjinKwon/spark/actions/runs/1044636792 for a PR
- https://github.com/HyukjinKwon/spark/actions/runs/1048542984 for a cron job
Closes #33410 from HyukjinKwon/SPARK-36204.
Authored-by: Hyukjin Kwon <gurwls223@apache.org>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
Expose databaseExists in pyspark.sql.catalog
### Why are the changes needed?
It was available in Scala, but not in PySpark.
### Does this PR introduce _any_ user-facing change?
Yes, a new `databaseExists` method.
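Minimal usage sketch (assumes an active `spark` session):
```py
spark.catalog.databaseExists("default")      # True
spark.catalog.databaseExists("no_such_db")   # False
```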
### How was this patch tested?
Unit tests in codebase
Closes #33416 from dominikgehl/feature/SPARK-36207.
Lead-authored-by: Dominik Gehl <dog@open.ch>
Co-authored-by: Dominik Gehl <gehl@fastmail.fm>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
When an inner field has a wrong schema field name, we should check the field name too.
![image](https://user-images.githubusercontent.com/46485123/126101009-c192d87f-1e18-4355-ad53-1419dacdeb76.png)
### Why are the changes needed?
Check early and fail early.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Added UT
Closes #33409 from AngersZhuuuu/SPARK-36201.
Authored-by: Angerszhuuuu <angers.zhu@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
### What changes were proposed in this pull request?
Linking to location of SQL API documentation, making it easier and quicker to find it.
### Why are the changes needed?
documentation clarity
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Only documentation change
Closes #33435 from dominikgehl/feature/SPARK-31907.
Lead-authored-by: Dominik Gehl <dog@open.ch>
Co-authored-by: Dominik Gehl <gehl@fastmail.fm>
Signed-off-by: Sean Owen <srowen@gmail.com>
### What changes were proposed in this pull request?
* Add non-empty partition check in `CustomShuffleReaderExec`
* Make sure `OptimizeLocalShuffleReader` doesn't return empty partition
### Why are the changes needed?
Since SPARK-32083, AQE coalescing always returns at least one partition, so it makes the code more robust to add a non-empty check in `CustomShuffleReaderExec`.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Not needed.
Closes #33431 from ulysses-you/non-empty-partition.
Authored-by: ulysses-you <ulyssesyou18@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
### What changes were proposed in this pull request?
This PR documents the unicode literals added in SPARK-34051 (#31096) and a past PR in `sql-ref-literals.md`.
### Why are the changes needed?
To inform users about the literals.
### Does this PR introduce _any_ user-facing change?
Yes, but it just adds a sentence.
### How was this patch tested?
Built the document and confirmed the result.
```
SKIP_API=1 bundle exec jekyll build
```
![unicode-literals](https://user-images.githubusercontent.com/4736016/126283923-944dc162-1817-47bc-a7e8-c3145225586b.png)
Closes #33434 from sarutak/unicode-literal-doc.
Authored-by: Kousuke Saruta <sarutak@oss.nttdata.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
### What changes were proposed in this pull request?
This is one of the patches for SPIP SPARK-30602 which is needed for push-based shuffle.
### Summary of the change:
When an Executor registers with the Shuffle Service, it will encode the merged shuffle dir created and the application attemptId into the ShuffleManagerMeta as JSON. Then the Shuffle Service will decode the JSON string and get the correct merged shuffle dir and the attemptId. If the registration comes from a newer attempt, the merged shuffle information will be updated to store the information from the newer attempt.
This PR also refactored the management of the merged shuffle information to avoid concurrency issues.
### Why are the changes needed?
Refer to the SPIP in SPARK-30602.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Added unit tests.
The reference PR with the consolidated changes covering the complete implementation is also provided in SPARK-30602.
We have already verified the functionality and the improved performance as documented in the SPIP doc.
Closes #33078 from zhouyejoe/SPARK-35546.
Authored-by: Ye Zhou <yezhou@linkedin.com>
Signed-off-by: Mridul Muralidharan <mridul<at>gmail.com>
### What changes were proposed in this pull request?
Test is flaky (https://github.com/apache/spark/runs/3109815586):
```
Traceback (most recent call last):
File "/__w/spark/spark/python/pyspark/mllib/tests/test_streaming_algorithms.py", line 391, in test_parameter_convergence
eventually(condition, catch_assertions=True)
File "/__w/spark/spark/python/pyspark/testing/utils.py", line 91, in eventually
raise lastValue
File "/__w/spark/spark/python/pyspark/testing/utils.py", line 82, in eventually
lastValue = condition()
File "/__w/spark/spark/python/pyspark/mllib/tests/test_streaming_algorithms.py", line 387, in condition
self.assertEqual(len(model_weights), len(batches))
AssertionError: 9 != 10
```
We should probably increase the timeout.
### Why are the changes needed?
To avoid flakiness in the test.
### Does this PR introduce _any_ user-facing change?
Nope, dev-only.
### How was this patch tested?
CI should test it out.
Closes #33427 from HyukjinKwon/SPARK-36216.
Authored-by: Hyukjin Kwon <gurwls223@apache.org>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
Support TimestampNTZType in SparkGetColumnsOperation
### Why are the changes needed?
TimestampNTZType coverage
### Does this PR introduce _any_ user-facing change?
Yes, JDBC end-users will be aware of `TimestampNTZType`.
### How was this patch tested?
add new test
Closes #33393 from yaooqinn/SPARK-36179.
Authored-by: Kent Yao <yao@apache.org>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
exposing tableExists in pyspark.sql.catalog
### Why are the changes needed?
avoids pyspark users having to go through listTables
### Does this PR introduce _any_ user-facing change?
Yes, an additional `tableExists` method is available in PySpark.
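Minimal usage sketch (assumes an active `spark` session; the table name is illustrative):
```py
spark.range(1).write.saveAsTable("people")
spark.catalog.tableExists("people")           # True
spark.catalog.tableExists("no_such_table")    # False
```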
### How was this patch tested?
test added
Closes #33388 from dominikgehl/feature/SPARK-36176.
Authored-by: Dominik Gehl <dog@open.ch>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
Fix test failures with `pandas < 1.2`.
### Why are the changes needed?
There are some test failures with `pandas < 1.2`.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Fixed tests.
Closes #33398 from ueshin/issues/SPARK-36167/test.
Authored-by: Takuya UESHIN <ueshin@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
Support comparison between a Categorical and a scalar.
There are 3 main changes:
- Modify `==` and `!=` from comparing **codes** of the Categorical to the scalar to comparing **actual values** of the Categorical to the scalar.
- Support `<`, `<=`, `>`, `>=` between a Categorical and a scalar.
- TypeError message fix.
### Why are the changes needed?
pandas supports comparison between a Categorical and a scalar, so we should follow pandas' behavior.
### Does this PR introduce _any_ user-facing change?
Yes.
Before:
```py
>>> import pyspark.pandas as ps
>>> import pandas as pd
>>> from pandas.api.types import CategoricalDtype
>>> pser = pd.Series(pd.Categorical([1, 2, 3], categories=[3, 2, 1], ordered=True))
>>> psser = ps.from_pandas(pser)
>>> psser == 2
0 True
1 False
2 False
dtype: bool
>>> psser <= 1
Traceback (most recent call last):
...
NotImplementedError: <= can not be applied to categoricals.
```
After:
```py
>>> import pyspark.pandas as ps
>>> import pandas as pd
>>> from pandas.api.types import CategoricalDtype
>>> pser = pd.Series(pd.Categorical([1, 2, 3], categories=[3, 2, 1], ordered=True))
>>> psser = ps.from_pandas(pser)
>>> psser == 2
0 False
1 True
2 False
dtype: bool
>>> psser <= 1
0 True
1 True
2 True
dtype: bool
```
### How was this patch tested?
Unit tests.
Closes #33373 from xinrong-databricks/categorical_eq.
Authored-by: Xinrong Meng <xinrong.meng@databricks.com>
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
### What changes were proposed in this pull request?
This is a followup PR for SPARK-36166 (#33411), which adds `BLOCK_SCALA_VERSION` to `sparktestsupport/__init__.py`.
### Why are the changes needed?
The following command fails because the definition is missing.
```
SCALA_PROFILE=scala2.12 dev/run-tests.py
```
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
The command shown above works.
Closes #33421 from sarutak/followup-SPARK-36166.
Authored-by: Kousuke Saruta <sarutak@oss.nttdata.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
The current implementation of `TimeWindow` only supports `TimestampType`. Spark added a new type, `TimestampNTZType`, so we should support `TimestampNTZType` in the `TimeWindow` expression.
### Why are the changes needed?
`TimestampNTZType` is similar to `TimestampType`, so we should support `TimestampNTZType` in the `TimeWindow` expression.
### Does this PR introduce _any_ user-facing change?
Yes.
`TimeWindow` will accept `TimestampNTZType`.
### How was this patch tested?
New tests.
Closes #33341 from beliefer/SPARK-36091.
Lead-authored-by: gengjiaan <gengjiaan@360.cn>
Co-authored-by: Jiaan Geng <beliefer@163.com>
Signed-off-by: Gengliang Wang <gengliang@apache.org>
### What changes were proposed in this pull request?
The `DataFrame.to_csv` has `mode` arguments both in pandas and pandas API on Spark.
However, pandas allows the strings "w", "w+", "a", "a+", whereas pandas-on-Spark allows "append", "overwrite", "ignore", "error" or "errorifexists".
We should map them while `mode` can still accept the existing parameters("append", "overwrite", "ignore", "error" or "errorifexists") as well.
### Why are the changes needed?
APIs in pandas-on-Spark should follow the behavior of pandas to prevent existing pandas code from breaking.
### Does this PR introduce _any_ user-facing change?
Yes, `DataFrame.to_csv` can now accept "w", "w+", "a", "a+" as well, same as pandas.
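A hedged sketch of the accepted values (the output path is illustrative):
```py
import pyspark.pandas as ps

psdf = ps.DataFrame({"a": [1, 2, 3]})
psdf.to_csv("/tmp/example_csv", mode="w")          # pandas-style, expected to act like "overwrite"
psdf.to_csv("/tmp/example_csv", mode="a")          # pandas-style, expected to act like "append"
psdf.to_csv("/tmp/example_csv", mode="overwrite")  # existing Spark-style value still works
```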
### How was this patch tested?
Add the unit test and manually write the file with the new acceptable strings.
Closes #33414 from itholic/SPARK-35806.
Authored-by: itholic <haejoon.lee@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
Updating the PySpark SQL readwriter documentation to the level of detail provided by the Scala documentation.
### Why are the changes needed?
documentation clarity
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Only documentation change
Closes #33394 from dominikgehl/feature/SPARK-36181.
Authored-by: Dominik Gehl <dog@open.ch>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
The pyspark.sql.catalog APIs were missing from the documentation. This PR fixes this omission.
### Why are the changes needed?
Documentation consistency
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Documentation change only.
Closes #33392 from dominikgehl/feature/SPARK-36178.
Authored-by: Dominik Gehl <dog@open.ch>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
This PR is mostly a cleanup. It removes the unused `sync-branch` id in some steps and uses `set-env` instead of `set-output` to set an env.
This can be backported to branch-3.2 too.
### Why are the changes needed?
Cleanup.
### Does this PR introduce _any_ user-facing change?
No, dev-only.
### How was this patch tested?
CI in this PR should test it out.
Closes #33412 from HyukjinKwon/minor-cleanup.
Authored-by: Hyukjin Kwon <gurwls223@apache.org>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
This pull request introduces a helper class that simplifies usage of `Dataset.observe()` for batch datasets:
```scala
val observation = Observation("name")
val observed = ds.observe(observation, max($"id").as("max_id"))
observed.count()
val metrics = observation.get
```
### Why are the changes needed?
Currently, users are required to implement the `QueryExecutionListener` interface to retrieve the metrics, as well as apply some knowledge on threading and locking to pull the metrics over to the main thread. With the helper class, metrics can be retrieved from batch dataset processing with three lines of code (the action on the observed dataset does not count as a line of code here).
### Does this PR introduce _any_ user-facing change?
Yes, one new class and one `Dataset` method.
### How was this patch tested?
Adds a unit test to `DataFrameSuite`, similar to `"get observable metrics by callback"` in `DataFrameCallbackSuite`.
Closes #31905 from EnricoMi/branch-observation.
Authored-by: Enrico Minack <github@enrico.minack.dev>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
### What changes were proposed in this pull request?
This PR is a simple followup from https://github.com/apache/spark/pull/33376:
- It simplifies a bit by removing the default Scala version in the testing script (so we don't have to change it here in the future when we change the default Scala version).
- It calls the `change-scala-version.sh` script (when `SCALA_PROFILE` is explicitly specified).
### Why are the changes needed?
More refactoring. In addition, this change will be used at https://github.com/apache/spark/pull/33410
### Does this PR introduce _any_ user-facing change?
No, dev-only.
### How was this patch tested?
CI in this PR should test it out.
Closes #33411 from HyukjinKwon/SPARK-36166.
Authored-by: Hyukjin Kwon <gurwls223@apache.org>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
### What changes were proposed in this pull request?
This PR fixes two issues highlighted in https://issues.apache.org/jira/browse/SPARK-36163:
- JDBC connection provider propagates incorrect connection properties.
- Ambiguity when more than one JDBC connection provider is available.
I updated `BasicConnectionProvider` to use `jdbcOptions.asConnectionProperties` to remove JDBC data source specific options.
I also added `connectionProvider` data source option that specifies the name of the provider, e.g. `db2`, `presto`, to allow enforcing this specific provider in case of ambiguity.
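A hedged sketch of the new option (URL, table, and provider availability are illustrative):
```py
df = (spark.read.format("jdbc")
      .option("url", "jdbc:db2://db2host:50000/SAMPLE")
      .option("dbtable", "MYSCHEMA.MYTABLE")
      .option("connectionProvider", "db2")   # pin the provider by its short name
      .load())
```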
### Why are the changes needed?
Users can leverage `spark.sql.sources.disabledJdbcConnProviderList`, but it is cumbersome and requires them to disable all other providers, which could be problematic when using ambiguous providers in two or more different JDBC queries.
### Does this PR introduce _any_ user-facing change?
Yes.
This introduces a new JDBC data source option `connectionProvider` that allows users to select a specific JDBC connection provider based on the short name. I updated the SQL guide doc and README.
Before this change, the only way to resolve ambiguity was the SQL conf that disables all of the other JDBC connection providers. After this change, users will be able to specify the exact connection provider they need per data source.
### How was this patch tested?
I updated the existing `ConnectionProviderSuite` and added a new `BasicConnectionProviderSuite`.
Closes #33370 from sadikovi/fix-jdbc-conn-provider.
Authored-by: Ivan Sadikov <ivan.sadikov@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>