Commit graph

30511 commits

Yuanjian Li 0c31137172 [SPARK-35628][SS][FOLLOW-UP] Fix the consistent break on Scala 2.13 build
### What changes were proposed in this pull request?
Fix the build break on Scala 2.13 caused by PR https://github.com/apache/spark/pull/32767.

### Why are the changes needed?
Fix the build break.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Existing tests.

Closes #33084 from xuanyuanking/SPARK-35628-follow.

Authored-by: Yuanjian Li <yuanjian.li@databricks.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
2021-06-25 07:08:03 -07:00
Erik Krogen 866df69c62 [SPARK-35672][CORE][YARN] Pass user classpath entries to executors using config instead of command line
### What changes were proposed in this pull request?
Refactor the logic for constructing the user classpath from `yarn.ApplicationMaster` into `yarn.Client` so that it can be leveraged on the executor side as well, instead of having the driver construct it and pass it to the executor via command-line arguments. A new method, `getUserClassPath`, is added to `CoarseGrainedExecutorBackend` which defaults to `Nil` (consistent with the existing behavior where non-YARN resource managers do not configure the user classpath). `YarnCoarseGrainedExecutorBackend` overrides this to construct the user classpath from the existing `APP_JAR` and `SECONDARY_JARS` configs.
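
Below is a minimal, self-contained Scala sketch of the shape described above; the class and configuration names are simplified stand-ins for illustration, not the actual Spark internals.

```scala
import java.net.URL
import java.nio.file.Paths

// Generic backend: non-YARN resource managers configure no extra user classpath.
class ExecutorBackendSketch {
  def getUserClassPath: Seq[URL] = Nil
}

// YARN backend: rebuild the user classpath from APP_JAR / SECONDARY_JARS style
// configuration entries instead of parsing --user-class-path arguments.
class YarnExecutorBackendSketch(appJar: Option[String], secondaryJars: Seq[String])
    extends ExecutorBackendSketch {
  override def getUserClassPath: Seq[URL] =
    (appJar.toSeq ++ secondaryJars).map(p => Paths.get(p).toUri.toURL)
}
```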

### Why are the changes needed?
User-provided JARs are made available to executors using a custom classloader, so they do not appear on the standard Java classpath. Instead, they are passed as a list to the executor which then creates a classloader out of the URLs. Currently in the case of YARN, this list of JARs is crafted by the Driver (in `ExecutorRunnable`), which then passes the information to the executors (`CoarseGrainedExecutorBackend`) by specifying each JAR on the executor command line as `--user-class-path /path/to/myjar.jar`. This can cause extremely long argument lists when there are many JARs, which can cause the OS argument length to be exceeded, typically manifesting as the error message:

> /bin/bash: Argument list too long

A [Google search](https://www.google.com/search?q=spark%20%22%2Fbin%2Fbash%3A%20argument%20list%20too%20long%22&oq=spark%20%22%2Fbin%2Fbash%3A%20argument%20list%20too%20long%22) indicates that this is not a theoretical problem and afflicts real users, including ours. Passing this list using the configurations instead resolves this issue.

### Does this PR introduce _any_ user-facing change?
No, except for fixing the bug, allowing for larger JAR lists to be passed successfully. Configuration of JARs is identical to before.

### How was this patch tested?
New unit tests were added in `YarnClusterSuite`. Also, we have been running a similar fix internally for 4 months with great success.

Closes #32810 from xkrogen/xkrogen-SPARK-35672-classpath-scalable.

Authored-by: Erik Krogen <xkrogen@apache.org>
Signed-off-by: Thomas Graves <tgraves@apache.org>
2021-06-25 08:53:57 -05:00
Steve Loughran 36aaaa14c3 [SPARK-35878][CORE] Add fs.s3a.endpoint if unset and fs.s3a.endpoint.region is null
### What changes were proposed in this pull request?

This patches the hadoop configuration so that fs.s3a.endpoint is set to
s3.amazonaws.com if neither it nor fs.s3a.endpoint.region is set.

This stops S3A Filesystem creation failing with the error
"Unable to find a region via the region provider chain."
in some non-EC2 deployments.

See: HADOOP-17771.

When Spark options are propagated to the Hadoop configuration
in SparkHadoopUtil, the fs.s3a.endpoint value is set to
"s3.amazonaws.com" if it is unset and no explicit region
is set in fs.s3a.endpoint.region.
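
A minimal sketch of that fallback, written against the plain Hadoop `Configuration` API (the helper name is illustrative; the real change lives in `SparkHadoopUtil`):

```scala
import org.apache.hadoop.conf.Configuration

// Fall back to the central endpoint only when neither the endpoint nor the
// region is configured, so explicit user settings are never overridden.
def setS3AEndpointIfUnset(hadoopConf: Configuration): Unit = {
  val endpoint = hadoopConf.getTrimmed("fs.s3a.endpoint", "")
  val region = hadoopConf.getTrimmed("fs.s3a.endpoint.region", "")
  if (endpoint.isEmpty && region.isEmpty) {
    hadoopConf.set("fs.s3a.endpoint", "s3.amazonaws.com")
  }
}
```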

### Why are the changes needed?

A regression in Hadoop 3.3.1 has surfaced which causes S3A filesystem
instantiation to fail outside EC2 deployments if the host lacks
a CLI configuration in ~/.aws/config declaring the region, or
the `AWS_REGION` environment variable

HADOOP-17771 fixes this in Hadoop-3.3.2+, but
this spark patch will correct the behavior when running
Spark with the 3.3.1 artifacts.

It is harmless for older versions and compatible
with hadoop releases containing the HADOOP-17771
fix.

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

New tests to verify propagation logic from spark conf to hadoop conf.

Closes #33064 from steveloughran/SPARK-35878-regions.

Authored-by: Steve Loughran <stevel@cloudera.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
2021-06-25 05:24:55 -07:00
Gengliang Wang 9814cf8853 [SPARK-35889][SQL] Support adding TimestampWithoutTZ with Interval types
### What changes were proposed in this pull request?

Support the following operations:

- TimestampWithoutTZ + Calendar interval
- TimestampWithoutTZ + Year-Month interval
- TimestampWithoutTZ + Daytime interval

### Why are the changes needed?

Support basic '+' operator for timestamp without time zone type.

### Does this PR introduce _any_ user-facing change?

No, the timestamp without time zone type is not released yet.

### How was this patch tested?

Unit tests

Closes #33076 from gengliangwang/addForNewTS.

Authored-by: Gengliang Wang <gengliang@apache.org>
Signed-off-by: Gengliang Wang <gengliang@apache.org>
2021-06-25 19:58:42 +08:00
Yuanjian Li f2029e7442 [SPARK-35628][SS] RocksDBFileManager - load checkpoint from DFS
### What changes were proposed in this pull request?
The implementation for the load operation of RocksDBFileManager.

### Why are the changes needed?
Provide the functionality of loading all necessary files for specific checkpoint versions from DFS to the given local directory.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
New UT added.

Closes #32767 from xuanyuanking/SPARK-35628.

Authored-by: Yuanjian Li <yuanjian.li@databricks.com>
Signed-off-by: Jungtaek Lim <kabhwan.opensource@gmail.com>
2021-06-25 18:38:26 +09:00
Wenchen Fan c0cfbb1743 [SPARK-35884][SQL] EXPLAIN FORMATTED for AQE
### What changes were proposed in this pull request?

This is a followup of https://github.com/apache/spark/pull/29137, which has some issues when running EXPLAIN FORMATTED:
```
AdaptiveSparkPlan (13)
+- == Final Plan ==
   * HashAggregate (12)
   +- CustomShuffleReader (11)
      +- ShuffleQueryStage (10)
         +- Exchange (9)
            +- * HashAggregate (8)
               +- * Project (7)
                  +- * BroadcastHashJoin Inner BuildRight (6)
                     :- * LocalTableScan (1)
                     +- BroadcastQueryStage (5)
                        +- BroadcastExchange (4)
                           +- * Project (3)
                              +- * LocalTableScan (2)
+- == Initial Plan ==
   HashAggregate (unknown)
   +- Exchange (unknown)
      +- HashAggregate (unknown)
         +- Project (unknown)
            +- BroadcastHashJoin Inner BuildRight (unknown)
               :- Project (unknown)
               :  +- LocalTableScan (unknown)
               +- BroadcastExchange (unknown)
                  +- Project (3)
                     +- LocalTableScan (2)
```

Some nodes do not have an ID and show `unknown`. This PR fixes the issue.

### Why are the changes needed?

bug fix

### Does this PR introduce _any_ user-facing change?

EXPLAIN FORMATTED with AQE displays correctly.

### How was this patch tested?

new tests

Closes #33067 from cloud-fan/explain.

Authored-by: Wenchen Fan <wenchen@databricks.com>
Signed-off-by: Liang-Chi Hsieh <viirya@gmail.com>
2021-06-25 00:18:26 -07:00
Terry Kim f1ad34558c [SPARK-35883][SQL] Migrate ALTER TABLE RENAME COLUMN command to use UnresolvedTable to resolve the identifier
### What changes were proposed in this pull request?

This PR proposes to migrate the following `ALTER TABLE ... RENAME COLUMN` command to use `UnresolvedTable` as a `child` to resolve the table identifier. This allows consistent resolution rules (temp view first, etc.) to be applied for both v1/v2 commands. More info about the consistent resolution rule proposal can be found in [JIRA](https://issues.apache.org/jira/browse/SPARK-29900) or [proposal doc](https://docs.google.com/document/d/1hvLjGA8y_W_hhilpngXVub1Ebv8RsMap986nENCFnrg/edit?usp=sharing).

### Why are the changes needed?

This is a part of effort to make the relation lookup behavior consistent: [SPARK-29900](https://issues.apache.org/jira/browse/SPARK-29900).

### Does this PR introduce _any_ user-facing change?

After this PR, the above `ALTER TABLE ... RENAME COLUMN` commands will have a consistent resolution behavior.

### How was this patch tested?

Updated existing tests.

Closes #33066 from imback82/alter_rename.

Authored-by: Terry Kim <yuminkim@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2021-06-25 05:53:56 +00:00
Takuya UESHIN 6497ac3585 [SPARK-35471][PYTHON] Fix disallow_untyped_defs mypy checks for pyspark.pandas.frame
### What changes were proposed in this pull request?

Adds more type annotations in the file `python/pyspark/pandas/frame.py` and fixes the mypy check failures.

### Why are the changes needed?

We should enable more disallow_untyped_defs mypy checks.

### Does this PR introduce _any_ user-facing change?

Yes.
This PR adds more type annotations in pandas APIs on Spark module, which can impact interaction with development tools for users.

### How was this patch tested?

The mypy check with a new configuration and existing tests should pass.

Closes #33073 from ueshin/issues/SPARK-35471/disallow_untyped_defs_frame.

Authored-by: Takuya UESHIN <ueshin@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-06-25 14:41:58 +09:00
William Hyun c6555f1845 [SPARK-35887][BUILD] Find and set JAVA_HOME from javac location
### What changes were proposed in this pull request?
This PR aims to find and set JAVA_HOME from the javac location.

### Why are the changes needed?
Since SPARK-35850, Maven compilation fails with Java 8 when JAVA_HOME is not set and the `java` on the path is a JRE java.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Manually ran mvn on a vanilla Ubuntu system.

Closes #33075 from williamhyun/util.

Authored-by: William Hyun <william@apache.org>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
2021-06-24 21:09:18 -07:00
Kousuke Saruta c562c1674e [SPARK-34320][SQL][FOLLOWUP] Modify V2JDBCTest to follow the change of the error message
### What changes were proposed in this pull request?

This is a followup PR for SPARK-34320 (#32854).
That PR changed the error message of `ALTER TABLE` but `V2JDBCTest` didn't comply with the change.

### Why are the changes needed?

To fix `v2.*JDBCSuite` failures.
```
[info] - SPARK-33034: ALTER TABLE ... add new columns (173 milliseconds)
[info] - SPARK-33034: ALTER TABLE ... drop column *** FAILED *** (126 milliseconds)
[info]   "Cannot delete missing field bad_column in postgresql.alt_table schema: root
[info]    |-- C2: string (nullable = true)
[info]   ; line 1 pos 0;
[info]   'AlterTableDropColumns [unresolvedfieldname(bad_column)]
[info]   +- ResolvedTable org.apache.spark.sql.execution.datasources.v2.jdbc.JDBCTableCatalog7f4b7516, alt_table, JDBCTable(alt_table,StructType(StructField(C2,StringType,true)),org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions5842301d), [C2#1879]
[info]   " did not contain "Cannot delete missing field bad_column in alt_table schema" (V2JDBCTest.scala:106)
[info]   org.scalatest.exceptions.TestFailedException:
[info]   at org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:472)
[info]   at org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:471)
[info]   at org.scalatest.Assertions$.newAssertionFailedException(Assertions.scala:1231)
[info]   at org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:1295)
[info]   at org.apache.spark.sql.jdbc.v2.V2JDBCTest.$anonfun$$init$$6(V2JDBCTest.scala:106)
[info]   at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
[info]   at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1461)
[info]   at org.apache.spark.sql.test.SQLTestUtilsBase.withTable(SQLTestUtils.scala:305)
[info]   at org.apache.spark.sql.test.SQLTestUtilsBase.withTable$(SQLTestUtils.scala:303)
[info]   at org.apache.spark.sql.jdbc.DockerJDBCIntegrationSuite.withTable(DockerJDBCIntegrationSuite.scala:95)
[info]   at org.apache.spark.sql.jdbc.v2.V2JDBCTest.$anonfun$$init$$5(V2JDBCTest.scala:95)
[info]   at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
[info]   at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
[info]   at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
[info]   at org.scalatest.Transformer.apply(Transformer.scala:22)
[info]   at org.scalatest.Transformer.apply(Transformer.scala:20)
[info]   at org.scalatest.funsuite.AnyFunSuiteLike$$anon$1.apply(AnyFunSuiteLike.scala:190)
[info]   at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:190)
[info]   at org.scalatest.funsuite.AnyFunSuiteLike.invokeWithFixture$1(AnyFunSuiteLike.scala:188)
[info]   at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTest$1(AnyFunSuiteLike.scala:200)
[info]   at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
[info]   at org.scalatest.funsuite.AnyFunSuiteLike.runTest(AnyFunSuiteLike.scala:200)
[info]   at org.scalatest.funsuite.AnyFunSuiteLike.runTest$(AnyFunSuiteLike.scala:182)
[info]   at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:62)
[info]   at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:234)
[info]   at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:227)
[info]   at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:62)
[info]   at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTests$1(AnyFunSuiteLike.scala:233)
[info]   at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:413)
[info]   at scala.collection.immutable.List.foreach(List.scala:431)
[info]   at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
[info]   at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:396)
[info]   at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:475)
[info]   at org.scalatest.funsuite.AnyFunSuiteLike.runTests(AnyFunSuiteLike.scala:233)
[info]   at org.scalatest.funsuite.AnyFunSuiteLike.runTests$(AnyFunSuiteLike.scala:232)
[info]   at org.scalatest.funsuite.AnyFunSuite.runTests(AnyFunSuite.scala:1563)
[info]   at org.scalatest.Suite.run(Suite.scala:1112)
[info]   at org.scalatest.Suite.run$(Suite.scala:1094)
[info]   at org.scalatest.funsuite.AnyFunSuite.org$scalatest$funsuite$AnyFunSuiteLike$$super$run(AnyFunSuite.scala:1563)
[info]   at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$run$1(AnyFunSuiteLike.scala:237)
[info]   at org.scalatest.SuperEngine.runImpl(Engine.scala:535)
[info]   at org.scalatest.funsuite.AnyFunSuiteLike.run(AnyFunSuiteLike.scala:237)
[info]   at org.scalatest.funsuite.AnyFunSuiteLike.run$(AnyFunSuiteLike.scala:236)
[info]   at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:62)
[info]   at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
[info]   at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
[info]   at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
[info]   at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:62)
[info]   at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:318)
[info]   at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:513)
[info]   at sbt.ForkMain$Run.lambda$runTest$1(ForkMain.java:413)
[info]   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info]   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[info]   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[info]   at java.lang.Thread.run(Thread.java:748)
[info] - SPARK-33034: ALTER TABLE ... update column type (122 milliseconds)
[info] - SPARK-33034: ALTER TABLE ... rename column (93 milliseconds)
[info] - SPARK-33034: ALTER TABLE ... update column nullability (92 milliseconds)
[info] - CREATE TABLE with table comment (38 milliseconds)
[info] - CREATE TABLE with table property (52 milliseconds)
[info] MySQLIntegrationSuite:
[info] - Basic test (61 milliseconds)
[info] - Numeric types (67 milliseconds)
[info] - Date types (59 milliseconds)
[info] - String types (50 milliseconds)
[info] - Basic write test (216 milliseconds)
[info] - query JDBC option (64 milliseconds)
[info] Run completed in 19 minutes, 43 seconds.
[info] Total number of tests run: 89
[info] Suites: completed 14, aborted 0
[info] Tests: succeeded 84, failed 5, canceled 0, ignored 0, pending 0
[info] *** 5 TESTS FAILED ***
[error] Failed tests:
[error] 	org.apache.spark.sql.jdbc.v2.OracleIntegrationSuite
[error] 	org.apache.spark.sql.jdbc.v2.MsSqlServerIntegrationSuite
[error] 	org.apache.spark.sql.jdbc.v2.DB2IntegrationSuite
[error] 	org.apache.spark.sql.jdbc.v2.MySQLIntegrationSuite
[error] 	org.apache.spark.sql.jdbc.v2.PostgresIntegrationSuite
[error] (docker-integration-tests / Test / test) sbt.TestsFailedException: Tests unsuccessful
[error] Total time: 1223 s (20:23), completed Jun 25, 2021 1:31:04 AM
```

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

docker-integration-tests on GA.

Closes #33074 from sarutak/followup-SPARK-34320.

Authored-by: Kousuke Saruta <sarutak@oss.nttdata.com>
Signed-off-by: Kousuke Saruta <sarutak@oss.nttdata.com>
2021-06-25 12:58:38 +09:00
Dongjoon Hyun 59ec7a20b0 [SPARK-35885][K8S][R] Use keyserver.ubuntu.com as a keyserver for CRAN
### What changes were proposed in this pull request?

This PR aims to use `keyserver.ubuntu.com` as a keyserver for CRAN.

### Why are the changes needed?

Currently, both servers fail and K8s IT fails at SparkR image building phase.
- https://amplab.cs.berkeley.edu/jenkins/view/Spark%20K8s%20Builds/job/spark-master-test-k8s/801/console
```
$ docker run -it --rm openjdk:11 /bin/bash
root@3e89a8d05378:/# echo "deb http://cloud.r-project.org/bin/linux/debian buster-cran35/" >> /etc/apt/sources.list
root@3e89a8d05378:/# (apt-key adv --keyserver keys.gnupg.net --recv-key 'E19F5F87128899B192B1A2C2AD5F960A256A04AF' || apt-key adv --keyserver keys.openpgp.org --recv-key 'E19F5F87128899B192B1A2C2AD5F960A256A04AF')
Executing: /tmp/apt-key-gpghome.8lNIiUuhoE/gpg.1.sh --keyserver keys.gnupg.net --recv-key E19F5F87128899B192B1A2C2AD5F960A256A04AF
gpg: keyserver receive failed: No name
Executing: /tmp/apt-key-gpghome.stxb8XUlx8/gpg.1.sh --keyserver keys.openpgp.org --recv-key E19F5F87128899B192B1A2C2AD5F960A256A04AF
gpg: key AD5F960A256A04AF: new key but contains no user ID - skipped
gpg: Total number processed: 1
gpg:           w/o user IDs: 1
root@3e89a8d05378:/# apt-get update
...
Err:3 http://cloud.r-project.org/bin/linux/debian buster-cran35/ InRelease
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY FCAE2A0E115C3D8A
...
W: GPG error: http://cloud.r-project.org/bin/linux/debian buster-cran35/ InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY FCAE2A0E115C3D8A
E: The repository 'http://cloud.r-project.org/bin/linux/debian buster-cran35/ InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
```

`keyserver.ubuntu.com` is a recommended backup keyserver in the CRAN documentation.
- http://cloud.r-project.org/bin/linux/debian/
```
$ docker run -it --rm openjdk:11 /bin/bash
root@c9b183e45ffe:/# echo "deb http://cloud.r-project.org/bin/linux/debian buster-cran35/" >> /etc/apt/sources.list
root@c9b183e45ffe:/# apt-key adv --keyserver keyserver.ubuntu.com --recv-key 'E19F5F87128899B192B1A2C2AD5F960A256A04AF'
Executing: /tmp/apt-key-gpghome.P6cxYkOge7/gpg.1.sh --keyserver keyserver.ubuntu.com --recv-key E19F5F87128899B192B1A2C2AD5F960A256A04AF
gpg: key AD5F960A256A04AF: public key "Johannes Ranke (Wissenschaftlicher Berater) <johannes.ranke@jrwb.de>" imported
gpg: Total number processed: 1
gpg:               imported: 1
root@c9b183e45ffe:/# apt-get update
Get:1 http://deb.debian.org/debian buster InRelease [122 kB]
Get:2 http://security.debian.org/debian-security buster/updates InRelease [65.4 kB]
Get:3 http://cloud.r-project.org/bin/linux/debian buster-cran35/ InRelease [4375 B]
Get:4 http://deb.debian.org/debian buster-updates InRelease [51.9 kB]
Get:5 http://cloud.r-project.org/bin/linux/debian buster-cran35/ Packages [53.3 kB]
Get:6 http://security.debian.org/debian-security buster/updates/main arm64 Packages [287 kB]
Get:7 http://deb.debian.org/debian buster/main arm64 Packages [7735 kB]
Get:8 http://deb.debian.org/debian buster-updates/main arm64 Packages [14.5 kB]
Fetched 8334 kB in 2s (4537 kB/s)
Reading package lists... Done
```

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Pass K8s IT Jenkins SparkR image building. Or, manually do the following.

```
$ bin/docker-image-tool.sh -R kubernetes/dockerfiles/spark/bindings/R/Dockerfile build
```

Closes #33071 from dongjoon-hyun/SPARK-35885.

Authored-by: Dongjoon Hyun <dongjoon@apache.org>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
2021-06-24 19:51:49 -07:00
Carlos Peña c22f17c573 [DOCS][MINOR] Update sql-performance-tuning.md
### What changes were proposed in this pull request?

Update "Caching Data in Memory" section, add suggestion to call DataFrame `unpersist` method to make it consistent with previous suggestion of using `persist` method.

### Why are the changes needed?

Keep documentation consistent.

### Does this PR introduce _any_ user-facing change?

Yes, fixes the user-facing docs.

### How was this patch tested?

Manually.

Closes #33069 from Silverlight42/caching-data-doc.

Authored-by: Carlos Peña <Cdpm42@gmail.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-06-25 11:19:39 +09:00
Cheng Su 2da42ca3b4 [SPARK-33298][CORE] Introduce new API to FileCommitProtocol allow flexible file naming
### What changes were proposed in this pull request?

This PR introduces a new set of APIs, `newTaskTempFile` and `newTaskTempFileAbsPath`, inside `FileCommitProtocol` to allow more flexible file naming of Spark output. The major change is to pass a `FileNameSpec` into `FileCommitProtocol`, instead of the original `ext` (the spec carries both `prefix` and `ext`), to allow individual `FileCommitProtocol` implementations to come up with more flexible file names (e.g. with a custom `prefix`) for Hive/Presto bucketing - https://github.com/apache/spark/pull/30003. Default implementations of the added APIs are provided, so no existing implementation of `FileCommitProtocol` is broken.
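
A self-contained sketch of the API shape described above; the names mirror this description, and the real signatures in `FileCommitProtocol` may differ. The key point is that the new spec-based overload has a default implementation delegating to the old ext-based one, so existing implementations keep working.

```scala
// Illustrative only: a FileNameSpec carries a prefix in addition to the old ext.
case class FileNameSpec(prefix: String, suffix: String)

trait CommitProtocolSketch {
  // Pre-existing API: implementations only control the file extension.
  def newTaskTempFile(dir: Option[String], ext: String): String

  // New API: the default delegates to the old method, so existing
  // implementations are not broken by the addition.
  def newTaskTempFile(dir: Option[String], spec: FileNameSpec): String =
    newTaskTempFile(dir, spec.suffix)
}
```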

### Why are the changes needed?

To make the commit protocol more flexible in terms of Spark output file names.
Pre-requisite of https://github.com/apache/spark/pull/30003.

### Does this PR introduce _any_ user-facing change?

Yes, for developers who implement/run a custom implementation of `FileCommitProtocol`. They can choose to implement the newly added APIs.

### How was this patch tested?

Existing unit tests as this is just adding an API.

Closes #33012 from c21/commit-protocol-api.

Authored-by: Cheng Su <chengsu@fb.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
2021-06-24 17:10:54 -07:00
Wenchen Fan 95ba000279 [SPARK-35872][INFRA] Automatize some steps to finalize the release
### What changes were proposed in this pull request?

After the RC vote, the release manager still needs to do a lot of work to finalize the release. This PR updates the script to automate some of the steps:
1. create the final git tag
2. publish to pypi
3. publish docs to spark-website
4. move the release binaries from dev directory to release directory.
5. update the KEYS file

### Why are the changes needed?

Eases the work of the release manager.

### Does this PR introduce _any_ user-facing change?

no

### How was this patch tested?

Tested with the recent 3.0.3 release.

Closes #33055 from cloud-fan/release.

Authored-by: Wenchen Fan <wenchen@databricks.com>
Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
2021-06-24 13:25:41 -07:00
Kousuke Saruta 156b9b5d14 [SPARK-35736][SPARK-35774][SQL][FOLLOWUP] Prohibit to specify the same units for FROM and TO with unit-to-unit interval syntax
### What changes were proposed in this pull request?

This PR changes the behavior of the unit-to-unit interval syntax to prohibit specifying the same unit for both FROM and TO.
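
A hedged illustration of the prohibited case, assuming a `spark-shell` session named `spark`:

```scala
// Using the same unit on both sides of TO is now rejected at parse time.
spark.sql("SELECT INTERVAL '3' DAY TO DAY")                 // error after this change
// A single unit, or two distinct units, is still accepted.
spark.sql("SELECT INTERVAL '3' DAY").show()
spark.sql("SELECT INTERVAL '1 02:03' DAY TO MINUTE").show()
```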

### Why are the changes needed?

For ANSI compliance.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

New test.

Closes #33057 from sarutak/prohibit-unit-pattern.

Authored-by: Kousuke Saruta <sarutak@oss.nttdata.com>
Signed-off-by: Max Gekk <max.gekk@gmail.com>
2021-06-24 23:13:31 +03:00
Kousuke Saruta 7b78d56f34 [SPARK-35870][BUILD] Upgrade Jetty to 9.4.42
### What changes were proposed in this pull request?

This PR upgrades Jetty to `9.4.42`.
In the current master, `9.4.40` is used.
`9.4.41` and `9.4.42` include the following updates.
https://github.com/eclipse/jetty.project/releases/tag/jetty-9.4.41.v20210516
https://github.com/eclipse/jetty.project/releases/tag/jetty-9.4.42.v20210604

### Why are the changes needed?

Mainly for CVE-2021-28169.
https://nvd.nist.gov/vuln/detail/CVE-2021-28169
This CVE probably has little effect on Spark, but we upgrade just in case.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

CI.

Closes #33053 from sarutak/upgrade-jetty-9.4.42.

Authored-by: Kousuke Saruta <sarutak@oss.nttdata.com>
Signed-off-by: Kousuke Saruta <sarutak@oss.nttdata.com>
2021-06-25 03:32:32 +09:00
Adam Binford 14b1836313 [SPARK-35290][SQL] Append new nested struct fields rather than sort for unionByName with null filling
### What changes were proposed in this pull request?

This PR changes the unionByName with null filling logic to append new nested struct fields from the right side of the union to the schema versus sorting fields alphabetically. It removes the need to use UpdateField expressions, and just directly projects new nested structs from each side of the union with the correct schema. This changes the union'd schema from being alphabetically sorted previously to now "left dominant", where the fields from the left side of the union are included and then the missing ones from the right are added in the same order found originally.
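
A hedged usage example of the behavior described above, using the public `unionByName` API with `allowMissingColumns = true`:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

// Nested structs with partially overlapping fields on each side of the union.
val left = Seq((1, 2)).toDF("a", "b").selectExpr("struct(a, b) AS s")
val right = Seq((3, 4)).toDF("b", "c").selectExpr("struct(b, c) AS s")

// Missing nested fields are null-filled; the merged struct keeps the left-hand
// field order (a, b) and appends the right-only field (c) instead of sorting.
left.unionByName(right, allowMissingColumns = true).printSchema()
```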

### Why are the changes needed?

Certain nested structs would cause unionByName with null filling to error out due to part of the logic for rewriting the expression tree to sort the structs.

### Does this PR introduce _any_ user-facing change?

Yes, nested struct fields will be in a different order after unionByName with null filling than before, though this shouldn't cause much practical difference.

### How was this patch tested?

Updated existing tests based on the new StructField ordering and added a new test for the case that was broken originally.

Closes #33040 from Kimahriman/union-by-name-struct-order.

Authored-by: Adam Binford <adamq43@gmail.com>
Signed-off-by: Liang-Chi Hsieh <viirya@gmail.com>
2021-06-24 09:21:30 -07:00
Terry Kim 5b4816cfc8 [SPARK-34320][SQL] Migrate ALTER TABLE DROP COLUMNS commands to use UnresolvedTable to resolve the identifier
### What changes were proposed in this pull request?

This PR proposes to migrate the following `ALTER TABLE ... DROP COLUMNS` command to use `UnresolvedTable` as a `child` to resolve the table identifier. This allows consistent resolution rules (temp view first, etc.) to be applied for both v1/v2 commands. More info about the consistent resolution rule proposal can be found in [JIRA](https://issues.apache.org/jira/browse/SPARK-29900) or [proposal doc](https://docs.google.com/document/d/1hvLjGA8y_W_hhilpngXVub1Ebv8RsMap986nENCFnrg/edit?usp=sharing).

### Why are the changes needed?

This is a part of effort to make the relation lookup behavior consistent: [SPARK-29900](https://issues.apache.org/jira/browse/SPARK-29900).

### Does this PR introduce _any_ user-facing change?

After this PR, the above `ALTER TABLE ... DROP COLUMNS` commands will have a consistent resolution behavior.

### How was this patch tested?

Updated existing tests.

Closes #32854 from imback82/alter_alternative.

Authored-by: Terry Kim <yuminkim@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2021-06-24 14:59:25 +00:00
Angerszhuuuu de35675c61 [SPARK-35871][SQL] Literal.create(value, dataType) should support fields
### What changes were proposed in this pull request?
Currently, `Literal.create(data, dataType)` is not correct for `Period` to `YearMonthIntervalType` and for `Duration` to `DayTimeIntervalType`.

If the data is a `Period`/`Duration`, it creates a converter for the default `YearMonthIntervalType`/`DayTimeIntervalType`, ignoring the requested fields, so the result is not correct. This PR fixes the bug.

### Why are the changes needed?
Fix a bug when using Literal.create().

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Added UT

Closes #33056 from AngersZhuuuu/SPARK-35871.

Authored-by: Angerszhuuuu <angers.zhu@gmail.com>
Signed-off-by: Max Gekk <max.gekk@gmail.com>
2021-06-24 17:36:48 +03:00
Max Gekk d40a1a2552 Revert "[SPARK-35728][SQL][TESTS] Check multiply/divide of day-time intervals of any fields by numeric"
### What changes were proposed in this pull request?
Revert 8a1995f936

### Why are the changes needed?
The merged test doesn't actually check different interval fields. https://github.com/apache/spark/pull/33056 needs to be applied first.

### Does this PR introduce _any_ user-facing change?
No. This is tests.

### How was this patch tested?
By existing GAs.

Closes #33060 from MaxGekk/revert-Peng-Lei-SPARK-35728.

Authored-by: Max Gekk <max.gekk@gmail.com>
Signed-off-by: Max Gekk <max.gekk@gmail.com>
2021-06-24 14:36:07 +03:00
Max Gekk 345d3db83d Revert "[SPARK-35778][SQL][TESTS] Check multiply/divide of year month interval of any fields by numeric"
### What changes were proposed in this pull request?
Revert 3904c0edba

### Why are the changes needed?
The merged test doesn't actually check different interval fields. https://github.com/apache/spark/pull/33056 needs to be applied first.

### Does this PR introduce _any_ user-facing change?
No. This is tests.

### How was this patch tested?
By existing GAs.

Closes #33059 from MaxGekk/revert-Peng-Lei-SPARK-35778.

Authored-by: Max Gekk <max.gekk@gmail.com>
Signed-off-by: Max Gekk <max.gekk@gmail.com>
2021-06-24 14:34:42 +03:00
Takuya UESHIN cfcfbca965 [SPARK-35476][PYTHON] Fix disallow_untyped_defs mypy checks for pyspark.pandas.series
### What changes were proposed in this pull request?

Adds more type annotations in the file `python/pyspark/pandas/series.py` and fixes the mypy check failures.

### Why are the changes needed?

We should enable more disallow_untyped_defs mypy checks.

### Does this PR introduce _any_ user-facing change?

Yes.
This PR adds more type annotations in pandas APIs on Spark module, which can impact interaction with development tools for users.

### How was this patch tested?

The mypy check with a new configuration and existing tests should pass.

Closes #33045 from ueshin/issues/SPARK-35476/disallow_untyped_defs_series.

Authored-by: Takuya UESHIN <ueshin@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-06-24 19:32:33 +09:00
PengLei 3904c0edba [SPARK-35778][SQL][TESTS] Check multiply/divide of year month interval of any fields by numeric
### What changes were proposed in this pull request?
Check multiply/divide of year-month intervals of any fields by numeric.

### Why are the changes needed?
To improve test coverage.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Expanded existing test cases.

Closes #33051 from Peng-Lei/SPARK-35778.

Authored-by: PengLei <18066542445@189.cn>
Signed-off-by: Max Gekk <max.gekk@gmail.com>
2021-06-24 12:25:06 +03:00
PengLei 8a1995f936 [SPARK-35728][SQL][TESTS] Check multiply/divide of day-time intervals of any fields by numeric
### What changes were proposed in this pull request?
1. The existing test case only covers DayTimeIntervalType() */ numeric.
2. Add test cases for the following intervals */ numeric:
   INTERVAL DAY
   INTERVAL DAY TO HOUR
   INTERVAL DAY TO MINUTE
   INTERVAL HOUR
   INTERVAL HOUR TO MINUTE
   INTERVAL HOUR TO SECOND
   INTERVAL MINUTE
   INTERVAL MINUTE TO SECOND
   INTERVAL SECOND

### Why are the changes needed?
Add testcase coverage.

### Does this PR introduce _any_ user-facing change?
no

### How was this patch tested?
Existing test cases.

Closes #33014 from Peng-Lei/SPARK-35728.

Authored-by: PengLei <18066542445@189.cn>
Signed-off-by: Max Gekk <max.gekk@gmail.com>
2021-06-24 12:11:47 +03:00
ulysses-you 1295e8876c [SPARK-35786][SQL] Add a new operator to distingush if AQE can optimize safely
### What changes were proposed in this pull request?

* Add a new repartition operator `RebalanceRepartition`.
* Support a new hint `REBALANCE`

After this patch, user can run this query:
```sql
SELECT /*+ REBALANCE(c) */ * FROM t
```

### Why are the changes needed?

Add a new hint to distinguish whether we can optimize it safely.

This new hint lets AQE optimize with `CustomShuffleReaderExec` safely. Currently, AQE can only coalesce shuffle partitions but cannot expand shuffle partitions due to the semantics of output partitioning.
Let's say we have a query:
```sql
SELECT /*+ REPARTITION(col) */ * FROM t
```
AQE cannot expand the shuffle partitions even if `col` is skewed, because expanding shuffle partitions would break the hashed output partitioning of `RepartitionByExpression`. But if the query uses the `REBALANCE` hint, AQE can optimize it without considering the semantics of output partitioning.

### Does this PR introduce _any_ user-facing change?

Yes, a new hint.

### How was this patch tested?

Add test.

Closes #32932 from ulysses-you/SPARK-35786.

Authored-by: ulysses-you <ulyssesyou18@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2021-06-24 09:04:38 +00:00
Hyukjin Kwon 5a7686a393 [SPARK-35301][PYTHON][DOCS] Document migration guide from Koalas to pandas APIs on Spark
### What changes were proposed in this pull request?

This PR proposes to add a migration guide for legacy Koalas users in pandas API on Spark.

### Why are the changes needed?

For easier migration.

### Does this PR introduce _any_ user-facing change?

Yes, this adds a new page for migration from Koalas.

### How was this patch tested?

Manually built the docs and checked manually.

Closes #33050 from HyukjinKwon/SPARK-35301.

Authored-by: Hyukjin Kwon <gurwls223@apache.org>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-06-24 17:58:09 +09:00
dgd-contributor 5b9c5c126f [SPARK-35841][SQL] Casting string to decimal type doesn't work if the…
… sum of the digits is greater than 38
### What changes were proposed in this pull request?
Since Spark 3.1.1, NULL is returned when casting a string with many decimal places to a decimal type. If the sum of the digits before and after the decimal point is less than 39, a value is returned. From 39 digits, however, NULL is returned.
This worked until Spark 3.0.X.

Code to reproduce:

A string with 2 digits before the decimal point and 37 digits after the decimal point returns null:

```
// Assumes a SparkSession named `spark` (e.g. in spark-shell).
import org.apache.spark.sql.functions.col
import spark.implicits._

val data = Seq(
  "28.9259999999999983799625624669715762138",
  "28.925999999999998379962562466971576213",
  "2.9259999999999983799625624669715762138"
)
val df = data.toDF("num")
df.withColumn("numConverted", col("num").cast("decimal(38, 5)")).show()
```

Before this pull request, the result is:

```
+--------------------+------------+
|                 num|numConverted|
+--------------------+------------+
|28.92599999999999...|        null|
|28.92599999999999...|    28.92600|
|2.925999999999998...|     2.92600|
+--------------------+------------+
```

The correct result should be:

```
+--------------------+------------+
|                 num|numConverted|
+--------------------+------------+
|28.92599999999999...|    28.92600|
|28.92599999999999...|    28.92600|
|2.925999999999998...|     2.92600|
+--------------------+------------+
```

The problem has existed since https://issues.apache.org/jira/browse/SPARK-32706. It is because the fast-fail check uses the precision length, while it should only check the length of the whole-number part of the input value, not the precision length.

### Why are the changes needed?
correctness

### Does this PR introduce _any_ user-facing change?
no

### How was this patch tested?
test added

Closes #33011 from dgd-contributor/SPARK-35841_castStringToDecimalTypeError.

Authored-by: dgd-contributor <dgd_contributor@viettel.com.vn>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2021-06-24 16:44:58 +08:00
ulysses-you ff9ba89dcb [SPARK-35282][SQL][FOLLOWUP] Simplify condition code of shuffled hash join
### What changes were proposed in this pull request?

Simplify the condition code which is introduced by [SPARK-35282](https://issues.apache.org/jira/browse/SPARK-35282).

### Why are the changes needed?

Reduce the code size and make code more readable.

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

Pass CI

Closes #33046 from ulysses-you/simplify-shj.

Authored-by: ulysses-you <ulyssesyou18@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2021-06-24 08:42:24 +00:00
Angerszhuuuu 5e77ca8071 [SPARK-35768][SQL] Take into account year-month interval fields in cast
### What changes were proposed in this pull request?
Support taking the year-month interval fields into account in cast.

##### Rules for casting to a target YearMonthIntervalType

| string | demo | strict target type | months |
|---|---|---|---|
| [+\|-]y-m | 1-1 | YearMonthIntervalType(YEAR, MONTH) | 13 |
| [+\|-]y | 1 | YearMonthIntervalType(YEAR, YEAR) | 12 |
| [+\|-]m | 1 | YearMonthIntervalType(MONTH, MONTH) | 1 |
| INTERVAL [+\|-]'[+\|-]y-m' YEAR TO MONTH | interval '1-1' year to month | YearMonthIntervalType(YEAR, MONTH) | 13 |
| INTERVAL [+\|-]'[+\|-]m' MONTH | interval '1' month | YearMonthIntervalType(MONTH, MONTH) | 1 |
| INTERVAL [+\|-]'[+\|-]y' YEAR | interval '1' year | YearMonthIntervalType(YEAR, YEAR) | 12 |
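
A hedged illustration of the rules in the table above, assuming a `spark-shell` session named `spark`:

```scala
// The target type's field list now drives the accepted string format.
spark.sql("SELECT CAST('1-1' AS INTERVAL YEAR TO MONTH)").show()
spark.sql("SELECT CAST('1' AS INTERVAL YEAR)").show()
spark.sql("SELECT CAST('1' AS INTERVAL MONTH)").show()
```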

### Why are the changes needed?
Support taking the year-month interval fields into account in cast.

### Does this PR introduce _any_ user-facing change?
Users can use `cast(str, YearMonthIntervalType(YEAR, YEAR))`, etc.

### How was this patch tested?
Added UT

Closes #32940 from AngersZhuuuu/SPARK-35768.

Lead-authored-by: Angerszhuuuu <angers.zhu@gmail.com>
Co-authored-by: AngersZhuuuu <angers.zhu@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2021-06-24 07:48:47 +00:00
AngersZhuuuu 7d0786f535 [SPARK-35730][SQL][TESTS] Check all day-time interval types in UDF
### What changes were proposed in this pull request?
Check all day-time interval types in UDF.

### Why are the changes needed?
New checks should improve test coverage.

### Does this PR introduce _any_ user-facing change?
Yes but `DayTimeIntervalType` has not been released yet.

### How was this patch tested?
Existing UTs.

Closes #33047 from AngersZhuuuu/SPARK-35730.

Lead-authored-by: AngersZhuuuu <angers.zhu@gmail.com>
Co-authored-by: Angerszhuuuu <angers.zhu@gmail.com>
Signed-off-by: Max Gekk <max.gekk@gmail.com>
2021-06-24 10:42:20 +03:00
itholic 92ddef7cfb [SPARK-35696][PYTHON][DOCS][FOLLOW-UP] Fix underline for title in FAQ to remove warnings
### What changes were proposed in this pull request?

This PR is a follow-up for SPARK-35696 to fix an incorrect underline in the documents and remove warnings.

### Why are the changes needed?

We should build the docs without any incorrect documentation style

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

Manually built the docs and confirmed the warning is removed.

Closes #33052 from itholic/SPARK-35696-followup.

Authored-by: itholic <haejoon.lee@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-06-24 15:20:13 +09:00
Vinod KC 4dabba8f76 [SPARK-35747][CORE] Avoid printing full Exception stack trace, if Hbase/Kafka/Hive services are not running in a secure cluster
### What changes were proposed in this pull request?
In a secure YARN cluster, even when HBase, Kafka, or Hive services are not used in the user application, the YARN client unnecessarily tries to generate delegation tokens for these services. This adds extra delay when submitting a Spark application to a YARN cluster.

Also, during the HBase delegation token generation step in the application submit stage, HBaseDelegationTokenProvider prints a full exception stack trace, which causes a noisy warning.
Apart from printing the exception stack trace, application submission takes more time as it retries the connection to the HBase master multiple times before giving up. So, if HBase is not used in the user application, it is better to suggest the user disable HBase delegation token generation.

This PR aims to avoid printing the full exception stack trace by printing just the exception name, and also adds a suggestion message to disable delegation token generation if the service is not used in the Spark application.

 eg: `If HBase is not used, set spark.security.credentials.hbase.enabled to false`

### Why are the changes needed?

To avoid printing full Exception stack trace in WARN log
#### Before the fix
----------------
```
spark-shell --master yarn
.......
.......
21/06/12 14:29:41 WARN security.HBaseDelegationTokenProvider: Failed to get token from service hbase
java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.security.HBaseDelegationTokenProvider.obtainDelegationTokensWithHBaseConn(HBaseDelegationT
okenProvider.scala:93)
        at org.apache.spark.deploy.security.HBaseDelegationTokenProvider.obtainDelegationTokens(HBaseDelegationTokenProvider.
scala:60)
        at org.apache.spark.deploy.security.HadoopDelegationTokenManager$$anonfun$6.apply(HadoopDelegationTokenManager.scala:
166)
        at org.apache.spark.deploy.security.HadoopDelegationTokenManager$$anonfun$6.apply(HadoopDelegationTokenManager.scala:
164)
        at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
        at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
        at scala.collection.Iterator$class.foreach(Iterator.scala:891)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
        at scala.collection.MapLike$DefaultValuesIterable.foreach(MapLike.scala:206)
        at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
        at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
        at org.apache.spark.deploy.security.HadoopDelegationTokenManager.obtainDelegationTokens(HadoopDelegationTokenManager.
scala:164)
```

#### After  the fix
------------
```
 spark-shell --master yarn

Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
21/06/13 02:10:02 WARN security.HBaseDelegationTokenProvider: Failed to get token from service hbase due to  java.lang.reflect.InvocationTargetException Retrying to fetch HBase security token with hbase connection parameter.
21/06/13 02:10:40 WARN security.HBaseDelegationTokenProvider: Failed to get token from service hbase java.lang.reflect.InvocationTargetException. If HBase is not used, set spark.security.credentials.hbase.enabled to false
21/06/13 02:10:47 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
```
### Does this PR introduce _any_ user-facing change?

Yes, in the log, it avoids printing the full exception stack trace.
Instead it prints this:
**WARN security.HBaseDelegationTokenProvider: Failed to get token from service hbase java.lang.reflect.InvocationTargetException. If HBase is not used, set spark.security.credentials.hbase.enabled to false**

### How was this patch tested?

Tested manually as it can be verified only in a secure cluster

Closes #32894 from vinodkc/br_fix_Hbase_DT_Exception_stack_printing.

Authored-by: Vinod KC <vinod.kc.in@gmail.com>
Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
2021-06-23 23:12:02 -07:00
Angerszhuuuu 490ae8f4d6 [SPARK-35777][SQL][TESTS] Check all year-month interval types in UDF
### What changes were proposed in this pull request?
Check all year-month interval types in UDF.

### Why are the changes needed?
New checks should improve test coverage.

### Does this PR introduce _any_ user-facing change?
Yes but `YearMonthIntervalType` has not been released yet.

### How was this patch tested?
Existing UTs.

Closes #32985 from AngersZhuuuu/SPARK-35777.

Authored-by: Angerszhuuuu <angers.zhu@gmail.com>
Signed-off-by: Max Gekk <max.gekk@gmail.com>
2021-06-24 08:56:08 +03:00
itholic 712ed87faa [SPARK-35696][PYTHON][DOCS] Refine the code examples in pandas-on-Spark documentation
### What changes were proposed in this pull request?

This PR proposes to refine the code examples for pandas-on-Spark since some of them still follow the Koalas naming.

For example,

```python
kdf = ks.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
```

should be refined to

```python
psdf = ps.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
```

Also fixed several remaining Koalas references in the FAQ.

### Why are the changes needed?

Because we don't want to use the name "Koalas" in Apache Spark anymore.

### Does this PR introduce _any_ user-facing change?

Yes, the examples in the documentation will be changed with refined names.

### How was this patch tested?

Manually built the docs and checked them one by one.

Closes #33017 from itholic/SPARK-35696.

Authored-by: itholic <haejoon.lee@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-06-24 14:48:13 +09:00
PengLei 61bd036cb9 [SPARK-35852][SQL] Use DateAdd instead of TimeAdd for DateType +/- INTERVAL DAY
### What changes were proposed in this pull request?
Use `DateAdd` to implement `DateType` `+`/`-` `INTERVAL DAY`.
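
A hedged example, assuming a `spark-shell` session named `spark`:

```scala
// With this change the expression below is resolved through DateAdd rather
// than TimeAdd, since only a day field is involved.
spark.sql("SELECT DATE'2021-06-24' + INTERVAL '1' DAY").show()
```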

### Why are the changes needed?
To improve the implementation of `DateType` `+`/`-` `INTERVAL DAY`.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Added unit tests.

Closes #33033 from Peng-Lei/SPARK-35852.

Authored-by: PengLei <18066542445@189.cn>
Signed-off-by: Max Gekk <max.gekk@gmail.com>
2021-06-24 08:47:29 +03:00
Dongjoon Hyun af9b47f8f8 [SPARK-35868][CORE] Add fs.s3a.downgrade.syncable.exceptions if not set
### What changes were proposed in this pull request?

This PR aims to add `fs.s3a.downgrade.syncable.exceptions=true` if it's not provided by the users.

### Why are the changes needed?

Currently, the event log feature is broken with the Hadoop 3.2 profile: it fails with `UnsupportedOperationException` because [HADOOP-17597](https://issues.apache.org/jira/browse/HADOOP-17597) changed the default behavior to throw exceptions since Apache Hadoop 3.3.1. We know that it's because `EventLogFileWriters` is using `hadoopDataStream.foreach(_.hflush())`, but this PR aims to provide the same UX across Spark distributions with Hadoop 2/Hadoop 3 at Apache Spark 3.2.0.
```
$ bin/spark-shell -c spark.eventLog.enabled=true -c spark.eventLog.dir=s3a://dongjoon/spark-events/
...
21/06/23 17:34:35 ERROR SparkContext: Error initializing SparkContext.
java.lang.UnsupportedOperationException: S3A streams are not Syncable. See HADOOP-17597.
```

### Does this PR introduce _any_ user-facing change?

Yes, this will recover the existing behavior.

### How was this patch tested?

Manual.
```
$ build/sbt package -Phadoop-3.2 -Phadoop-cloud
$ bin/spark-shell -c spark.eventLog.enabled=true -c spark.eventLog.dir=s3a://dongjoon/spark-events/
...(working)...
```

If the users provide the configuration explicitly, it will return to the original behavior throwing exceptions.
```
$ bin/spark-shell -c spark.eventLog.enabled=true -c spark.eventLog.dir=s3a://dongjoon/spark-events/ -c spark.hadoop.fs.s3a.downgrade.syncable.exceptions=false
...
21/06/23 17:44:41 ERROR Main: Failed to initialize Spark session.
java.lang.UnsupportedOperationException: S3A streams are not Syncable. See HADOOP-17597.
```

Closes #33044 from dongjoon-hyun/SPARK-35868.

Authored-by: Dongjoon Hyun <dhyun@apple.com>
Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
2021-06-23 22:46:36 -07:00
Ruifeng Zheng 37f70422b5 [SPARK-35678][ML][FOLLOWUP] Revert changes in ANN
### What changes were proposed in this pull request?
revert changes related to ANN

### Why are the changes needed?
Using the new `softmax` may cause flaky test failures.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
reverted testsuite

Closes #33049 from zhengruifeng/revert_softmax_ann.

Authored-by: Ruifeng Zheng <ruifengz@foxmail.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-06-24 14:02:28 +09:00
attilapiros 0bdece015e [SPARK-35543][CORE][FOLLOWUP] Fix memory leak in BlockManagerMasterEndpoint removeRdd
### What changes were proposed in this pull request?

Wrap `JHashMap[BlockId, BlockStatus]` (used in `blockStatusByShuffleService`) in a new class, `BlockStatusPerBlockId`, which removes the reference to the map when all the persisted blocks are removed.
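
A simplified, self-contained sketch of the wrapper idea; names and types are illustrative rather than the exact Spark internals.

```scala
import java.util.{HashMap => JHashMap}

// Holds per-block statuses but drops the underlying map once the last
// persisted block is removed, so an empty map is not retained forever.
class BlockStatusPerBlockIdSketch[K, V] {
  private var blocks: JHashMap[K, V] = _

  def put(blockId: K, status: V): Unit = {
    if (blocks == null) blocks = new JHashMap[K, V]()
    blocks.put(blockId, status)
  }

  def remove(blockId: K): Unit = {
    if (blocks != null) {
      blocks.remove(blockId)
      if (blocks.isEmpty) blocks = null // release the reference when empty
    }
  }

  def get(blockId: K): Option[V] =
    if (blocks == null) None else Option(blocks.get(blockId))
}
```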

### Why are the changes needed?

https://github.com/apache/spark/pull/32790 introduced a bug: when all the persisted blocks are removed, we remove the HashMap that is already shared by the block manager infos, but when a new block is persisted this map needs to be used again for storing the data (and this HashMap must be the same one shared by the block manager infos created for registered block managers running on the same host as the external shuffle service).

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Extended `BlockManagerInfoSuite` with a test which removes all the persisted blocks and then adds another one.

Closes #33020 from attilapiros/SPARK-35543-2.

Authored-by: attilapiros <piros.attila.zsolt@gmail.com>
Signed-off-by: Mridul Muralidharan <mridul<at>gmail.com>
2021-06-24 00:01:40 -05:00
yi.wu 1cdc56c70d [SPARK-35869][INFRA] Fix "Cannot run program python" error from do-release-docker.sh
### What changes were proposed in this pull request?

Add `python-is-python3` to `create-release/spark-rm/Dockerfile`

### Why are the changes needed?

Systems that use python3 by default should explicitly indicate that the Python version is 3.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Tested during Apache 3.0.3 release.

Closes #33048 from Ngone51/fix-release-script.

Authored-by: yi.wu <yi.wu@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-06-24 12:47:28 +09:00
tanel.kiis@gmail.com b3a2cebc2b [SPARK-34807][SQL] Transpose Window nodes with Project between them
### What changes were proposed in this pull request?

Extend the `TransposeWindow` rule to transpose `Window` nodes, that have `Project` between them.

### Why are the changes needed?

The analyzer will turn a `dataset.withColumn("colName", expressionWithWindowFunction)` method call into a `Project - Window - Project` chain in the logical plan. When this method is called multiple times in a row, the Projects can block the `Window` nodes from being transposed by the current `TransposeWindow` rule.

TPCDS q47 and q57 are also improved by this.
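
A hedged example of the pattern described above: two chained `withColumn` calls with window functions, each of which introduces a `Project` between the `Window` nodes.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val df = Seq(("a", 1), ("a", 2), ("b", 3)).toDF("key", "value")

// The intermediate Project produced by each withColumn no longer blocks the
// TransposeWindow rule from reordering the two Window nodes.
val w1 = Window.partitionBy("key")
val w2 = Window.partitionBy("key", "value")
df.withColumn("total", sum("value").over(w1))
  .withColumn("cnt", count("value").over(w2))
  .explain(true)
```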

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

UT

Closes #31980 from tanelk/SPARK-34807_transpose_window.

Lead-authored-by: tanel.kiis@gmail.com <tanel.kiis@gmail.com>
Co-authored-by: Tanel Kiis <tanel.kiis@gmail.com>
Signed-off-by: Yuming Wang <yumwang@ebay.com>
2021-06-24 10:28:57 +08:00
Ruifeng Zheng a66738823c [SPARK-35678][ML][FOLLOWUP] softmax support offset and step
### What changes were proposed in this pull request?
Make softmax support offset and step, so that we can use it in ANN and NB.

### Why are the changes needed?
To simplify the implementation.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
existing testsuite

Closes #32991 from zhengruifeng/softmax_support_offset_step.

Authored-by: Ruifeng Zheng <ruifengz@foxmail.com>
Signed-off-by: Huaxin Gao <huaxin_gao@apple.com>
2021-06-23 21:03:18 -05:00
Hyukjin Kwon be9089731a [SPARK-35588][PYTHON][DOCS] Merge Binder integration and quickstart notebook for pandas API on Spark
### What changes were proposed in this pull request?

This PR proposes to fix:
- the Binder integration of pandas API on Spark, and merge them together with the existing PySpark one.
- update quickstart of pandas API on Spark, and make it working

The notebooks can be easily reviewed here:

https://mybinder.org/v2/gh/HyukjinKwon/spark/SPARK-35588-3?filepath=python%2Fdocs%2Fsource%2Fgetting_started%2Fquickstart_ps.ipynb

Original page in Koalas: https://koalas.readthedocs.io/en/latest/getting_started/10min.html

### Why are the changes needed?

- To show the working examples of quickstart to end users.
- To allow users to try out the examples without installation easily.

### Does this PR introduce _any_ user-facing change?

No to end users because the existing quickstart of pandas API on Spark is not released yet.

### How was this patch tested?

I manually tested it by uploading built Spark distribution to Binder. See 3bc15310a0

Closes #33041 from HyukjinKwon/SPARK-35588-2.

Authored-by: Hyukjin Kwon <gurwls223@apache.org>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-06-24 10:17:22 +09:00
zengruios 1cf18a2277 [SPARK-35851][GRAPHX] Modify the wrong variable used in GraphGenerators.sampleLogNormal
### What changes were proposed in this pull request?
modify the wrong variable used in GraphGenerators.sampleLogNormal

### Why are the changes needed?
wrong variable used

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
UT

Closes #33010 from zengruios/SPARK-35851.

Authored-by: zengruios <578395184@qq.com>
Signed-off-by: Sean Owen <srowen@gmail.com>
2021-06-23 18:20:33 -05:00
Angerszhuuuu ad187227f1 [SPARK-35731][SQL][TESTS] Check all day-time interval types in arrow
### What changes were proposed in this pull request?
Check all day-time interval types in arrow.

### Why are the changes needed?
To improve test coverage.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Added UT.

Closes #33039 from AngersZhuuuu/SPARK-35731.

Authored-by: Angerszhuuuu <angers.zhu@gmail.com>
Signed-off-by: Max Gekk <max.gekk@gmail.com>
2021-06-23 23:38:41 +03:00
Kousuke Saruta 2d3fa04e90 [SPARK-35729][SQL][TESTS] Check all day-time interval types in aggregate expressions
### What changes were proposed in this pull request?

This PR adds test to check `sum` and `avg` works with all the `DayTimeIntervalType`.
This PR also moves a dataframe commonly used by the tests `SPARK-34837: Support ANSI SQL intervals by the aggregate function avg` and `SPARK-34716: Support ANSI SQL intervals by the aggregate function sum` to `SQLTestData.scala`, and modifies it a little bit.

### Why are the changes needed?

To ensure the results of aggregations are what is expected.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

New test.

Closes #33042 from sarutak/check-interval-agg-dt.

Authored-by: Kousuke Saruta <sarutak@oss.nttdata.com>
Signed-off-by: Max Gekk <max.gekk@gmail.com>
2021-06-23 23:34:28 +03:00
Jungtaek Lim (HeartSaVioR) 476197791b [SPARK-34889][SS] Introduce MergingSessionsIterator merging elements directly which belong to the same session
Introduction: this PR is a part of SPARK-10816 (`EventTime based sessionization (session window)`). Please refer #31937 to see the overall view of the code change. (Note that code diff could be diverged a bit.)

### What changes were proposed in this pull request?

This PR introduces MergingSessionsIterator, which enables to merge elements belong to the same session directly.

MergingSessionsIterator is a variant of SortAggregateIterator which merges the session windows based on the fact that input rows are sorted by "group keys + the start time of session window". When merging windows, MergingSessionsIterator also applies aggregations on the merged window, which eliminates the need to buffer inputs (which requires copying rows) and to update the session spec for each input.

MergingSessionsIterator is quite performant compared to the UpdatingSessionsIterator brought by SPARK-34888. Note that MergingSessionsIterator only applies to cases where aggregation can be applied altogether, so there is still room for UpdatingSessionsIterator to be used.

This issue also introduces MergingSessionsExec which is the physical node on leveraging MergingSessionsIterator to sort the input rows and aggregate rows according to the session windows.

### Why are the changes needed?

This is one of the parts required to implement SPARK-10816.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

New test suite added.

Closes #31987 from HeartSaVioR/SPARK-34889-SPARK-10816-PR-31570-part-2.

Lead-authored-by: Jungtaek Lim (HeartSaVioR) <kabhwan.opensource@gmail.com>
Co-authored-by: Jungtaek Lim <kabhwan.opensource@gmail.com>
Signed-off-by: Liang-Chi Hsieh <viirya@gmail.com>
2021-06-23 13:04:37 -07:00
Angerszhuuuu 077cf2acdb [SPARK-35733][SQL][TESTS] Check all day-time interval types in HiveInspectors tests
### What changes were proposed in this pull request?
Check all day-time interval types in HiveInspectors tests.

### Why are the changes needed?
New tests should improve test coverage for day-time interval types.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Added UT.

Closes #33036 from AngersZhuuuu/SPARK-35733.

Authored-by: Angerszhuuuu <angers.zhu@gmail.com>
Signed-off-by: Max Gekk <max.gekk@gmail.com>
2021-06-23 19:20:51 +03:00
Bruce Robbins 66d5a0049a [SPARK-35817][SQL] Restore performance of queries against wide Avro tables
### What changes were proposed in this pull request?

When creating a record writer in an AvroDeserializer, or creating a struct converter in an AvroSerializer, look up Avro fields using a map rather than scanning the entire list of Avro fields.
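
A minimal sketch of the lookup change (the helper name is an illustration): index the Avro fields by name once, instead of scanning `getFields` for every Catalyst field.

```scala
import scala.collection.JavaConverters._
import org.apache.avro.Schema

// Build the index once per schema; each subsequent lookup is O(1) instead of
// a linear scan over potentially thousands of Avro fields.
def fieldIndex(avroSchema: Schema): Map[String, Schema.Field] =
  avroSchema.getFields.asScala.map(f => f.name() -> f).toMap
```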

### Why are the changes needed?

A query against an Avro table can be quite slow when all of the following are true:

* There are many columns in the Avro file
* The query contains a wide projection
* There are many splits in the input
* Some of the splits are read serially (e.g., less executors than there are tasks)

A write to an Avro table can be quite slow when all of the following are true:

* There are many columns in the new rows
* The operation is creating many files

For example, a single-threaded query against a 6000 column Avro data set with 50K rows and 20 files takes less than a minute with Spark 3.0.1 but over 7 minutes with Spark 3.2.0-SNAPSHOT.

This PR restores the faster time.

For the 1000 column read benchmark:
Before patch: 108447 ms
After patch: 35925 ms
percent improvement: 66%

For the 1000 column write benchmark:
Before patch: 123307 ms
After patch: 42313 ms
percent improvement: 65%

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

* Ran existing unit tests
* Added new unit tests
* Added new benchmarks

Closes #32969 from bersprockets/SPARK-35817.

Authored-by: Bruce Robbins <bersprockets@gmail.com>
Signed-off-by: Gengliang Wang <gengliang@apache.org>
2021-06-23 22:36:56 +08:00
yi.wu 7f937730ff [SPARK-33741][FOLLOW-UP][CORE] Rename the min threshold time speculation config
### What changes were proposed in this pull request?

This is a follow-up of https://github.com/apache/spark/pull/30710.
Rename the conf from `spark.speculation.min.threshold` to `spark.speculation.minTaskRuntime`.

### Why are the changes needed?

To follow the [config naming policy](https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/internal/config/ConfigEntry.scala#L21).

### Does this PR introduce _any_ user-facing change?

No (since Spark 3.2 hasn't been released).

### How was this patch tested?

Pass existing tests.

Closes #33037 from Ngone51/spark-33741-followup.

Authored-by: yi.wu <yi.wu@databricks.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2021-06-23 13:29:58 +00:00
Yikun Jiang 4824c53398 [SPARK-35812][PYTHON] Throw ValueError if version and timestamp are used together in to_delta
### What changes were proposed in this pull request?

Throw ValueError if version and timestamp are used together in to_delta

### Why are the changes needed?
read_delta has arguments named `version` and `timestamp`, but they cannot be used together.

We should raise the proper error message when they are used together.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
UT

Closes #33023 from Yikun/SPARK-35812.

Authored-by: Yikun Jiang <yikunkero@gmail.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-06-23 19:04:45 +09:00