Commit graph

29021 commits

Author SHA1 Message Date
Michael Chirico 12b69cc27c [SPARK-26199][SPARK-31517][R] Fix strategy for handling ... names in mutate
### What changes were proposed in this pull request?

Change the strategy for how the varargs are handled in the default `mutate` method

### Why are the changes needed?

Bugfix -- `deparse` + `sapply` not working as intended due to `width.cutoff`

### Does this PR introduce any user-facing change?

Yes, bugfix. Shouldn't change any working code.

### How was this patch tested?

None yet.

Closes #28386 from MichaelChirico/r-mutate-deparse.

Lead-authored-by: Michael Chirico <michael.chirico@grabtaxi.com>
Co-authored-by: Michael Chirico <michaelchirico4@gmail.com>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
2020-12-17 17:20:45 +09:00
Dongjoon Hyun ed09673fb9 [SPARK-33819][CORE] SingleFileEventLogFileReader/RollingEventLogFilesFileReader should be package private
### What changes were proposed in this pull request?

This PR aims to convert `EventLogFileReader`'s derived classes into `package private`.
- SingleFileEventLogFileReader
- RollingEventLogFilesFileReader

`EventLogFileReader` itself is used in `scheduler` module during tests.
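A minimal, self-contained sketch of the visibility change (class signatures simplified here; the real readers take constructor arguments and are assumed to live in `org.apache.spark.deploy.history`):

```scala
package org.apache.spark.deploy.history

// The abstract entry point stays visible to other modules (e.g. scheduler tests).
abstract class EventLogFileReader

// Only code inside the history package can reference the concrete readers;
// everything else goes through EventLogFileReader's factory methods.
private[history] class SingleFileEventLogFileReader extends EventLogFileReader
private[history] class RollingEventLogFilesFileReader extends EventLogFileReader
```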

### Why are the changes needed?

These classes were designed to be internal. This PR hides them explicitly to reduce the maintenance burden.

### Does this PR introduce _any_ user-facing change?

Yes, but these were exposed accidentally.

### How was this patch tested?

Pass CIs.

Closes #30814 from dongjoon-hyun/SPARK-33790.

Authored-by: Dongjoon Hyun <dongjoon@apache.org>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
2020-12-17 15:51:04 +09:00
Dongjoon Hyun b1950cc916 [SPARK-33821][BUILD] Upgrade SBT to 1.4.5
### What changes were proposed in this pull request?

This PR aims to upgrade SBT to 1.4.5 to support Apple Silicon.

### Why are the changes needed?

The following is the release note including `sbt 1.4.5 adds support for Apple silicon (AArch64 also called ARM64)`.
- https://github.com/sbt/sbt/releases/tag/v1.4.5

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Pass the CIs.

Closes #30817 from dongjoon-hyun/SPARK-33821.

Authored-by: Dongjoon Hyun <dongjoon@apache.org>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
2020-12-17 15:49:40 +09:00
allisonwang-db 1e85707738 [SPARK-33697][SQL] RemoveRedundantProjects should require column ordering by default
### What changes were proposed in this pull request?
This PR changes the rule `RemoveRedundantProjects` from passing column-ordering requirements from parent nodes by default to always requiring column ordering regardless of the requirements from parent nodes, unless otherwise specified. More specifically, instead of excluding a few nodes like GenerateExec and UnionExec that are known to require their children's columns to be ordered, the rule now includes a whitelist of nodes that are allowed to pass through the ordering requirements from their parents.

### Why are the changes needed?
Currently, this rule passes through ordering requirements from parents directly to children except for a few excluded nodes. This incorrectly removes the necessary project nodes below a UnionExec since it is not excluded. An earlier PR also fixed a similar issue for GenerateExec (SPARK-32861). In order to prevent similar issues, the rule should be changed to always require column ordering except for a few specific nodes that we know for sure can pass through the requirements.
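A hedged illustration of the hazard (hypothetical DataFrames; assumes a SparkSession named `spark`, e.g. in spark-shell). Union pairs its children's columns by position, so a Project that merely reorders columns below a union is not actually redundant:

```scala
import spark.implicits._

val left  = Seq(("1", "alice")).toDF("id", "name")
val right = Seq(("bob", "2")).toDF("name", "id")

// The select is a Project whose only job is to reorder right's columns.
// Dropping it as "redundant" would make the union put names into the id column.
left.union(right.select("id", "name")).show()
```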

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Unit tests

Closes #30659 from allisonwang-db/spark-33697-remove-project-union.

Authored-by: allisonwang-db <66282705+allisonwang-db@users.noreply.github.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2020-12-17 05:47:44 +00:00
Terry Kim 0c19497222 [SPARK-33815][SQL] Migrate ALTER TABLE ... SET [SERDE|SERDEPROPERTIES] to use UnresolvedTable to resolve the identifier
### What changes were proposed in this pull request?

This PR proposes to migrate `ALTER TABLE ... SET [SERDE|SERDEPROPERTIES]` to use `UnresolvedTable` to resolve the table identifier. This allows consistent resolution rules (temp view first, etc.) to be applied for both v1/v2 commands. More info about the consistent resolution rule proposal can be found in [JIRA](https://issues.apache.org/jira/browse/SPARK-29900) or [proposal doc](https://docs.google.com/document/d/1hvLjGA8y_W_hhilpngXVub1Ebv8RsMap986nENCFnrg/edit?usp=sharing).

Note that `ALTER TABLE ... SET [SERDE|SERDEPROPERTIES]` is not supported for v2 tables.

### Why are the changes needed?

The PR makes the resolution behavior consistent. For example,
```scala
sql("CREATE DATABASE test")
sql("CREATE TABLE spark_catalog.test.t (id bigint, val string) USING csv PARTITIONED BY (id)")
sql("CREATE TEMPORARY VIEW t AS SELECT 2")
sql("USE spark_catalog.test")
sql("ALTER TABLE t SET SERDE 'serdename'") // works fine
```
, but after this PR:
```
sql("ALTER TABLE t SET SERDE 'serdename'")
org.apache.spark.sql.AnalysisException: t is a temp view. 'ALTER TABLE ... SET [SERDE|SERDEPROPERTIES]' expects a table; line 1 pos 0
```
, which is consistent with the behavior of other commands.

### Does this PR introduce _any_ user-facing change?

After this PR, `t` in the above example is resolved to a temp view first instead of `spark_catalog.test.t`.

### How was this patch tested?

Updated existing tests.

Closes #30813 from imback82/alter_table_serde_v2.

Authored-by: Terry Kim <yuminkim@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2020-12-17 05:25:51 +00:00
sychen 0c12900120 [SPARK-33790][CORE] Reduce the rpc call of getFileStatus in SingleFileEventLogFileReader
### What changes were proposed in this pull request?
`FsHistoryProvider#checkForLogs` already has a `FileStatus` when constructing `SingleFileEventLogFileReader`, so there is no need to get the `FileStatus` again when `SingleFileEventLogFileReader#fileSizeForLastIndex` is called.
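A hedged sketch of the idea (names and signature are illustrative, not the exact Spark classes): reuse the `FileStatus` obtained while listing the log directory instead of asking the NameNode again.

```scala
import org.apache.hadoop.fs.{FileStatus, FileSystem, Path}

class SingleFileReaderSketch(
    fs: FileSystem,
    path: Path,
    listedStatus: Option[FileStatus] = None) {
  // checkForLogs already holds a FileStatus from listing the event log directory;
  // passing it in avoids one extra getFileStatus RPC per application log.
  private lazy val status: FileStatus = listedStatus.getOrElse(fs.getFileStatus(path))

  def fileSizeForLastIndex: Long = status.getLen
}
```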

### Why are the changes needed?
This can reduce a lot of rpc calls and improve the speed of the history server.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Existing unit tests.

Closes #30780 from cxzl25/SPARK-33790.

Authored-by: sychen <sychen@ctrip.com>
Signed-off-by: Jungtaek Lim <kabhwan.opensource@gmail.com>
2020-12-17 11:36:31 +09:00
yangjie01 477046c63f [SPARK-33775][BUILD] Suppress sbt compilation warnings in Scala 2.13
### What changes were proposed in this pull request?
There are too many compilation warnings in Scala 2.13, so this PR adds some `-Wconf:msg=<regex>` rules to `SparkBuild.scala` to suppress them; the suppressed warnings are no longer printed to the console (see the sketch after the lists below).

The suppressed compilation warnings includes:

- All warnings related to `method/value/type/object/trait/inheritance` deprecated since 2.13

- All warnings related to `Widening conversion from XXX to YYY is deprecated because it loses precision`

- Auto-application to `()` is deprecated. Supply the empty argument list `()` explicitly to invoke method `methodName`, or remove the empty argument list from its definition (Java-defined methods are exempt). In Scala 3, an unapplied method like this will be eta-expanded into a function.

- method with a single empty parameter list overrides method without any parameter list

- method without a parameter list overrides a method with a single empty one

Not suppressed compilation warnings includes:

- Unicode escapes in triple quoted strings are deprecated, use the literal character instead.

- view bounds are deprecated

- symbol literal is deprecated
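A hedged sbt sketch of the kind of `-Wconf` rules this adds (the message regexes here are illustrative, not the exact ones in `SparkBuild.scala`):

```scala
// ":s" silences matching warnings so they are neither printed nor counted,
// while warnings that match no rule are still reported as usual.
Compile / scalacOptions ++= Seq(
  "-Wconf:cat=deprecation&msg=Widening conversion from .* is deprecated:s",
  "-Wconf:cat=deprecation&msg=Auto-application to \\(\\) is deprecated:s",
  "-Wconf:msg=method with a single empty parameter list overrides:s"
)
```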

### Why are the changes needed?
Suppress unimportant compilation warnings in Scala 2.13

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Pass the Jenkins or GitHub Action

Closes #30760 from LuciferYang/SPARK-33775.

Authored-by: yangjie01 <yangjie01@baidu.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
2020-12-16 15:06:25 -08:00
Terry Kim e7e29fd0af [SPARK-33514][SQL][FOLLOW-UP] Remove unused TruncateTableStatement case class
### What changes were proposed in this pull request?

This PR removes unused `TruncateTableStatement`: https://github.com/apache/spark/pull/30457#discussion_r544433820

### Why are the changes needed?

To remove unused `TruncateTableStatement` from #30457.

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

Not needed.

Closes #30811 from imback82/remove_truncate_table_stmt.

Authored-by: Terry Kim <yuminkim@gmail.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
2020-12-16 14:13:02 -08:00
Kent Yao 728a1298af [SPARK-33806][SQL] limit partition num to 1 when distributing by foldable expressions
### What changes were proposed in this pull request?

Using a DISTRIBUTE BY clause with a literal to coalesce partitions seems to be a very popular pattern in pure SQL data processing.

For example
```
insert into table src select * from values (1), (2), (3) t(a) distribute by 1
```

Users may want the final output to be one single data file, but the reality is not always so. Spark will always create a file for partition 0 whether it contains data or not, so when all the data goes to some other partition (idx > 0), there will always be 2 files and part-00000 is empty. On the other hand, a lot of unnecessary empty tasks will be launched too.

When users repeat the insert statement daily, hourly, or minutely, it causes small file issues.

```
spark-sql> set spark.sql.shuffle.partitions=3;drop table if exists test2;create table test2 using parquet as select * from values (1), (2), (3) t(a) distribute by 1;

$ tree /Users/kentyao/Downloads/spark/spark-3.1.0-SNAPSHOT-bin-20201202/spark-warehouse/test2/ -s
/Users/kentyao/Downloads/spark/spark-3.1.0-SNAPSHOT-bin-20201202/spark-warehouse/test2/
├── [          0]  _SUCCESS
├── [        298]  part-00000-5dc19733-9405-414b-9681-d25c4d3e9ee6-c000.snappy.parquet
└── [        426]  part-00001-5dc19733-9405-414b-9681-d25c4d3e9ee6-c000.snappy.parquet
```

To avoid this, there are some options you can take.

1. use `distribute by null`, let the data go to the partition 0
2. set spark.sql.adaptive.enabled to true for Spark to automatically coalesce
3. using hints instead of `distribute by`
4. set spark.sql.shuffle.partitions to 1

In this PR, we set the partition number to 1 in this particular case.

### Why are the changes needed?

1. avoid small file issues
2. avoid unnecessary empty tasks when no adaptive execution

### Does this PR introduce _any_ user-facing change?

no

### How was this patch tested?

new test

Closes #30800 from yaooqinn/SPARK-33806.

Authored-by: Kent Yao <yaooqinn@hotmail.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
2020-12-16 14:09:28 -08:00
HyukjinKwon 3d0323401f [SPARK-33810][TESTS] Reenable test cases disabled in SPARK-31732
### What changes were proposed in this pull request?

The test failures were due to the Jenkins machines being slow. We switched to Ubuntu 20, if I am not wrong.
Looks like all machines are functioning properly now, unlike in the past, and the tests pass without a problem.

This PR proposes to enable them back.

### Why are the changes needed?

To restore test coverage.

### Does this PR introduce _any_ user-facing change?

No, dev-only.

### How was this patch tested?

Jenkins jobs in this PR show the flakiness.

Closes #30798 from HyukjinKwon/do-not-merge-test.

Authored-by: HyukjinKwon <gurwls223@apache.org>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
2020-12-16 08:34:22 -08:00
Terry Kim 8666d1c39c [SPARK-33800][SQL] Remove command name in AnalysisException message when a relation is not resolved
### What changes were proposed in this pull request?

Based on the discussion https://github.com/apache/spark/pull/30743#discussion_r543124594, this PR proposes to remove the command name in AnalysisException message when a relation is not resolved.

For some of the commands that use `UnresolvedTable`, `UnresolvedView`, and `UnresolvedTableOrView` to resolve an identifier, when the identifier cannot be resolved, the exception will be something like `Table or view not found for 'SHOW TBLPROPERTIES': badtable`. The command name (`SHOW TBLPROPERTIES` in this case) should be dropped to be consistent with other existing commands.

### Why are the changes needed?

To make the exception message consistent.

### Does this PR introduce _any_ user-facing change?

Yes, the exception message will be changed from
```
Table or view not found for 'SHOW TBLPROPERTIES': badtable
```
to
```
Table or view not found: badtable
```
for commands that use `UnresolvedTable`, `UnresolvedView`, and `UnresolvedTableOrView` to resolve an identifier.

### How was this patch tested?

Updated existing tests.

Closes #30794 from imback82/remove_cmd_from_exception_msg.

Authored-by: Terry Kim <yuminkim@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2020-12-16 15:56:50 +00:00
HyukjinKwon ddda32b156 [SPARK-33802][INFRA][FOLLOW-UP] Separate arguments properly for -c option in git command for PySpark coverage
### What changes were proposed in this pull request?

This PR proposes to separate arguments properly for the `-c` option. Otherwise, the space is considered part of the argument:

```
Cloning into 'pyspark-coverage-site'...
unknown option: -c user.name='Apache Spark Test Account'
usage: git [--version] [--help] [-C <path>] [-c <name>=<value>]
           [--exec-path[=<path>]] [--html-path] [--man-path] [--info-path]
           [-p | --paginate | -P | --no-pager] [--no-replace-objects] [--bare]
           [--git-dir=<path>] [--work-tree=<path>] [--namespace=<name>]
           <command> [<args>]
[error] running git -c user.name='Apache Spark Test Account' -c user.email='sparktestaccgmail.com' commit -am Coverage report at latest commit in Apache Spark ; received return code 129
```

### Why are the changes needed?

To make the build pass (https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-sbt-hadoop-3.2/1728/console).

### Does this PR introduce _any_ user-facing change?

No, dev-only.

### How was this patch tested?

```python
>>> from sparktestsupport.shellutils import run_cmd
>>> run_cmd([
...             "git",
...             "-c",
...             "user.name='Apache Spark Test Account'",
...             "-c",
...             "user.email='sparktestaccgmail.com'",
...             "commit",
...             "-am",
...             "Coverage report at latest commit in Apache Spark"])
[SPARK-33802-followup 80d2565a511] Coverage report at latest commit in Apache Spark
 1 file changed, 1 insertion(+), 1 deletion(-)
CompletedProcess(args=['git', '-c', "user.name='Apache Spark Test Account'", '-c', "user.email='sparktestaccgmail.com'", 'commit', '-am', 'Coverage report at latest commit in Apache Spark'], returncode=0)
```

I cannot run e2e test because it requires the env to have Jenkins secret.

Closes #30804 from HyukjinKwon/SPARK-33802-followup.

Authored-by: HyukjinKwon <gurwls223@apache.org>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
2020-12-16 23:42:34 +09:00
Kent Yao 205d8e40bc [SPARK-32991][SQL] [FOLLOWUP] Reset command relies on session initials first
### What changes were proposed in this pull request?

As a follow-up of https://github.com/apache/spark/pull/30045, we modify the RESET command here to respect the per-session initial configs first and then fall back to the `SharedState` conf, so that each session can maintain its own copy of initial configs for resetting.
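A hedged illustration of the intended behavior (assumes a SparkSession `spark` launched with `--conf spark.sql.shuffle.partitions=10` as a session-initial config):

```scala
spark.sql("SET spark.sql.shuffle.partitions=5")
spark.sql("RESET")
// Before this follow-up, RESET could fall back to the SharedState/global value;
// after it, RESET restores the session's own initial value (10) and only falls
// back to SharedState when the session never had an initial value for the key.
spark.sql("SET spark.sql.shuffle.partitions").show(truncate = false)
```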

### Why are the changes needed?

to make reset command saner.
### Does this PR introduce _any_ user-facing change?

Yes, RESET will respect the session's initial configs first instead of always going back to the system defaults.

### How was this patch tested?

add new tests

Closes #30642 from yaooqinn/SPARK-32991-F.

Authored-by: Kent Yao <yaooqinn@hotmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2020-12-16 14:36:38 +00:00
Max Gekk 9d9d4a8e12 [SPARK-33789][SQL][TESTS] Refactor unified V1 and V2 datasource tests
### What changes were proposed in this pull request?
1. Move common utility functions such as `test()`, `withNsTable()` and `checkPartitions()` to `DDLCommandTestUtils`.
2. Place common settings such as `version`, `catalog`, `defaultUsing`, `sparkConf` to `CommandSuiteBase`.

### Why are the changes needed?
To improve code maintenance of the unified tests.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
By running the affected test suites:
```
$ build/sbt -Phive-2.3 -Phive-thriftserver "test:testOnly *ShowPartitionsSuite"
$ build/sbt -Phive-2.3 -Phive-thriftserver "test:testOnly *ShowTablesSuite"
$ build/sbt -Phive-2.3 -Phive-thriftserver "test:testOnly *AlterTableAddPartitionSuite"
$ build/sbt -Phive-2.3 -Phive-thriftserver "test:testOnly *AlterTableDropPartitionSuite"
```

Closes #30779 from MaxGekk/refactor-unified-tests.

Lead-authored-by: Max Gekk <max.gekk@gmail.com>
Co-authored-by: Maxim Gekk <max.gekk@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2020-12-16 13:49:49 +00:00
HyukjinKwon 7845865b8d [SPARK-33803][SQL] Sort table properties by key in DESCRIBE TABLE command
### What changes were proposed in this pull request?

This PR proposes to sort table properties in DESCRIBE TABLE command. This is consistent with DSv2 command as well:
e3058ba17c/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/DescribeTableExec.scala (L63)

This PR also fixes the test case in the Scala 2.13 build, where the table properties appear in a different order in the map.
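A hedged sketch of the sorting itself (illustrative; the actual change lives in the DESCRIBE command's output generation):

```scala
val props = Map(
  "view.query.out.col.2" -> "c",
  "view.referredTempFunctionsNames" -> "[]",
  "view.catalogAndNamespace.part.1" -> "default")

// Scala 2.12 and 2.13 iterate immutable maps in different orders, so sort by
// key before rendering to keep the DESCRIBE output deterministic.
val rendered = props.toSeq.sortBy(_._1)
  .map { case (k, v) => s"$k=$v" }
  .mkString("[", ", ", "]")
```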

### Why are the changes needed?

To keep the output deterministic and pretty, and to fix the tests in the Scala 2.13 build.
See https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-maven-hadoop-3.2-scala-2.13/49/testReport/junit/org.apache.spark.sql/SQLQueryTestSuite/describe_sql/

```
describe.sql
Expected "...spark_catalog, view.[query.out.col.2=c, view.referredTempFunctionsNames=[], view.catalogAndNamespace.part.1=default]]", but got "...spark_catalog, view.[catalogAndNamespace.part.1=default, view.query.out.col.2=c, view.referredTempFunctionsNames=[]]]" Result did not match for query #29
DESC FORMATTED v
```

### Does this PR introduce _any_ user-facing change?

Yes, it will change the text output from `DESCRIBE [EXTENDED|FORMATTED] table_name`.
Now the table properties are sorted by their keys.

### How was this patch tested?

Related unit tests were fixed accordingly.

Closes #30799 from HyukjinKwon/SPARK-33803.

Authored-by: HyukjinKwon <gurwls223@apache.org>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2020-12-16 13:42:30 +00:00
HyukjinKwon 888a274a88 [SPARK-33802][INFRA] Override name and email address explicitly when updating PySpark coverage
### What changes were proposed in this pull request?

The current Jenkins job fails as below (https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-sbt-hadoop-3.2/1726/console)

```
Generating HTML files for PySpark coverage under /home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2/python/test_coverage/htmlcov
/home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2
Cloning into 'pyspark-coverage-site'...

*** Please tell me who you are.

Run

  git config --global user.email "you@example.com"
  git config --global user.name "Your Name"

to set your account's default identity.
Omit --global to set the identity only in this repository.
```

This PR proposes to set both when committing to the coverage site.

### Why are the changes needed?

To make the coverage site keep working.

### Does this PR introduce _any_ user-facing change?

No, dev-only.

### How was this patch tested?

Manually tested in the console but it has to be merged to test in the Jenkins environment.

Closes #30796 from HyukjinKwon/SPARK-33802.

Authored-by: HyukjinKwon <gurwls223@apache.org>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
2020-12-16 17:20:03 +09:00
HyukjinKwon 12f3715ed7 [MINOR][DOCS] Fix Jenkins job badge image and link in README.md
### What changes were proposed in this pull request?

This PR proposes to fix the Jenkins job badge:

Before:

![Screen Shot 2020-12-16 at 4 14 14 PM](https://user-images.githubusercontent.com/6477701/102316960-2c9ebe80-3fba-11eb-878d-07ae735fb3a6.png)

After:

![Screen Shot 2020-12-16 at 4 14 09 PM](https://user-images.githubusercontent.com/6477701/102316956-2a3c6480-3fba-11eb-9fa4-b8312edb8a1a.png)

### Why are the changes needed?

So that people can easily check the status of builds.

### Does this PR introduce _any_ user-facing change?

No, dev-only.

### How was this patch tested?

Manually tested via using GitHub.

Closes #30797 from HyukjinKwon/minor-readme.

Authored-by: HyukjinKwon <gurwls223@apache.org>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
2020-12-16 00:10:13 -08:00
Terry Kim ef7f6903b4 [SPARK-33786][SQL] The storage level for a cache should be respected when a table name is altered
### What changes were proposed in this pull request?

This PR proposes to retain the cache's storage level when a table name is altered by `ALTER TABLE ... RENAME TO ...`.

### Why are the changes needed?

Currently, when a table name is altered, the table's cache is refreshed (if exists), but the storage level is not retained. For example:
```scala
def getStorageLevel(tableName: String): StorageLevel = {
  val table = spark.table(tableName)
  val cachedData = spark.sharedState.cacheManager.lookupCachedData(table).get
  cachedData.cachedRepresentation.cacheBuilder.storageLevel
}

Seq(1 -> "a").toDF("i", "j").write.parquet(path.getCanonicalPath)
sql(s"CREATE TABLE old USING parquet LOCATION '${path.toURI}'")
sql("CACHE TABLE old OPTIONS('storageLevel' 'MEMORY_ONLY')")
val oldStorageLevel = getStorageLevel("old")

sql("ALTER TABLE old RENAME TO new")
val newStorageLevel = getStorageLevel("new")
```
`oldStorageLevel` will be `StorageLevel(memory, deserialized, 1 replicas)` whereas `newStorageLevel` will be `StorageLevel(disk, memory, deserialized, 1 replicas)`, which is the default storage level.

### Does this PR introduce _any_ user-facing change?

Yes, now the storage level for the cache will be retained.

### How was this patch tested?

Added a unit test.

Closes #30774 from imback82/alter_table_rename_cache_fix.

Authored-by: Terry Kim <yuminkim@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2020-12-16 05:45:44 +00:00
Terry Kim 62be2483d7 [SPARK-33765][SQL] Migrate UNCACHE TABLE to use UnresolvedRelation to resolve identifier
### What changes were proposed in this pull request?

This PR proposes to migrate `UNCACHE TABLE` to use `UnresolvedRelation` to resolve the table/view identifier in Analyzer as discussed https://github.com/apache/spark/pull/30403/files#r532360022.

### Why are the changes needed?

To resolve the table/view in the analyzer.

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

Updated existing tests

Closes #30743 from imback82/uncache_v2.

Authored-by: Terry Kim <yuminkim@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2020-12-16 05:37:56 +00:00
Sander Goos ddff94fd32 [SPARK-33793][TESTS] Introduce withExecutor to ensure proper cleanup in tests
### What changes were proposed in this pull request?
This PR introduces a helper method `withExecutor` that handles the creation of an Executor object and ensures that it is always stopped in a finally block. The tests in ExecutorSuite have been refactored to use this method.
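A hedged sketch of the loan-pattern shape such a helper takes (the real `withExecutor` in ExecutorSuite builds the Executor from the arguments tests previously passed to its constructor; names here are illustrative):

```scala
package org.apache.spark.executor

trait ExecutorCleanupSketch {
  // Create the Executor, lend it to the test body, and stop it in a finally
  // block so cleanup happens even when the body throws.
  def withExecutor[T](makeExecutor: => Executor)(body: Executor => T): T = {
    val executor = makeExecutor
    try body(executor) finally executor.stop()
  }
}
```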

### Why are the changes needed?
Recently an issue was discovered that leaked Executors (which are not explicitly stopped after a test) can cause other tests to fail due to the JVM being killed after 10 min. It is therefore crucial that tests always stop the Executor. By introducing this helper method, a simple pattern is established that can be easily adopted in new tests, which reduces the risk of regressions.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Run the ExecutorSuite locally.

Closes #30783 from sander-goos/SPARK-33793-close-executors.

Authored-by: Sander Goos <sander.goos@databricks.com>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
2020-12-16 11:26:54 +09:00
Gengliang Wang dd042f58e7 [SPARK-33796][DOCS] Show hidden text from the left menu of Spark Doc
### What changes were proposed in this pull request?

If the text in the left menu of the Spark docs is too long, it will be hidden.
![sql1](https://user-images.githubusercontent.com/1097932/102249583-5ae7a580-3eb7-11eb-813c-f2e2fe019d28.jpeg)

This PR is to fix the style issue.

### Why are the changes needed?

Improve the UI of Spark documentation.

### Does this PR introduce _any_ user-facing change?

No
### How was this patch tested?

Manual test
After changes:
![sql2](https://user-images.githubusercontent.com/1097932/102249603-5fac5980-3eb7-11eb-806d-4e7b8248e6b6.jpeg)

Closes #30786 from gengliangwang/fixDocStyle.

Authored-by: Gengliang Wang <gengliang.wang@databricks.com>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
2020-12-16 10:07:35 +09:00
Max Gekk 3dfdcf4f92 [SPARK-33788][SQL] Throw NoSuchPartitionsException from HiveExternalCatalog.dropPartitions()
### What changes were proposed in this pull request?
Throw `NoSuchPartitionsException` from `ALTER TABLE .. DROP PARTITION` for non-existing partitions of a table in the V1 Hive external catalog.

### Why are the changes needed?
The behaviour of Hive external catalog deviates from V1/V2 in-memory catalogs that throw `NoSuchPartitionsException`. To improve user experience with Spark SQL, it would be better to throw the same exception.

### Does this PR introduce _any_ user-facing change?
Yes, the command throws `NoSuchPartitionsException` instead of the general exception `AnalysisException`.

### How was this patch tested?
By running tests for `ALTER TABLE .. DROP PARTITION`:
```
$ build/sbt -Phive-2.3 -Phive-thriftserver "test:testOnly *AlterTableDropPartitionSuite"
```

Closes #30778 from MaxGekk/hive-drop-partition-exception.

Authored-by: Max Gekk <max.gekk@gmail.com>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
2020-12-16 10:03:48 +09:00
David McWhorter 87c58367cd [SPARK-22256][MESOS] Introduce spark.mesos.driver.memoryOverhead
### What changes were proposed in this pull request?
This is a simple change to support allocating a specified amount of overhead memory for the driver's mesos container.  This is already supported for executors.
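A hedged usage sketch (`spark.mesos.driver.memoryOverhead` is the option this PR adds; the values are illustrative):

```scala
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.driver.memory", "4g")
  // Extra headroom (in MiB) for the driver's Mesos container, mirroring the
  // existing executor-side spark.mesos.executor.memoryOverhead option.
  .set("spark.mesos.driver.memoryOverhead", "512")
```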

### Why are the changes needed?
This is needed to keep the driver process from exceeding memory limits and being killed off when running on mesos.

### Does this PR introduce _any_ user-facing change?
Yes, it adds a `spark.mesos.driver.memoryOverhead` configuration option.  Documentation changes for this option are included in the PR.

### How was this patch tested?
Test cases covering allocation of driver memory overhead are included in the changes.

### Other notes
This is a second attempt to get this change reviewed, accepted and merged.  The original pull request was closed as stale back in January: https://github.com/apache/spark/pull/21006.
For this pull request, I took the original change by pmackles, rebased it onto the current master branch, and added a test case that was requested in the original code review.
I'm happy to make any further edits or do anything needed so that this can be included in a future spark release.  I keep having to build custom spark distributions so that we can use spark within our mesos clusters.

Closes #30739 from dmcwhorter/dmcwhorter-SPARK-22256.

Lead-authored-by: David McWhorter <david_mcwhorter@premierinc.com>
Co-authored-by: Paul Mackles <pmackles@adobe.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
2020-12-15 14:00:38 -08:00
Anton Okolnychyi 4d56d43838 [SPARK-33735][SQL] Handle UPDATE in ReplaceNullWithFalseInPredicate
### What changes were proposed in this pull request?

This PR adds `UpdateTable` to supported plans in `ReplaceNullWithFalseInPredicate`.

### Why are the changes needed?

This change allows Spark to optimize update conditions like we optimize filters.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

This PR extends the existing test cases to also cover `UpdateTable`.

Closes #30787 from aokolnychyi/spark-33735.

Authored-by: Anton Okolnychyi <aokolnychyi@apple.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
2020-12-15 13:50:58 -08:00
Wenchen Fan 40c37d69fd [SPARK-33617][SQL][FOLLOWUP] refine the default parallelism SQL config
### What changes were proposed in this pull request?

This is a followup of https://github.com/apache/spark/pull/30559 . The default parallelism config in Spark core is not good, as it's unclear where it applies. To not inherit this problem in Spark SQL, this PR refines the default parallelism SQL config, to make it clear that it only applies to leaf nodes.

### Why are the changes needed?

Make the config clearer.

### Does this PR introduce _any_ user-facing change?

It changes an unreleased config.

### How was this patch tested?

existing tests

Closes #30736 from cloud-fan/follow.

Authored-by: Wenchen Fan <wenchen@databricks.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2020-12-15 14:16:43 +00:00
Prakhar Jain 23083aa594 [SPARK-33758][SQL] Prune unrequired partitionings from AliasAwareOutputPartitionings when some columns are dropped from projection
### What changes were proposed in this pull request?
This PR tries to prune the unrequired output partitionings in cases when the columns are dropped from Project/Aggregates etc.

### Why are the changes needed?
Consider this query:
    select t1.id from t1 JOIN t2 on t1.id = t2.id

This query will have top level Project node which will just project t1.id. But the outputPartitioning of this project node will be: PartitioningCollection(HashPartitioning(t1.id), HashPartitioning(t2.id)).

But since we are not propagating t2.id column, so we can drop HashPartitioning(t2.id) from the output partitioning of Project node.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Added UTs.

Closes #30762 from prakharjain09/SPARK-33758-prune-partitioning.

Authored-by: Prakhar Jain <prakharjain09@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2020-12-15 13:46:58 +00:00
gengjiaan 58cb2bae74 [SPARK-33752][SQL] Avoid the getSimpleMessage of AnalysisException adds semicolon repeatedly
### What changes were proposed in this pull request?
The current `getSimpleMessage` of `AnalysisException` may add a semicolon repeatedly. Here is an example:
`select decode()`

The output will be:
```
org.apache.spark.sql.AnalysisException
Invalid number of arguments for function decode. Expected: 2; Found: 0;; line 1 pos 7
```
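The root cause is that position information is appended with a semicolon even when the message already ends with one. A hedged sketch of the guard (illustrative; the real change is inside `AnalysisException.getSimpleMessage`):

```scala
def appendLineInfo(message: String, lineInfo: String): String = {
  // Only insert the separating semicolon when the message does not already end with one.
  val base = if (message.endsWith(";")) message else s"$message;"
  s"$base $lineInfo"
}

// appendLineInfo("Expected: 2; Found: 0;", "line 1 pos 7")
// => "Expected: 2; Found: 0; line 1 pos 7"   (no doubled ";;")
```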

### Why are the changes needed?
Fixes a bug where a semicolon is appended repeatedly.

### Does this PR introduce _any_ user-facing change?
Yes, the message of AnalysisException will be correct.

### How was this patch tested?
Jenkins test.

Closes #30724 from beliefer/SPARK-33752.

Lead-authored-by: gengjiaan <gengjiaan@360.cn>
Co-authored-by: beliefer <beliefer@163.com>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
2020-12-15 19:20:01 +09:00
Chongguang LIU 20f6d63bc1 [SPARK-33769][SQL] Improve the next-day function of the sql component to deal with Column type
### What changes were proposed in this pull request?

The proposal of this pull request is described in this JIRA ticket: https://issues.apache.org/jira/browse/SPARK-33769

It proposes to improve the next_day function of the SQL component to accept a Column type for the parameter dayOfWeek.

### Why are the changes needed?

It makes this functionality easier to use.
Actually the signature of this function is:
> def next_day(date: Column, dayOfWeek: String): Column.

It accepts the dayOfWeek parameter as a String. However, in some cases the dayOfWeek comes from a Column, i.e., a different value for each row of the DataFrame.
A current workaround is to use the NextDay function like this:
> NextDay(dateCol.expr, dayOfWeekCol.expr).

The proposition is to add another signature for this function:
> def next_day(date: Column, dayOfWeek: Column): Column

In fact it is already the case for some other functions in this Scala object, for example:
> def date_sub(start: Column, days: Int): Column = date_sub(start, lit(days))
> def date_sub(start: Column, days: Column): Column = withExpr \{ DateSub(start.expr, days.expr) }

or

> def add_months(startDate: Column, numMonths: Int): Column = add_months(startDate, lit(numMonths))
> def add_months(startDate: Column, numMonths: Column): Column = withExpr {
>  AddMonths(startDate.expr, numMonths.expr)
>  }

This pull request is the same idea for the function next_day.

### Does this PR introduce _any_ user-facing change?

Yes
With this pull request, users of spark will have a new signature of the function:
> def next_day(date: Column, dayOfWeek: Column): Column

But the existing function signature should still work:
> def next_day(date: Column, dayOfWeek: String): Column

So this change should be backward compatible.
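A hedged usage sketch of the proposed overload (assumes a DataFrame `df` with a date column `d` and a string column `dow` holding day-of-week names):

```scala
import org.apache.spark.sql.functions.{col, next_day}

// The day of week can now vary per row, which the String signature cannot express.
val withNext = df.withColumn("next_dow", next_day(col("d"), col("dow")))
```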

### How was this patch tested?

The unit tests of the next_day function have been enhanced.
They test the dayOfWeek parameter both as a String and as a Column.
I also added a test case for the existing signature where dayOfWeek is an invalid String; this should return null.

Closes #30761 from chongguang/SPARK-33769.

Authored-by: Chongguang LIU <chongguang.liu@laposte.fr>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
2020-12-15 18:55:48 +09:00
Wenchen Fan 03042529e3 [SPARK-33273][SQL] Fix a race condition in subquery execution
### What changes were proposed in this pull request?

If we call `SubqueryExec.executeTake`, it will call `SubqueryExec.execute` which will trigger the codegen of the query plan and create an RDD. However, `SubqueryExec` already has a thread (`SubqueryExec.relationFuture`) to execute the query plan, which means we have 2 threads triggering codegen of the same query plan at the same time.

Spark codegen is not thread-safe, as we have places like `HashAggregateExec.bufferVars` that is a shared variable. The bug in `SubqueryExec` may lead to correctness bugs.

Since https://issues.apache.org/jira/browse/SPARK-33119, `ScalarSubquery` will call `SubqueryExec.executeTake`, so flaky tests start to appear.

This PR fixes the bug by reimplementing https://github.com/apache/spark/pull/30016 . We should pass the number of rows we want to collect to `SubqueryExec` at planning time, so that we can use `executeTake` inside `SubqueryExec.relationFuture`, and the caller side should always call `SubqueryExec.executeCollect`. This PR also adds checks so that we can make sure only `SubqueryExec.executeCollect` is called.

### Why are the changes needed?

fix correctness bug.

### Does this PR introduce _any_ user-facing change?

no

### How was this patch tested?

run `build/sbt "sql/testOnly *SQLQueryTestSuite  -- -z scalar-subquery-select"` more than 10 times. Previously it fails, now it passes.

Closes #30765 from cloud-fan/bug.

Authored-by: Wenchen Fan <wenchen@databricks.com>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
2020-12-15 18:29:28 +09:00
Max Gekk 141e26d65b [SPARK-33767][SQL][TESTS] Unify v1 and v2 ALTER TABLE .. DROP PARTITION tests
### What changes were proposed in this pull request?
1. Move the `ALTER TABLE .. DROP PARTITION` parsing tests to `AlterTableDropPartitionParserSuite`
2. Place v1 tests for `ALTER TABLE .. DROP PARTITION` from `DDLSuite` and v2 tests from `AlterTablePartitionV2SQLSuite` to the common trait `AlterTableDropPartitionSuiteBase`, so, the tests will run for V1, Hive V1 and V2 DS.

### Why are the changes needed?
- The unification will allow to run common `ALTER TABLE .. DROP PARTITION` tests for both DSv1 and Hive DSv1, DSv2
- We can detect missing features and differences between DSv1 and DSv2 implementations.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
By running new test suites:
```
$ build/sbt -Phive -Phive-thriftserver "test:testOnly *AlterTableDropPartitionParserSuite"
$ build/sbt -Phive -Phive-thriftserver "test:testOnly *AlterTableDropPartitionSuite"
```

Closes #30747 from MaxGekk/unify-alter-table-drop-partition-tests.

Authored-by: Max Gekk <max.gekk@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2020-12-15 05:36:57 +00:00
Terry Kim 366beda54a [SPARK-33785][SQL] Migrate ALTER TABLE ... RECOVER PARTITIONS to use UnresolvedTable to resolve the identifier
### What changes were proposed in this pull request?

This PR proposes to migrate `ALTER TABLE ... RECOVER PARTITIONS` to use `UnresolvedTable` to resolve the table identifier. This allows consistent resolution rules (temp view first, etc.) to be applied for both v1/v2 commands. More info about the consistent resolution rule proposal can be found in [JIRA](https://issues.apache.org/jira/browse/SPARK-29900) or [proposal doc](https://docs.google.com/document/d/1hvLjGA8y_W_hhilpngXVub1Ebv8RsMap986nENCFnrg/edit?usp=sharing).

Note that `ALTER TABLE ... RECOVER PARTITIONS` is not supported for v2 tables.

### Why are the changes needed?

The PR makes the resolution behavior consistent. For example,
```scala
sql("CREATE DATABASE test")
sql("CREATE TABLE spark_catalog.test.t (id bigint, val string) USING csv PARTITIONED BY (id)")
sql("CREATE TEMPORARY VIEW t AS SELECT 2")
sql("USE spark_catalog.test")
sql("ALTER TABLE t RECOVER PARTITIONS") // works fine
```
, but after this PR:
```
sql("ALTER TABLE t RECOVER PARTITIONS")
org.apache.spark.sql.AnalysisException: t is a temp view. 'ALTER TABLE ... RECOVER PARTITIONS' expects a table; line 1 pos 0
```
, which is consistent with the behavior of other commands.

### Does this PR introduce _any_ user-facing change?

After this PR, `ALTER TABLE t RECOVER PARTITIONS` in the above example is resolved to a temp view `t` first instead of `spark_catalog.test.t`.

### How was this patch tested?

Updated existing tests.

Closes #30773 from imback82/alter_table_recover_part_v2.

Authored-by: Terry Kim <yuminkim@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2020-12-15 05:23:39 +00:00
HyukjinKwon a99a47ca1d [SPARK-33748][K8S] Respect environment variables and configurations for Python executables
### What changes were proposed in this pull request?

This PR proposes:

- Respect `PYSPARK_PYTHON` and `PYSPARK_DRIVER_PYTHON` environment variables, or `spark.pyspark.python` and `spark.pyspark.driver.python` configurations, in Kubernetes just like other cluster types in Spark.

- Deprecate `spark.kubernetes.pyspark.pythonVersion` and guide users to set the environment variables and configurations for Python executables.
    NOTE that `spark.kubernetes.pyspark.pythonVersion` is already a no-op configuration without this PR. Default is `3` and other values are disallowed.

- In order for Python executable settings to be consistently used, fix the `spark.archives` option to unpack into the current working directory in the driver of Kubernetes cluster mode. This behaviour is identical to Yarn's cluster mode. By doing this, users can leverage Conda or virtualenv in cluster mode as below:

   ```python
    conda create -y -n pyspark_conda_env -c conda-forge pyarrow pandas conda-pack
    conda activate pyspark_conda_env
    conda pack -f -o pyspark_conda_env.tar.gz
    PYSPARK_PYTHON=./environment/bin/python spark-submit --archives pyspark_conda_env.tar.gz#environment app.py
   ```

- Removed several unused or useless code paths such as `extractS3Key` and `renameResourcesToLocalFS`

### Why are the changes needed?

- To provide a consistent support of PySpark by using `PYSPARK_PYTHON` and `PYSPARK_DRIVER_PYTHON` environment variables, or `spark.pyspark.python` and `spark.pyspark.driver.python` configurations.
- To provide Conda and virtualenv support via `spark.archives` options.

### Does this PR introduce _any_ user-facing change?

Yes:

- `spark.kubernetes.pyspark.pythonVersion` is deprecated.
- `PYSPARK_PYTHON` and `PYSPARK_DRIVER_PYTHON` environment variables, and `spark.pyspark.python` and `spark.pyspark.driver.python` configurations are respected.

### How was this patch tested?

Manually tested via:

```bash
minikube delete
minikube start --cpus 12 --memory 16384
kubectl create namespace spark-integration-test
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spark
  namespace: spark-integration-test
EOF
kubectl create clusterrolebinding spark-role --clusterrole=edit --serviceaccount=spark-integration-test:spark --namespace=spark-integration-test
dev/make-distribution.sh --pip --tgz -Pkubernetes
resource-managers/kubernetes/integration-tests/dev/dev-run-integration-tests.sh --spark-tgz `pwd`/spark-3.2.0-SNAPSHOT-bin-3.2.0.tgz  --service-account spark --namespace spark-integration-test
```

Unittests were also added.

Closes #30735 from HyukjinKwon/SPARK-33748.

Authored-by: HyukjinKwon <gurwls223@apache.org>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
2020-12-15 08:56:45 +09:00
Chao Sun 49d3256497 [SPARK-33653][SQL] DSv2: REFRESH TABLE should recache the table itself
### What changes were proposed in this pull request?

This changes DSv2 refresh table semantics to also recache the target table itself.

### Why are the changes needed?

Currently "REFRESH TABLE" in DSv2 only invalidate all caches referencing the table. With #30403 merged which adds support for caching a DSv2 table, we should also recache the target table itself to make the behavior consistent with DSv1.

### Does this PR introduce _any_ user-facing change?

Yes, now refreshing a table in DSv2 also recaches the target table itself.

### How was this patch tested?

Added coverage of this new behavior in the existing UT for v2 refresh table command

Closes #30742 from sunchao/SPARK-33653.

Authored-by: Chao Sun <sunchao@apple.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
2020-12-14 15:18:50 -08:00
Max Gekk f156718587 [SPARK-33777][SQL] Sort output of V2 SHOW PARTITIONS
### What changes were proposed in this pull request?
List partitions returned by the V2 `SHOW PARTITIONS` command in alphabetical order.

### Why are the changes needed?
To have the same behavior as:
1. V1 in-memory catalog, see a28ed86a38/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/InMemoryCatalog.scala (L546)
2. V1 Hive catalogs, see fab2995972/sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala (L715)

### Does this PR introduce _any_ user-facing change?
Yes, after the changes, V2 SHOW PARTITIONS sorts its output.

### How was this patch tested?
Added new UT to the base trait `ShowPartitionsSuiteBase` which contains tests for V1 and V2.

Closes #30764 from MaxGekk/sort-show-partitions.

Authored-by: Max Gekk <max.gekk@gmail.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
2020-12-14 14:28:47 -08:00
Yuming Wang 412d86e711 [SPARK-33771][SQL][TESTS] Fix Invalid value for HourOfAmPm when testing on JDK 14
### What changes were proposed in this pull request?

This PR fixes the invalid value for HourOfAmPm when testing on JDK 14.

### Why are the changes needed?

To be able to run the tests on JDK 14.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

N/A

Closes #30754 from wangyum/SPARK-33771.

Authored-by: Yuming Wang <yumwang@ebay.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
2020-12-14 13:34:23 -08:00
Holden Karau 5885cc15ca [SPARK-33261][K8S] Add a developer API for custom feature steps
### What changes were proposed in this pull request?

Add a developer API for custom driver & executor feature steps.

### Why are the changes needed?

While we allow templates for the basis of pod creation, some deployments need more flexibility in how the pods are configured. This adds a developer API for custom deployments.
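A hedged sketch of a custom step (assumes the developer API exposes the existing `KubernetesFeatureConfigStep` trait with a `configurePod` hook; names and imports may differ):

```scala
import io.fabric8.kubernetes.api.model.PodBuilder
import org.apache.spark.deploy.k8s.SparkPod
import org.apache.spark.deploy.k8s.features.KubernetesFeatureConfigStep

// Hypothetical step that adds a label to every pod it is configured for.
class AddTeamLabelStep extends KubernetesFeatureConfigStep {
  override def configurePod(pod: SparkPod): SparkPod = {
    val labeled = new PodBuilder(pod.pod)
      .editOrNewMetadata().addToLabels("team", "data-eng").endMetadata()
      .build()
    SparkPod(labeled, pod.container)
  }
}
```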

### Does this PR introduce _any_ user-facing change?

New developer API.

### How was this patch tested?

Extended tests to verify custom step is applied when configured.

Closes #30206 from holdenk/SPARK-33261-allow-people-to-extend-pod-feature-steps.

Authored-by: Holden Karau <hkarau@apple.com>
Signed-off-by: Holden Karau <hkarau@apple.com>
2020-12-14 12:05:28 -08:00
Anton Okolnychyi bb60fb1bbd [SPARK-33779][SQL][FOLLOW-UP] Fix Java Linter error
### What changes were proposed in this pull request?

This PR removes unused imports.

### Why are the changes needed?

These changes are required to fix the build.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Via `dev/lint-java`.

Closes #30767 from aokolnychyi/fix-linter.

Authored-by: Anton Okolnychyi <aokolnychyi@apple.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
2020-12-14 11:39:42 -08:00
Anton Okolnychyi 82aca7eb8f [SPARK-33779][SQL] DataSource V2: API to request distribution and ordering on write
### What changes were proposed in this pull request?

This PR adds connector interfaces proposed in the [design doc](https://docs.google.com/document/d/1X0NsQSryvNmXBY9kcvfINeYyKC-AahZarUqg3nS1GQs/edit#) for SPARK-23889.

**Note**: This PR contains a subset of changes discussed in PR #29066.

### Why are the changes needed?

Data sources should be able to request a specific distribution and ordering of data on write. In particular, these scenarios are considered useful:
- global sort
- cluster data and sort within partitions
- local sort within partitions
- no sort

Please see the design doc above for a more detailed explanation of requirements.

### Does this PR introduce _any_ user-facing change?

This PR introduces public changes to the DS V2 by adding a logical write abstraction as we have on the read path as well as additional interfaces to represent distribution and ordering of data (please see the doc for more info).

The existing `Distribution` interface in `read` package is read-specific and not flexible enough like discussed in the design doc. The current proposal is to evolve these interfaces separately until they converge.

### How was this patch tested?

This patch adds only interfaces.

Closes #30706 from aokolnychyi/spark-23889-interfaces.

Authored-by: Anton Okolnychyi <aokolnychyi@apple.com>
Signed-off-by: Ryan Blue <blue@apache.org>
2020-12-14 10:54:18 -08:00
ulysses-you 839d6899ad [SPARK-33733][SQL] PullOutNondeterministic should check and collect deterministic field
### What changes were proposed in this pull request?

The deterministic field is wider than `NonDeterministic`; we should keep the same range between pull-out and check analysis.

### Why are the changes needed?

For example
```
select * from values(1), (4) as t(c1) order by java_method('java.lang.Math', 'abs', c1)
```

We will get an exception since `java_method`'s deterministic field is false but it is not a `NonDeterministic` expression:
```
Exception in thread "main" org.apache.spark.sql.AnalysisException: nondeterministic expressions are only allowed in
Project, Filter, Aggregate or Window, found:
 java_method('java.lang.Math', 'abs', t.`c1`) ASC NULLS FIRST
in operator Sort [java_method(java.lang.Math, abs, c1#1) ASC NULLS FIRST], true
               ;;
```

### Does this PR introduce _any_ user-facing change?

Yes.

### How was this patch tested?

Add test.

Closes #30703 from ulysses-you/SPARK-33733.

Authored-by: ulysses-you <ulyssesyou18@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2020-12-14 14:35:24 +00:00
angerszhu 5f9a7fea06 [SPARK-33428][SQL] Conv UDF use BigInt to avoid Long value overflow
### What changes were proposed in this pull request?
Using a Long value to store the encoded value can overflow and return an unexpected result. Use BigInt instead of Long to avoid the overflow and make the logic simpler.

### Why are the changes needed?
Fix the value overflow issue.

### Does this PR introduce _any_ user-facing change?
People can use the `conv` function to convert values bigger than Long.MaxValue.

### How was this patch tested?
Added UT

#### BenchMark
```
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.spark.sql.execution.benchmark

import scala.util.Random

import org.apache.spark.benchmark.Benchmark
import org.apache.spark.sql.functions._
object ConvFuncBenchMark extends SqlBasedBenchmark {

  val charset =
    Array[String]("0", "1", "2", "3", "4", "5", "6", "7", "8", "9",
      "A", "B", "C", "D", "E", "F", "G",
      "H", "I", "J", "K", "L", "M", "N",
      "O", "P", "Q", "R", "S", "T",
      "U", "V", "W", "X", "Y", "Z")

  def constructString(from: Int, length: Int): String = {
    val chars = charset.slice(0, from)
    (0 to length).map(x => {
      val v = Random.nextInt(from)
      chars(v)
    }).mkString("")
  }

  private def doBenchmark(cardinality: Long, length: Int, from: Int, toBase: Int): Unit = {
    spark.range(cardinality)
      .withColumn("str", lit(constructString(from, length)))
      .select(conv(col("str"), from, toBase))
      .noop()
  }

  /**
   * Main process of the whole benchmark.
   * Implementations of this method are supposed to use the wrapper method `runBenchmark`
   * for each benchmark scenario.
   */
  override def runBenchmarkSuite(mainArgs: Array[String]): Unit = {
    val N = 1000000L
    val benchmark = new Benchmark("conv", N, output = output)
    benchmark.addCase("length 10 from 2 to 16") { _ =>
      doBenchmark(N, 10, 2, 16)
    }

    benchmark.addCase("length 10 from 2 to 10") { _ =>
      doBenchmark(N, 10, 2, 10)
    }

    benchmark.addCase("length 10 from 10 to 16") { _ =>
      doBenchmark(N, 10, 10, 16)
    }

    benchmark.addCase("length 10 from 10 to 36") { _ =>
      doBenchmark(N, 10, 10, 36)
    }

    benchmark.addCase("length 10 from 16 to 10") { _ =>
      doBenchmark(N, 10, 10, 10)
    }

    benchmark.addCase("length 10 from 16 to 36") { _ =>
      doBenchmark(N, 10, 16, 36)
    }

    benchmark.addCase("length 10 from 36 to 10") { _ =>
      doBenchmark(N, 10, 36, 10)
    }

    benchmark.addCase("length 10 from 36 to 16") { _ =>
      doBenchmark(N, 10, 36, 16)
    }

    //
    benchmark.addCase("length 20 from 10 to 16") { _ =>
      doBenchmark(N, 20, 10, 16)
    }

    benchmark.addCase("length 20 from 10 to 36") { _ =>
      doBenchmark(N, 20, 10, 36)
    }

    benchmark.addCase("length 30 from 10 to 16") { _ =>
      doBenchmark(N, 30, 10, 16)
    }

    benchmark.addCase("length 30 from 10 to 36") { _ =>
      doBenchmark(N, 30, 10, 36)
    }

    //
    benchmark.addCase("length 20 from 16 to 10") { _ =>
      doBenchmark(N, 20, 16, 10)
    }

    benchmark.addCase("length 20 from 16 to 36") { _ =>
      doBenchmark(N, 20, 16, 36)
    }

    benchmark.addCase("length 30 from 16 to 10") { _ =>
      doBenchmark(N, 30, 16, 10)
    }

    benchmark.addCase("length 30 from 16 to 36") { _ =>
      doBenchmark(N, 30, 16, 36)
    }

    benchmark.run()
  }

}
```

Result with patch :
```
Java HotSpot(TM) 64-Bit Server VM 1.8.0_191-b12 on Mac OS X 10.14.6
Intel(R) Core(TM) i5-8259U CPU  2.30GHz
conv:                                     Best Time(ms)   Avg Time(ms)   Stdev(ms)    Rate(M/s)   Per Row(ns)   Relative
------------------------------------------------------------------------------------------------------------------------
length 10 from 2 to 16                               54             73          18         18.7          53.6       1.0X
length 10 from 2 to 10                               43             47           5         23.5          42.5       1.3X
length 10 from 10 to 16                              39             47          12         25.5          39.2       1.4X
length 10 from 10 to 36                              38             42           3         26.5          37.7       1.4X
length 10 from 16 to 10                              39             41           3         25.7          38.9       1.4X
length 10 from 16 to 36                              36             41           4         27.6          36.3       1.5X
length 10 from 36 to 10                              38             40           2         26.3          38.0       1.4X
length 10 from 36 to 16                              37             39           2         26.8          37.2       1.4X
length 20 from 10 to 16                              36             39           2         27.4          36.5       1.5X
length 20 from 10 to 36                              37             39           2         27.2          36.8       1.5X
length 30 from 10 to 16                              37             39           2         27.0          37.0       1.4X
length 30 from 10 to 36                              36             38           2         27.5          36.3       1.5X
length 20 from 16 to 10                              35             38           2         28.3          35.4       1.5X
length 20 from 16 to 36                              34             38           3         29.2          34.3       1.6X
length 30 from 16 to 10                              38             40           2         26.3          38.1       1.4X
length 30 from 16 to 36                              37             38           1         27.2          36.8       1.5X
```
Result without patch:
```
Java HotSpot(TM) 64-Bit Server VM 1.8.0_191-b12 on Mac OS X 10.14.6
Intel(R) Core(TM) i5-8259U CPU  2.30GHz
conv:                                     Best Time(ms)   Avg Time(ms)   Stdev(ms)    Rate(M/s)   Per Row(ns)   Relative
------------------------------------------------------------------------------------------------------------------------
length 10 from 2 to 16                               66            101          29         15.1          66.1       1.0X
length 10 from 2 to 10                               50             55           5         20.2          49.5       1.3X
length 10 from 10 to 16                              46             51           5         21.8          45.9       1.4X
length 10 from 10 to 36                              43             48           4         23.4          42.7       1.5X
length 10 from 16 to 10                              44             47           4         22.9          43.7       1.5X
length 10 from 16 to 36                              40             44           2         24.7          40.5       1.6X
length 10 from 36 to 10                              40             44           4         25.0          40.1       1.6X
length 10 from 36 to 16                              41             43           2         24.3          41.2       1.6X
length 20 from 10 to 16                              39             41           2         25.7          38.9       1.7X
length 20 from 10 to 36                              40             42           2         24.9          40.2       1.6X
length 30 from 10 to 16                              39             40           1         25.9          38.6       1.7X
length 30 from 10 to 36                              40             41           1         25.0          40.0       1.7X
length 20 from 16 to 10                              40             41           1         25.1          39.8       1.7X
length 20 from 16 to 36                              40             42           2         25.2          39.7       1.7X
length 30 from 16 to 10                              39             42           2         25.6          39.0       1.7X
length 30 from 16 to 36                              39             40           2         25.7          38.8       1.7X
```

Closes #30350 from AngersZhuuuu/SPARK-33428.

Authored-by: angerszhu <angers.zhu@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2020-12-14 14:32:08 +00:00
Holden Karau bf2c88ccae [SPARK-33716][K8S] Fix potential race condition during pod termination
### What changes were proposed in this pull request?

Check that the pod state is not pending or running even if there is a deletion timestamp.

### Why are the changes needed?

This can occur when the pod state and deletion timestamp are not updated by etcd in sync & we get a pod snapshot during an inconsistent view.

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

Manual testing with a local Minikube instance on an overloaded computer that produced out-of-sync updates.

Closes #30693 from holdenk/SPARK-33716-decommissioning-race-condition-during-pod-snapshot.

Authored-by: Holden Karau <hkarau@apple.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
2020-12-14 02:09:59 -08:00
yangjie01 cd0356df9e [SPARK-33673][SQL] Avoid push down partition filters to ParquetScan for DataSourceV2
### What changes were proposed in this pull request?
As described in SPARK-33673, some test suites in `ParquetV2SchemaPruningSuite` fail when `parquet.version` is set to 1.11.1 because Parquet returns empty results for non-existent columns since PARQUET-1765.

This PR changes `ParquetScanBuilder` to use `readDataSchema()` instead of `schema` when building `pushedParquetFilters`, to avoid pushing partition filters down to `ParquetScan` for `DataSourceV2`.
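
A toy sketch of the idea, with simplified types rather than the actual `ParquetScanBuilder` code: only predicates on columns present in the read data schema are candidates for pushdown, so partition-column predicates stay out of the Parquet filters.

```scala
// Simplified stand-in for a source filter referencing a single column.
case class ColumnFilter(column: String, predicate: String)

// Keep only filters whose column belongs to the data schema read from the files;
// filters on partition columns (which do not exist in the files) are not pushed.
def pushableFilters(filters: Seq[ColumnFilter], readDataColumns: Set[String]): Seq[ColumnFilter] =
  filters.filter(f => readDataColumns.contains(f.column))

// Example: the filter on the partition column "dt" is not pushed to Parquet.
// pushableFilters(Seq(ColumnFilter("id", "> 1"), ColumnFilter("dt", "= '2020-12-14'")),
//                 Set("id", "name"))  // => Seq(ColumnFilter("id", "> 1"))
```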

### Why are the changes needed?
Prepare for upgrading to Parquet 1.11.1.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?

- Pass the Jenkins or GitHub Action

- Manual test as follows:

```
mvn -Dtest=none -DwildcardSuites=org.apache.spark.sql.execution.datasources.parquet.ParquetV2SchemaPruningSuite -Dparquet.version=1.11.1 test -pl sql/core -am
```

**Before**

```
Run completed in 3 minutes, 13 seconds.
Total number of tests run: 134
Suites: completed 2, aborted 0
Tests: succeeded 120, failed 14, canceled 0, ignored 0, pending 0
*** 14 TESTS FAILED ***
```

**After**

```
Run completed in 3 minutes, 46 seconds.
Total number of tests run: 134
Suites: completed 2, aborted 0
Tests: succeeded 134, failed 0, canceled 0, ignored 0, pending 0
All tests passed.
```

Closes #30652 from LuciferYang/SPARK-33673.

Authored-by: yangjie01 <yangjie01@baidu.com>
Signed-off-by: Yuming Wang <yumwang@ebay.com>
2020-12-14 17:51:40 +08:00
Terry Kim a84c8d842c [SPARK-33751][SQL] Migrate ALTER VIEW ... AS command to use UnresolvedView to resolve the identifier
### What changes were proposed in this pull request?

This PR migrates `ALTER VIEW ... AS` to use `UnresolvedView` to resolve the view identifier. This allows consistent resolution rules (temp view first, etc.) to be applied for both v1/v2 commands. More info about the consistent resolution rule proposal can be found in [JIRA](https://issues.apache.org/jira/browse/SPARK-29900) or [proposal doc](https://docs.google.com/document/d/1hvLjGA8y_W_hhilpngXVub1Ebv8RsMap986nENCFnrg/edit?usp=sharing).

The `TempViewOrV1Table` extractor in `ResolveSessionCatalog.scala` can now be removed as well.
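
A toy illustration of the lookup order the consistent resolution rules aim for; the types below are hypothetical stand-ins, not Spark's analyzer classes:

```scala
// Hypothetical relation types standing in for resolved views.
sealed trait ResolvedViewLike
case class ResolvedTempView(name: String) extends ResolvedViewLike
case class ResolvedCatalogView(name: String) extends ResolvedViewLike

// A temporary view with a matching name wins over a view registered in the catalog.
def resolveView(
    name: String,
    tempViews: Map[String, ResolvedTempView],
    catalogViews: Map[String, ResolvedCatalogView]): Option[ResolvedViewLike] =
  tempViews.get(name).orElse(catalogViews.get(name))
```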

### Why are the changes needed?

To use `UnresolvedView` for view resolution.

### Does this PR introduce _any_ user-facing change?

The exception message changes if a table is found instead of a view:
```
// OLD
"`tab1` is not a view"
```
```
// NEW
"tab1 is a table. 'ALTER VIEW ... AS' expects a view."
```

### How was this patch tested?

Updated existing tests.

Closes #30723 from imback82/alter_view_as_statement.

Authored-by: Terry Kim <yuminkim@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2020-12-14 08:39:01 +00:00
Linhong Liu b7c8210135 [SPARK-33142][SPARK-33647][SQL][FOLLOW-UP] Add docs and test cases
### What changes were proposed in this pull request?
Addressed comments in PR #30567, including:
1. add test case for SPARK-33647 and SPARK-33142
2. add migration guide
3. add `getRawTempView` and `getRawGlobalTempView` to return the raw view info (i.e. TemporaryViewRelation)
4. other minor code cleanup

### Why are the changes needed?
Code cleanup and more test cases

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Existing and newly added test cases

Closes #30666 from linhongliu-db/SPARK-33142-followup.

Lead-authored-by: Linhong Liu <linhong.liu@databricks.com>
Co-authored-by: Linhong Liu <67896261+linhongliu-db@users.noreply.github.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2020-12-14 08:31:50 +00:00
xuewei.linxuewei e7fe92f129 [SPARK-33546][SQL] Enable row format file format validation in CREATE TABLE LIKE
### What changes were proposed in this pull request?

[SPARK-33546] stated that there are three inconsistent behaviors for CREATE TABLE LIKE.

1. CREATE TABLE LIKE does not validate the user-specified Hive serde, e.g., STORED AS PARQUET can't be used with ROW FORMAT SERDE.
2. CREATE TABLE LIKE requires STORED AS and ROW FORMAT SERDE to be specified together, which is not necessary.
3. CREATE TABLE LIKE does not respect the default hive serde.

This PR fixes No.1; after investigation, No.2 and No.3 turn out not to be issues. An illustrative example of No.1 is sketched below.
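
A hedged sketch of inconsistency No.1; the DDL text and serde class are illustrative assumptions, not taken from this PR's tests:

```scala
import org.apache.spark.sql.SparkSession

// Requires Hive support on the classpath; purely illustrative.
val spark = SparkSession.builder()
  .master("local[1]")
  .appName("create-table-like-validation")
  .enableHiveSupport()
  .getOrCreate()

spark.sql("CREATE TABLE src (id INT)")

// With this change, combining ROW FORMAT SERDE with an incompatible STORED AS file
// format in CREATE TABLE ... LIKE is expected to fail validation instead of silently
// producing an inconsistent table definition.
spark.sql(
  """CREATE TABLE dst LIKE src
    |ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
    |STORED AS PARQUET""".stripMargin)
```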

Within Hive, `CREATE TABLE abc ... ROW FORMAT SERDE 'xxx.xxx.SerdeClass'` (without STORED AS) will have the
following result: it uses the user-specified SerDe class and fetches the default input/output formats from the default textfile format.

```
SerDe Library:          xxx.xxx.SerdeClass
InputFormat:            org.apache.hadoop.mapred.TextInputFormat
OutputFormat:           org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
```

But `CREATE TABLE dst LIKE src ROW FORMAT SERDE 'xxx.xxx.SerdeClass'` (without STORED AS) just ignores the user-specified SerDe class and uses the (input format, output format, SerDe class) from the src table.

It's better to just throw an exception on such ambiguous behavior, so No.2 is not an issue, but in this PR we add some comments.

For No.3, CreateTableLikeCommand in fact uses the following logic to fall back to the src table's storageFormat when the current fileFormat.inputFormat is empty:

```
val newStorage = if (fileFormat.inputFormat.isDefined) {
      fileFormat
    } else {
      sourceTableDesc.storage.copy(locationUri = fileFormat.locationUri)
    }
```

If we tried to fill the new target table with HiveSerDe.getDefaultStorage when the file format and row format are not explicitly specified, it would break the CREATE TABLE LIKE semantics.

### Why are the changes needed?

Bug Fix.

### Does this PR introduce any user-facing change?

No.

### How was this patch tested?

Added UT and Existing UT.

Closes #30705 from leanken/leanken-SPARK-33546.

Authored-by: xuewei.linxuewei <xuewei.linxuewei@alibaba-inc.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2020-12-14 08:27:18 +00:00
Max Gekk 817f58ddcb [SPARK-33768][SQL] Remove retainData from AlterTableDropPartition
### What changes were proposed in this pull request?
Remove the `retainData` parameter from the logical node `AlterTableDropPartition`.

### Why are the changes needed?
The `AlterTableDropPartition` command reflects the sql statement (see SqlBase.g4):
```
    | ALTER (TABLE | VIEW) multipartIdentifier
        DROP (IF EXISTS)? partitionSpec (',' partitionSpec)* PURGE?    #dropTablePartitions
```
but Spark doesn't allow specifying data retention. So, the parameter can be removed to improve code maintainability.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
By running the test suite `DDLParserSuite`.

Closes #30748 from MaxGekk/remove-retainData.

Authored-by: Max Gekk <max.gekk@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2020-12-14 08:16:33 +00:00
Max Gekk 9160d59ae3 [SPARK-33770][SQL][TESTS] Fix the ALTER TABLE .. DROP PARTITION tests that delete files out of partition path
### What changes were proposed in this pull request?
Modify the tests that add partitions with `LOCATION` where the number of nested folders in `LOCATION` doesn't match the number of partitioned columns. In that case, `ALTER TABLE .. DROP PARTITION` tries to access (delete) a folder outside the "base" path in `LOCATION`.

The problem belongs to Hive's MetaStore method `drop_partition_common`:
8696c82d07/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java (L4876)
which tries to delete empty partition sub-folders recursively, starting from the deepest partition sub-folder up to the base folder. When the number of sub-folders is not equal to the number of partitioned columns `part_vals.size()`, the method will try to list and delete folders outside the base path.
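
An illustrative, self-contained sketch of the failure mode (plain `java.io.File`, not Hive's actual `drop_partition_common`):

```scala
import java.io.File

// Walk up from the deepest partition sub-folder, deleting empty folders once per
// partition column. If the LOCATION has fewer nested folders than partition columns,
// the walk steps above the intended base path and lists/deletes folders there.
def deleteEmptyParents(deepestSubFolder: File, numPartitionColumns: Int): Unit = {
  var current = deepestSubFolder
  (0 until numPartitionColumns).foreach { _ =>
    if (current.isDirectory && Option(current.listFiles()).exists(_.isEmpty)) {
      current.delete()                // remove the empty folder
    }
    current = current.getParentFile   // may escape the base path when counts mismatch
  }
}
```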

### Why are the changes needed?
To fix test failures like https://github.com/apache/spark/pull/30643#issuecomment-743774733:
```
org.apache.spark.sql.hive.execution.command.AlterTableAddPartitionSuite.ALTER TABLE .. ADD PARTITION Hive V1: SPARK-33521: universal type conversions of partition values
sbt.ForkMain$ForkError: org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: File file:/home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/spark-832cb19c-65fd-41f3-ae0b-937d76c07897 does not exist;
	at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:112)
	at org.apache.spark.sql.hive.HiveExternalCatalog.dropPartitions(HiveExternalCatalog.scala:1014)
...
Caused by: sbt.ForkMain$ForkError: org.apache.hadoop.hive.metastore.api.MetaException: File file:/home/jenkins/workspace/SparkPullRequestBuilder/target/tmp/spark-832cb19c-65fd-41f3-ae0b-937d76c07897 does not exist
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_partition_with_environment_context(HiveMetaStore.java:3381)
	at sun.reflect.GeneratedMethodAccessor304.invoke(Unknown Source)
```

The issue can be reproduced by the following steps:
1. Create a base folder, for example: `/Users/maximgekk/tmp/part-location`
2. Create a sub-folder in the base folder and drop permissions for it:
```
$ mkdir /Users/maximgekk/tmp/part-location/aaa
$ chmod a-rwx /Users/maximgekk/tmp/part-location/aaa
$ ls -al /Users/maximgekk/tmp/part-location
total 0
drwxr-xr-x   3 maximgekk  staff    96 Dec 13 18:42 .
drwxr-xr-x  33 maximgekk  staff  1056 Dec 13 18:32 ..
d---------   2 maximgekk  staff    64 Dec 13 18:42 aaa
```
3. Create a table with a partition folder in the base folder:
```sql
spark-sql> create table tbl (id int) partitioned by (part0 int, part1 int);
spark-sql> alter table tbl add partition (part0=1,part1=2) location '/Users/maximgekk/tmp/part-location/tbl';
```
4. Try to drop this partition:
```
spark-sql> alter table tbl drop partition (part0=1,part1=2);
20/12/13 18:46:07 ERROR HiveClientImpl:
======================
Attempt to drop the partition specs in table 'tbl' database 'default':
Map(part0 -> 1, part1 -> 2)
In this attempt, the following partitions have been dropped successfully:

The remaining partitions have not been dropped:
[1, 2]
======================

Error in query: org.apache.hadoop.hive.ql.metadata.HiveException: Error accessing file:/Users/maximgekk/tmp/part-location/aaa;
org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: Error accessing file:/Users/maximgekk/tmp/part-location/aaa;
```
The command fails because it tries to access the sub-folder `aaa`, which is outside the partition path `/Users/maximgekk/tmp/part-location/tbl`.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
By running the affected tests from a local IDEA environment that does not have access to folders outside the partition paths.

Closes #30752 from MaxGekk/fix-drop-partition-location.

Lead-authored-by: Max Gekk <max.gekk@gmail.com>
Co-authored-by: Maxim Gekk <max.gekk@gmail.com>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
2020-12-14 15:56:46 +09:00
Kent Yao 4d47ac4b4b [SPARK-33705][SQL][TEST] Fix HiveThriftHttpServerSuite flakiness
### What changes were proposed in this pull request?
To fix the flaky tests:

https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/132345/testReport/
```
org.apache.spark.sql.hive.thriftserver.HiveThriftHttpServerSuite.JDBC query execution
org.apache.spark.sql.hive.thriftserver.HiveThriftHttpServerSuite.Checks Hive version
org.apache.spark.sql.hive.thriftserver.HiveThriftHttpServerSuite.SPARK-24829 Checks cast as float
```

The root cause here is a jar conflict issue:
`NewCookie.isHttpOnly` is not defined in the conflicting `jsr311-api.jar`.
The transitive artifact `jsr311-api.jar` of `hadoop-client` is excluded on the Maven side. See https://issues.apache.org/jira/browse/SPARK-27179.

The Jenkins PR builder and GitHub Actions use SBT as the build tool.

First, the exclusion rule from Maven is not followed by SBT, so I was able to see `jsr311-api.jar` from the Maven cache being added to the classpath directly. **This seems to be a bug in the `sbt-pom-reader` plugin, but I'm not entirely sure.**

Then I added an `ExcludeRule` for the `hive-thriftserver` module on the SBT side and did see `jsr311-api.jar` gone, but the CI jobs still failed with the same error.

I added a trace log in ThriftHttpServlet:

```s
ERROR ThriftHttpServlet: !!!!!!!!! Suspect???????? --->
file:/home/jenkins/workspace/SparkPullRequestBuilder/assembly/target/scala-2.12/jars/jsr311-api-1.1.1.jar
```
And the log pointed out that the assembly phase copied it to `assembly/target/scala-2.12/jars/`, which will be added to the classpath too. With the help of the SBT `dependencyTree` tool, I saw `jsr311-api` again as a transitive dependency of `jersey-core` from the `yarn` module with a `test` scope. So **this seems to be another bug, on the SBT side, in the `sbt-assembly` plugin.** It copied a test-scope transitive artifact to the assembly output.

In this PR, I defined some rules in SparkBuild.scala to bypass the potential bugs from the SBT side.

First, exclude `jsr311` from the whole project, then add it back separately to the YARN module for SBT, as sketched below.
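
A hypothetical SparkBuild.scala-style sketch of that approach; the module layout and version shown are placeholders rather than the exact change:

```scala
import sbt._
import sbt.Keys._

// Exclude jsr311-api everywhere so SBT does not pull it in behind Maven's back.
lazy val jsr311Exclusion = ExclusionRule("javax.ws.rs", "jsr311-api")

lazy val sharedSettings = Seq(
  excludeDependencies += jsr311Exclusion
)

// Re-add the artifact explicitly only for the YARN module, which actually needs it.
lazy val yarn = (project in file("resource-managers/yarn"))
  .settings(sharedSettings)
  .settings(
    libraryDependencies += "javax.ws.rs" % "jsr311-api" % "1.1.1"
  )
```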

Additionally, the HiveThriftServerSuites were also refactored to reduce flakiness, though that is not related to the bugs found so far.

### Why are the changes needed?

Fix the flaky tests listed above.

### Does this PR introduce _any_ user-facing change?

NO
### How was this patch tested?

Passing Jenkins and GitHub Actions.

Closes #30643 from yaooqinn/HiveThriftHttpServerSuite.

Authored-by: Kent Yao <yaooqinn@hotmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2020-12-14 05:14:38 +00:00
Kousuke Saruta b135db3b1a
[SPARK-33757][INFRA][R][FOLLOWUP] Provide more simple solution
### What changes were proposed in this pull request?

This PR proposes a better solution for the R build failure on GitHub Actions.
The issue is solved in #30737 but I noticed the following two things.

* We can use the latest `usethis` if we install additional libraries on the GitHub Actions environment.
* For tests on AppVeyor, `usethis` is not necessary, so I partially revert the previous change.

### Why are the changes needed?

To provide a simpler solution.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Confirmed on GitHub Actions and AppVeyor on my account.

Closes #30753 from sarutak/followup-SPARK-33757.

Authored-by: Kousuke Saruta <sarutak@oss.nttdata.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
2020-12-13 17:27:39 -08:00
Gengliang Wang 6e862792fb [SPARK-33723][SQL] ANSI mode: Casting String to Date should throw exception on parse error
### What changes were proposed in this pull request?

Currently, when casting a string to the timestamp type in ANSI mode, Spark throws a runtime exception on a parsing error.
However, the result of casting a string to the date type is always null. We should throw an exception on a parsing error as well.
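
A hedged sketch of the behavior change using the public API; the malformed date literal is an assumed example, not taken from this PR's tests:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .master("local[1]")
  .appName("ansi-cast-string-to-date")
  .getOrCreate()

// Enable ANSI mode.
spark.conf.set("spark.sql.ansi.enabled", "true")

// Before this change: the malformed date silently becomes NULL even in ANSI mode.
// After this change: the cast throws a runtime exception on the parse error,
// matching the existing behavior of CAST(... AS TIMESTAMP).
spark.sql("SELECT CAST('2020-99-99' AS DATE)").show()
```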

### Why are the changes needed?

Add missing feature for ANSI mode

### Does this PR introduce _any_ user-facing change?

Yes. In ANSI mode, casting a string to date will throw an exception on a parsing error.

### How was this patch tested?

Unit test

Closes #30687 from gengliangwang/castDate.

Authored-by: Gengliang Wang <gengliang.wang@databricks.com>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
2020-12-14 10:22:37 +09:00