Commit graph

25423 commits

Wenchen Fan 9407fba037 [SPARK-29412][SQL] refine the document of v2 session catalog config
### What changes were proposed in this pull request?

Refine the document of v2 session catalog config, to clearly explain what it is, when it should be used and how to implement it.

### Why are the changes needed?

Make this config more understandable

### Does this PR introduce any user-facing change?

No

### How was this patch tested?

Pass the Jenkins with the newly updated test cases.

Closes #26071 from cloud-fan/config.

Authored-by: Wenchen Fan <wenchen@databricks.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2019-10-15 10:18:58 +08:00
herman 1f1443ebb2 [SPARK-29347][SQL] Add JSON serialization for external Rows
### What changes were proposed in this pull request?
This PR adds JSON serialization for Spark external Rows.

### Why are the changes needed?
This is to be used for observable metrics where the `StreamingQueryProgress` contains a map of observed metrics rows which needs to be serialized in some cases.

### Does this PR introduce any user-facing change?
Yes, a user can call `toJson` on rows returned when collecting a DataFrame to the driver.
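
A minimal, hedged Scala sketch of how this might look (assuming the `toJson`-style accessor described above; the exact method name and return type are not confirmed here):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("row-json").getOrCreate()
import spark.implicits._

// Collect a small DataFrame to the driver and serialize each external Row to JSON.
val rows = Seq((1, "a"), (2, "b")).toDF("id", "name").collect()
rows.foreach(row => println(row.toJson))  // hypothetical accessor name, per the description above
```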

### How was this patch tested?
Added a new test suite: `RowJsonSuite` that should test this.

Closes #26013 from hvanhovell/SPARK-29347.

Authored-by: herman <herman@databricks.com>
Signed-off-by: herman <herman@databricks.com>
2019-10-15 00:24:17 +02:00
Dongjoon Hyun ff9fcd501c Revert "[SPARK-29107][SQL][TESTS] Port window.sql (Part 1)"
This reverts commit 81915dacc4.
2019-10-14 15:15:32 -07:00
Wenchen Fan bfa09cf049 [SPARK-29463][SQL] move v2 commands to a new file
### What changes were proposed in this pull request?

move the v2 command logical plans from `basicLogicalOperators.scala` to a new file `v2Commands.scala`

### Why are the changes needed?

As we keep adding v2 commands, `basicLogicalOperators.scala` grows bigger and bigger. It's better to have a separate file for them.

### Does this PR introduce any user-facing change?

no

### How was this patch tested?

not needed

Closes #26111 from cloud-fan/command.

Authored-by: Wenchen Fan <wenchen@databricks.com>
Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
2019-10-14 14:09:21 -07:00
Thomas Graves a42d894a40 [SPARK-29417][CORE] Resource Scheduling - add TaskContext.resource java api
### What changes were proposed in this pull request?
We added a TaskContext.resources() API, but I realized it returns a Scala Map, which is not ideal for access from Java. Here I add a resourcesJMap function which returns a java.util.Map to make it easily accessible from Java.
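
A hedged Scala sketch of the two access paths (the resource map will be empty unless the application actually requested resources such as GPUs):

```scala
import org.apache.spark.TaskContext
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("resources-jmap").getOrCreate()
val sc = spark.sparkContext

sc.parallelize(1 to 4, 2).foreachPartition { _ =>
  val scalaMap = TaskContext.get().resources()      // scala.collection.Map, natural from Scala
  val javaMap  = TaskContext.get().resourcesJMap()  // java.util.Map, the new Java-friendly accessor
  println(s"Scala view: ${scalaMap.keys.toSeq}; Java view: ${javaMap.keySet()}")
}
```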

### Why are the changes needed?
Java API access

### Does this PR introduce any user-facing change?
Yes, new TaskContext function to access from Java

### How was this patch tested?
new unit test

Closes #26083 from tgravescs/SPARK-29417.

Lead-authored-by: Thomas Graves <tgraves@ngvpn01-168-221.dyn.scz.us.nvidia.com>
Co-authored-by: Thomas Graves <tgraves@TGRAVES-MLT.local>
Co-authored-by: Thomas Graves <tgraves@nvidia.com>
Signed-off-by: Xiangrui Meng <meng@databricks.com>
2019-10-14 13:27:34 -07:00
Ilan Filonenko 52186afd84 [SPARK-25152][K8S] Enable SparkR Integration Tests for Kubernetes
## What changes were proposed in this pull request?

Re-introduced SparkR integration tests as part of the SparkR on K8S release. This PR awaits Jenkins availability.

## How was this patch tested?

This patch was tested with unit tests and integration tests.

Closes #22145 from ifilonenko/spark-r-with-tests.

Authored-by: Ilan Filonenko <if56@cornell.edu>
Signed-off-by: shane knapp <incomplete@gmail.com>
2019-10-14 13:25:54 -07:00
Dongjoon Hyun e696c36e32 [SPARK-29442][SQL] Set default mode should override the existing mode
### What changes were proposed in this pull request?

This PR aims to fix the behavior of `mode("default")` to set `SaveMode.ErrorIfExists`. Also, this PR updates the exception message by adding `default` explicitly.
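
A minimal sketch of the corner case (the output path is a placeholder):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("save-mode-default").getOrCreate()
import spark.implicits._

val df = Seq(1, 2, 3).toDF("id")
df.write
  .mode("overwrite")
  .mode("default")                      // last call wins: now treated as SaveMode.ErrorIfExists
  .parquet("/tmp/spark-29442-example")  // placeholder path; fails if it already exists
```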

### Why are the changes needed?

This was reported during the `GRAPH API` PR review. The builder pattern should work as documented.

### Does this PR introduce any user-facing change?

Yes, if the app has multiple `mode()` invocations including `mode("default")`, and `mode("default")` is the last invocation. This is really a corner case.
- Previously, the last invocation was handled as a `No-Op`.
- After this bug fix, it works as documented.

### How was this patch tested?

Pass the Jenkins with the newly added test case.

Closes #26094 from dongjoon-hyun/SPARK-29442.

Authored-by: Dongjoon Hyun <dhyun@apple.com>
Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
2019-10-14 13:11:05 -07:00
DylanGuedes 81915dacc4 [SPARK-29107][SQL][TESTS] Port window.sql (Part 1)
### What changes were proposed in this pull request?

This PR ports window.sql from PostgreSQL regression tests https://github.com/postgres/postgres/blob/REL_12_BETA2/src/test/regress/sql/window.sql from lines 1~319

The expected results can be found in the link: https://github.com/postgres/postgres/blob/REL_12_BETA2/src/test/regress/expected/window.out

### Why are the changes needed?
To ensure compatibility with PGSQL

### Does this PR introduce any user-facing change?
No

### How was this patch tested?
Pass the Jenkins. Comparison with PostgreSQL results.

Closes #25816 from DylanGuedes/spark-29107.

Authored-by: DylanGuedes <djmgguedes@gmail.com>
Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
2019-10-14 10:17:16 -07:00
sandeep katta ba04562234 [SPARK-29435][CORE] MapId in Shuffle Block is inconsistent at the writer and reader part when spark.shuffle.useOldFetchProtocol=true
### What changes were proposed in this pull request?
Shuffle Block Construction during Shuffle Write and Read is wrong

Shuffle Map Task  (Shuffle Write)
19/10/11 22:07:32| ERROR| [Executor task launch worker for task 3] org.apache.spark.shuffle.IndexShuffleBlockResolver: ####### For Debug ############ /tmp/hadoop-root1/nm-local-dir/usercache/root1/appcache/application_1570422377362_0008/blockmgr-6d03250d-6e7c-4bc2-bbb7-22b8e3981c35/0d/**shuffle_0_3_0.index**

Result Task (Shuffle Read)
19/10/11 22:07:32| ERROR| [Executor task launch worker for task 6] org.apache.spark.storage.ShuffleBlockFetcherIterator: Error occurred while fetching local blocks
java.nio.file.NoSuchFileException: /tmp/hadoop-root1/nm-local-dir/usercache/root1/appcache/application_1570422377362_0008/blockmgr-6d03250d-6e7c-4bc2-bbb7-22b8e3981c35/30/**shuffle_0_0_0.index**
	at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)

As per [SPARK-25341](https://issues.apache.org/jira/browse/SPARK-25341), the `mapId` of `SortShuffleManager.getWriter` changed from `partitionId` to `context.taskAttemptId()`.

[code]( https://github.com/apache/spark/pull/25620/files#diff-363c53ca5a72cfdc37dac4a723309638R54)

But `MapOutputTracker.convertMapStatuses` returns the wrong ShuffleBlock: if `spark.shuffle.useOldFetchProtocol` is enabled, it returns `partitionId` as the `mapId`, which is wrong. [Code](https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/MapOutputTracker.scala#L912)

### Why are the changes needed?
The MapStatus returned by the ShuffleWriter already has the mapId, e.g. [code here](https://github.com/apache/spark/blob/master/core/src/main/java/org/apache/spark/shuffle/sort/BypassMergeSortShuffleWriter.java#L134). So it is better to use `status.mapTaskId`.

### Does this PR introduce any user-facing change?
No

### How was this patch tested?
Existing UTs, and manually tested with `spark.shuffle.useOldFetchProtocol` set to both true and false.

![image](https://user-images.githubusercontent.com/35216143/66716530-4f4caa80-edec-11e9-833d-7131a9fbd442.png)

Closes #26095 from sandeep-katta/shuffleIssue.

Lead-authored-by: sandeep katta <sandeep.katta2007@gmail.com>
Co-authored-by: Yuanjian Li <xyliyuanjian@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2019-10-15 00:03:13 +08:00
Huaxin Gao cfcaf528cd [SPARK-29381][PYTHON][ML] Add _ before the XXXParams classes
### What changes were proposed in this pull request?
Add _ before XXXParams classes to indicate internal usage

### Why are the changes needed?
Follow the PEP 8 convention to use _single_leading_underscore to indicate internal use

### Does this PR introduce any user-facing change?
No

### How was this patch tested?
use existing tests

Closes #26103 from huaxingao/spark-29381.

Authored-by: Huaxin Gao <huaxing@us.ibm.com>
Signed-off-by: Sean Owen <sean.owen@databricks.com>
2019-10-14 10:52:23 -05:00
Maxim Gekk da576a737c [SPARK-29369][SQL] Support string intervals without the interval prefix
### What changes were proposed in this pull request?
In the PR, I propose to move interval parsing to `CalendarInterval.fromCaseInsensitiveString()`, which throws an `IllegalArgumentException` for invalid strings, and to reuse it from `CalendarInterval.fromString()`. The latter handles the `IllegalArgumentException` and returns `NULL` for invalid interval strings. This allows supporting interval strings without the `interval` prefix when casting strings to intervals and in the interval type constructor, because both use `fromString()` to parse interval strings.

For example:
```sql
spark-sql> select cast('1 year 10 days' as interval);
interval 1 years 1 weeks 3 days
spark-sql> SELECT INTERVAL '1 YEAR 10 DAYS';
interval 1 years 1 weeks 3 days
```

### Why are the changes needed?
To maintain feature parity with PostgreSQL which supports interval strings without prefix:
```sql
# select interval '2 months 1 microsecond';
        interval
------------------------
 2 mons 00:00:00.000001
```
and to improve Spark SQL UX.

### Does this PR introduce any user-facing change?
Yes. Previously, parsing interval strings without the `interval` prefix gave `NULL`:
```sql
spark-sql> select interval '2 months 1 microsecond';
NULL
```
After:
```sql
spark-sql> select interval '2 months 1 microsecond';
interval 2 months 1 microseconds
```

### How was this patch tested?
- Added new tests to `CalendarIntervalSuite.java`
- A test for casting strings to intervals in `CastSuite`
- Test for interval type constructor from strings in `ExpressionParserSuite`

Closes #26079 from MaxGekk/interval-str-without-prefix.

Authored-by: Maxim Gekk <max.gekk@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2019-10-14 23:34:18 +08:00
Terry Kim ef6dce29b2 [SPARK-29279][SQL] Merge SHOW NAMESPACES and SHOW DATABASES code path
### What changes were proposed in this pull request?
Currently, `SHOW NAMESPACES` and `SHOW DATABASES` are separate code paths. This PR merges the two implementations.
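
A hedged usage sketch (whether `SHOW NAMESPACES` resolves against the default session catalog depends on the catalog configuration):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("show-namespaces").getOrCreate()

// Both statements now run through the same, merged code path.
spark.sql("SHOW DATABASES").show()
spark.sql("SHOW NAMESPACES").show()
```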

### Why are the changes needed?
To remove code/behavior duplication

### Does this PR introduce any user-facing change?
No

### How was this patch tested?
Added new unit tests.

Closes #26006 from imback82/combine_show.

Authored-by: Terry Kim <yuminkim@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2019-10-14 22:35:26 +08:00
Huaxin Gao 67e1360bad [SPARK-29377][PYTHON][ML] Parity between Scala ML tuning and Python ML tuning
### What changes were proposed in this pull request?
Follow the Scala ML tuning implementation:

- put a leading underscore before the Python ```ValidatorParams``` to indicate it is private
- add ```_CrossValidatorParams``` and ```_TrainValidationSplitParams```
- separate the getters and setters: put getters in the _XXXParams classes and setters in the concrete classes

### Why are the changes needed?
Keep parity between Scala and Python.

### Does this PR introduce any user-facing change?
add ```CrossValidatorModel.getNumFolds``` and ```TrainValidationSplitModel.getTrainRatio()```

### How was this patch tested?
Add doctest

Closes #26057 from huaxingao/spark-tuning.

Authored-by: Huaxin Gao <huaxing@us.ibm.com>
Signed-off-by: zhengruifeng <ruifengz@foxmail.com>
2019-10-14 14:28:31 +08:00
angerszhu ef81525a1a [SPARK-29308][BUILD] Update deps in dev/deps/spark-deps-hadoop-3.2 for hadoop-3.2
### What changes were proposed in this pull request?

The current dev/deps/spark-deps-hadoop-3.2 contains some wrong dependencies, caused by how `dev/test-dependencies.sh` builds the assembly dependencies.
This adds the Maven parameter `-am` so the build resolves all dependencies and produces the right result.

It also updates NOTICE-binary for the updated result.

### Why are the changes needed?
Update dev/deps/spark-deps-hadoop-3.2.

### Does this PR introduce any user-facing change?
No

### How was this patch tested?
N/A

Closes #25984 from AngersZhuuuu/SPARK=29308.

Authored-by: angerszhu <angers.zhu@gmail.com>
Signed-off-by: Sean Owen <sean.owen@databricks.com>
2019-10-13 12:53:12 -05:00
Yuming Wang 148cd26799 [SPARK-26321][SQL] Port HIVE-15297: Hive should not split semicolon within quoted string literals
## What changes were proposed in this pull request?

This PR ports [HIVE-15297](https://issues.apache.org/jira/browse/HIVE-15297) so that **spark-sql** does not split on semicolons within quoted string literals.

## How was this patch tested?
unit tests and manual tests:
![image](https://user-images.githubusercontent.com/5399861/60395592-5666ea00-9b68-11e9-99dc-0e8ea98de32b.png)

Closes #25018 from wangyum/SPARK-26321.

Authored-by: Yuming Wang <yumwang@ebay.com>
Signed-off-by: Yuming Wang <wgyumg@gmail.com>
2019-10-12 22:21:14 -07:00
Peter Toth 9e12c94c15 [SPARK-29359][SQL][TESTS] Better exception handling in (SQL|ThriftServer)QueryTestSuite
### What changes were proposed in this pull request?
This PR adds 2 changes regarding exception handling in `SQLQueryTestSuite` and `ThriftServerQueryTestSuite`
- fixes an expected-output sorting issue in `ThriftServerQueryTestSuite`: if there is an exception, there is no need to sort the expected output
- introduces common exception handling in those 2 suites with a new `handleExceptions` method

### Why are the changes needed?

Currently `ThriftServerQueryTestSuite` passes on master, but it fails on one of my PRs (https://github.com/apache/spark/pull/23531) with this error  (https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/111651/testReport/org.apache.spark.sql.hive.thriftserver/ThriftServerQueryTestSuite/sql_3/):
```
org.scalatest.exceptions.TestFailedException: Expected "
[Recursion level limit 100 reached but query has not exhausted, try increasing spark.sql.cte.recursion.level.limit
org.apache.spark.SparkException]
", but got "
[org.apache.spark.SparkException
Recursion level limit 100 reached but query has not exhausted, try increasing spark.sql.cte.recursion.level.limit]
" Result did not match for query #4 WITH RECURSIVE r(level) AS (   VALUES (0)   UNION ALL   SELECT level + 1 FROM r ) SELECT * FROM r
```
The unexpected reversed order of expected output (error message comes first, then the exception class) is due to this line: https://github.com/apache/spark/pull/26028/files#diff-b3ea3021602a88056e52bf83d8782de8L146. It should not sort the expected output if there was an error during execution.

### Does this PR introduce any user-facing change?
No.

### How was this patch tested?
Existing UTs.

Closes #26028 from peter-toth/SPARK-29359-better-exception-handling.

Authored-by: Peter Toth <peter.toth@gmail.com>
Signed-off-by: Yuming Wang <wgyumg@gmail.com>
2019-10-12 22:17:37 -07:00
Dongjoon Hyun abba53e78b [SPARK-27831][FOLLOWUP][SQL][TEST] ADDITIONAL_REMOTE_REPOSITORIES is a comma-delimited string
### What changes were proposed in this pull request?

This PR is a very minor follow-up to make the code robust, because `spark.sql.additionalRemoteRepositories` is a configuration whose value is a comma-separated list.
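
A hedged sketch of what a multi-repository value looks like; the example URLs below are placeholders, and any code consuming this config has to split on commas rather than assume a single URL:

```scala
import org.apache.spark.sql.SparkSession

// The value is a comma-separated list of Maven repositories, not a single URL.
val spark = SparkSession.builder()
  .master("local[*]")
  .config("spark.sql.additionalRemoteRepositories",
    "https://repo1.maven.org/maven2/,https://repository.apache.org/snapshots/")  // example values
  .getOrCreate()

val repos = spark.conf.get("spark.sql.additionalRemoteRepositories").split(",").map(_.trim)
repos.foreach(println)
```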

### Why are the changes needed?

This makes sure that `getHiveContribJar` will not fail when the configuration changes.

### Does this PR introduce any user-facing change?

No.

### How was this patch tested?

Manual. Change the default value with multiple repositories and run the following.
```
build/sbt -Phive "project hive" "test-only org.apache.spark.sql.hive.HiveSparkSubmitSuite"
```

Closes #26096 from dongjoon-hyun/SPARK-27831.

Authored-by: Dongjoon Hyun <dhyun@apple.com>
Signed-off-by: Yuming Wang <wgyumg@gmail.com>
2019-10-12 18:47:28 -07:00
Maxim Gekk d193248205 [SPARK-29368][SQL][TEST] Port interval.sql
### What changes were proposed in this pull request?
This PR is to port interval.sql from PostgreSQL regression tests: https://raw.githubusercontent.com/postgres/postgres/REL_12_STABLE/src/test/regress/sql/interval.sql

The expected results can be found in the link: https://github.com/postgres/postgres/blob/REL_12_STABLE/src/test/regress/expected/interval.out

When porting the test cases, I found the following PostgreSQL-specific features that do not exist in Spark SQL:
- [SPARK-29369](https://issues.apache.org/jira/browse/SPARK-29369): Accept strings without `interval` prefix in casting to intervals
- [SPARK-29370](https://issues.apache.org/jira/browse/SPARK-29370): Interval strings without explicit unit markings
- [SPARK-29371](https://issues.apache.org/jira/browse/SPARK-29371): Support interval field values with fractional parts
- [SPARK-29382](https://issues.apache.org/jira/browse/SPARK-29382): Support the `INTERVAL` type by Parquet datasource
- [SPARK-29383](https://issues.apache.org/jira/browse/SPARK-29383): Support the optional prefix `` in interval strings
- [SPARK-29384](https://issues.apache.org/jira/browse/SPARK-29384): Support `ago` in interval strings
- [SPARK-29385](https://issues.apache.org/jira/browse/SPARK-29385): Make `INTERVAL` values comparable
- [SPARK-29386](https://issues.apache.org/jira/browse/SPARK-29386): Copy data between a file and a table
- [SPARK-29387](https://issues.apache.org/jira/browse/SPARK-29387): Support `*` and `\` operators for intervals
- [SPARK-29388](https://issues.apache.org/jira/browse/SPARK-29388): Construct intervals from the `millenniums`, `centuries` or `decades` units
- [SPARK-29389](https://issues.apache.org/jira/browse/SPARK-29389): Support synonyms for interval units
- [SPARK-29390](https://issues.apache.org/jira/browse/SPARK-29390): Add the justify_days(), justify_hours() and justify_interval() functions
- [SPARK-29391](https://issues.apache.org/jira/browse/SPARK-29391): Default year-month units
- [SPARK-29393](https://issues.apache.org/jira/browse/SPARK-29393): Add the make_interval() function
- [SPARK-29394](https://issues.apache.org/jira/browse/SPARK-29394): Support ISO 8601 format for intervals
- [SPARK-29395](https://issues.apache.org/jira/browse/SPARK-29395): Precision of the interval type
- [SPARK-29406](https://issues.apache.org/jira/browse/SPARK-29406): Interval output styles
- [SPARK-29407](https://issues.apache.org/jira/browse/SPARK-29407): Support syntax for zero interval
- [SPARK-29408](https://issues.apache.org/jira/browse/SPARK-29408): Support interval literal with negative sign `-`

### Why are the changes needed?
To improve the test coverage, see https://issues.apache.org/jira/browse/SPARK-27763

### Does this PR introduce any user-facing change?
No

### How was this patch tested?
By manually comparing Spark results with PostgreSQL

Closes #26055 from MaxGekk/port-interval-sql.

Authored-by: Maxim Gekk <max.gekk@gmail.com>
Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
2019-10-12 17:44:40 -07:00
Pablo 782a94d289 [SPARK-29433][WEBUI] Fix tooltip stages table
### What changes were proposed in this pull request?
In the Web UI Stages table, the tooltips of the Input and Output columns are not correct.

Actual tooltip messages:

Bytes and records read from Hadoop or from Spark storage.
Bytes and records written to Hadoop.
However, these columns only show bytes, not records.

![image](https://user-images.githubusercontent.com/12819544/66608286-85a0e480-ebb6-11e9-812a-9760bea53664.png)
![image](https://user-images.githubusercontent.com/12819544/66608323-96515a80-ebb6-11e9-9e5f-e3f2cc99a3b3.png)
![image](https://user-images.githubusercontent.com/12819544/66608450-de707d00-ebb6-11e9-84e3-0917b5cfe6f6.png)
![image](https://user-images.githubusercontent.com/12819544/66608468-eaf4d580-ebb6-11e9-8c5b-2a9a290bea9c.png)

### Why are the changes needed?
Simple correction of a tooltip

### Does this PR introduce any user-facing change?
Yes, tooltip correction

### How was this patch tested?
Manual testing

Closes #26084 from planga82/feature/SPARK-29433_Tooltip_correction.

Lead-authored-by: Pablo <soypab@gmail.com>
Co-authored-by: Unknown <soypab@gmail.com>
Signed-off-by: Sean Owen <sean.owen@databricks.com>
2019-10-12 10:34:29 -05:00
Fokko Driesprong b5b1b69f79 [SPARK-29445][CORE] Bump netty-all from 4.1.39.Final to 4.1.42.Final
### What changes were proposed in this pull request?

Minor version bump of Netty to patch reported CVE.

Patches: https://www.cvedetails.com/cve/CVE-2019-16869/

### Why are the changes needed?

To address the reported CVE (CVE-2019-16869) in Netty.

### Does this PR introduce any user-facing change?

No

### How was this patch tested?

Compiled locally using `mvn clean install -DskipTests`

Closes #26099 from Fokko/SPARK-29445.

Authored-by: Fokko Driesprong <fokko@apache.org>
Signed-off-by: Sean Owen <sean.owen@databricks.com>
2019-10-12 09:43:16 -05:00
liucht e94abd7f16 [SPARK-29323][WEBUI] Add tooltip for The Executors Tab's column names in the Spark history server Page
### What changes were proposed in this pull request?

This PR adds tooltips for the Executors tab's column names, including RDD Blocks, Disk Used, Cores, Active Tasks, Failed Tasks, Complete Tasks, and Total Tasks, in the history server page.
https://issues.apache.org/jira/browse/SPARK-29323

![image](https://user-images.githubusercontent.com/28332082/66017759-b6c24a80-e50e-11e9-807b-5b076f701d2f.png)

I have modified the following code in executorspage-template.html
Before:
<th>RDD Blocks</th>
<th>Disk Used</th>
<th>Cores</th>
<th>Active Tasks</th>
<th>Failed Tasks</th>
<th>Complete Tasks</th>
<th>Total Tasks</th>

![image](https://user-images.githubusercontent.com/28332082/66018111-4ddbd200-e510-11e9-9cfc-19f3eae81e76.png)

After:

<th><span data-toggle="tooltip" data-placement="top" title="RDD Blocks">RDD Blocks</span></th>
<th><span data-toggle="tooltip" data-placement="top" title="Disk Used">Disk Used</span></th>
<th><span data-toggle="tooltip" data-placement="top" title="Cores">Cores</span></th>
<th><span data-toggle="tooltip" data-placement="top" title="Active Tasks">Active Tasks</span></th>
<th><span data-toggle="tooltip" data-placement="top" title="Failed Tasks">Failed Tasks</span></th>
<th><span data-toggle="tooltip" data-placement="top" title="Complete Tasks">Complete Tasks</span></th>
<th><span data-toggle="tooltip" data-placement="top" title="Total Tasks">Total Tasks</span></th>

![image](https://user-images.githubusercontent.com/28332082/66018157-79f75300-e510-11e9-96ba-6230aa0940c7.png)

### Why are the changes needed?

On the Executors tab of the history server, the Summary table's column headers are formatted inconsistently.
Some column names have tooltips, such as Storage Memory, Task Time (GC Time), Input, Shuffle Read,
Shuffle Write and Blacklisted, but others do not: RDD Blocks, Disk Used, Cores, Active Tasks, Failed Tasks, Complete Tasks and Total Tasks. Oddly, in the Executors section below, all of these column names do have tooltips.
It is important for open source projects to have a consistent style and a user-friendly UI, and this change keeps the page consistent and more user-friendly.

### Does this PR introduce any user-facing change?

No.

### How was this patch tested?

Manual tests for Chrome, Firefox and Safari

Authored-by: liucht-inspur <liucht@inspur.com>
Signed-off-by: liucht-inspur <liucht@inspur.com>

Closes #25994 from liucht-inspur/master.

Authored-by: liucht <liucht@inspur.com>
Signed-off-by: Sean Owen <sean.owen@databricks.com>
2019-10-12 09:40:31 -05:00
Maxim Gekk f302c2ee62 [SPARK-29328][SQL][FOLLOWUP] Revert calculation of mean seconds per month
### What changes were proposed in this pull request?

Revert this commit 18b7ad2fc5.

### Why are the changes needed?
See https://github.com/apache/spark/pull/16304#discussion_r92753590

### Does this PR introduce any user-facing change?
Yes

### How was this patch tested?
There is no test for that.

Closes #26101 from MaxGekk/revert-mean-seconds-per-month.

Authored-by: Maxim Gekk <max.gekk@gmail.com>
Signed-off-by: Sean Owen <sean.owen@databricks.com>
2019-10-12 09:38:08 -05:00
zhengruifeng 8b62399684 [SPARK-29380][ML] RFormula avoid repeated 'first' jobs to get vector size
### What changes were proposed in this pull request?
Get the first row lazily and reuse it for each vector column.
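
A hedged sketch of the pattern (not RFormula's actual code): trigger the `first` job lazily, at most once, and reuse the row for every vector column. The helper name and signature below are assumptions made for illustration.

```scala
import org.apache.spark.ml.linalg.Vector
import org.apache.spark.sql.{DataFrame, Row}

// Hypothetical helper illustrating the idea described above.
def vectorSizes(df: DataFrame, vectorCols: Seq[String]): Map[String, Int] = {
  lazy val firstRow: Row = df.select(vectorCols.map(df(_)): _*).first()  // one job, evaluated lazily
  vectorCols.zipWithIndex.map { case (c, i) =>
    c -> firstRow.getAs[Vector](i).size
  }.toMap
}
```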

### Why are the changes needed?
Avoid unnecessary `first` jobs.

### Does this PR introduce any user-facing change?
no

### How was this patch tested?
Existing test suites and local tests in the REPL.

Closes #26052 from zhengruifeng/rformula_lazy_row.

Authored-by: zhengruifeng <ruifengz@foxmail.com>
Signed-off-by: zhengruifeng <ruifengz@foxmail.com>
2019-10-12 22:25:36 +08:00
Peter Toth 3a7126cea8 [SPARK-29410][BUILD] Update commons-beanutils to 1.9.4
### What changes were proposed in this pull request?
This PR updates commons-beanutils to 1.9.4.

### Why are the changes needed?
CVE fixed in 1.9.4: http://commons.apache.org/proper/commons-beanutils/javadocs/v1.9.4/RELEASE-NOTES.txt

### Does this PR introduce any user-facing change?
No.

### How was this patch tested?
Existing UTs.

Closes #26069 from peter-toth/SPARK-29410-update-commons-beanutils-to-1.9.4.

Authored-by: Peter Toth <peter.toth@gmail.com>
Signed-off-by: Sean Owen <sean.owen@databricks.com>
2019-10-12 09:24:06 -05:00
shivusondur aa1acfe078 [SPARK-28810][DOC][SQL] Document SHOW TABLES in SQL Reference
### What changes were proposed in this pull request?
Added a reference for the SHOW TABLES SQL command.

### Why are the changes needed?
To help users understand the command's usage.

### Does this PR introduce any user-facing change?
It updates the SQL command reference doc.

### How was this patch tested?
<details>
<summary> Attached the Snap</summary>

![image](https://user-images.githubusercontent.com/7912929/66623173-1eac1b80-ec08-11e9-8357-9f6323e5fc48.png)

![image](https://user-images.githubusercontent.com/7912929/65384657-87f3e980-dd42-11e9-90fa-6650ee68e005.png)

</details>

Closes #25561 from shivusondur/jiraSHOWTBLS.

Authored-by: shivusondur <shivusondur@gmail.com>
Signed-off-by: Sean Owen <sean.owen@databricks.com>
2019-10-12 09:21:44 -05:00
Huaxin Gao 81362956a7 [SPARK-29116][PYTHON][ML] Refactor py classes related to DecisionTree
### What changes were proposed in this pull request?

- Move tree related classes to a separate file ```tree.py```
- add method ```predictLeaf``` in ```DecisionTreeModel```& ```TreeEnsembleModel```

### Why are the changes needed?

- keep parity between scala and python
- easy code maintenance

### Does this PR introduce any user-facing change?
Yes
add method ```predictLeaf``` in ```DecisionTreeModel```& ```TreeEnsembleModel```
add ```setMinWeightFractionPerNode```  in ```DecisionTreeClassifier``` and ```DecisionTreeRegressor```

### How was this patch tested?
add some doc tests

Closes #25929 from huaxingao/spark_29116.

Authored-by: Huaxin Gao <huaxing@us.ibm.com>
Signed-off-by: zhengruifeng <ruifengz@foxmail.com>
2019-10-12 22:13:50 +08:00
Bryan Cutler beb8d2f8ad [SPARK-29402][PYTHON][TESTS] Added tests for grouped map pandas_udf with window
### What changes were proposed in this pull request?

Added tests for grouped map pandas_udf using a window.

### Why are the changes needed?

Current tests for grouped map do not use a window, and this had previously caused an error due to the window range being a struct column, which was not yet supported.

### Does this PR introduce any user-facing change?

No

### How was this patch tested?

New tests added.

Closes #26063 from BryanCutler/pyspark-pandas_udf-group-with-window-tests-SPARK-29402.

Authored-by: Bryan Cutler <cutlerb@gmail.com>
Signed-off-by: Bryan Cutler <cutlerb@gmail.com>
2019-10-11 16:19:13 -07:00
Bryan Cutler 6390f02f9f [SPARK-29367][DOC] Add compatibility note for Arrow 0.15.0 to SQL guide
### What changes were proposed in this pull request?

Add documentation to SQL programming guide to use PyArrow >= 0.15.0 with current versions of Spark.

### Why are the changes needed?

Arrow 0.15.0 introduced a change in format which requires an environment variable to maintain compatibility.

### Does this PR introduce any user-facing change?

No

### How was this patch tested?

Ran pandas_udfs tests using PyArrow 0.15.0 with environment variable set.

Closes #26045 from BryanCutler/arrow-document-legacy-IPC-fix-SPARK-29367.

Authored-by: Bryan Cutler <cutlerb@gmail.com>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
2019-10-11 09:19:34 +09:00
Luca Canali 2b3c3793c9 [SPARK-29032][FOLLOWUP][DOCS] Add PrometheusServlet in the monitoring documentation
This adds an entry about PrometheusServlet to the documentation, following SPARK-29032

### Why are the changes needed?

The monitoring documentation lists all the available metrics sinks; PrometheusServlet should be added to the list for completeness.

Closes #26081 from LucaCanali/FollowupSpark29032.

Authored-by: Luca Canali <luca.canali@cern.ch>
Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
2019-10-10 08:57:53 -07:00
Dongjoon Hyun e946104c42 [SPARK-29400][CORE] Improve PrometheusResource to use labels
### What changes were proposed in this pull request?

[SPARK-29064](https://github.com/apache/spark/pull/25770) introduced `PrometheusResource` to expose `ExecutorSummary`. This PR aims to improve it further more `Prometheus`-friendly to use [Prometheus labels](https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels).

### Why are the changes needed?

**BEFORE**
```
metrics_app_20191008151432_0000_driver_executor_rddBlocks_Count 0
metrics_app_20191008151432_0000_driver_executor_memoryUsed_Count 0
metrics_app_20191008151432_0000_driver_executor_diskUsed_Count 0
```

**AFTER**
```
$ curl -s http://localhost:4040/metrics/executors/prometheus/ | head -n3
metrics_executor_rddBlocks_Count{application_id="app-20191008151625-0000", application_name="Spark shell", executor_id="driver"} 0
metrics_executor_memoryUsed_Count{application_id="app-20191008151625-0000", application_name="Spark shell", executor_id="driver"} 0
metrics_executor_diskUsed_Count{application_id="app-20191008151625-0000", application_name="Spark shell", executor_id="driver"} 0
```

### Does this PR introduce any user-facing change?

No, but `Prometheus` understands the new format and displays it more intelligently.

<img width="735" alt="ui" src="https://user-images.githubusercontent.com/9700541/66438279-1756f900-e9e1-11e9-91c7-c04c6ce9172f.png">

### How was this patch tested?

Manually.

**SETUP**
```
$ sbin/start-master.sh
$ sbin/start-slave.sh spark://`hostname`:7077
$ bin/spark-shell --master spark://`hostname`:7077 --conf spark.ui.prometheus.enabled=true
```

**RESULT**
```
$ curl -s http://localhost:4040/metrics/executors/prometheus/ | head -n3
metrics_executor_rddBlocks_Count{application_id="app-20191008151625-0000", application_name="Spark shell", executor_id="driver"} 0
metrics_executor_memoryUsed_Count{application_id="app-20191008151625-0000", application_name="Spark shell", executor_id="driver"} 0
metrics_executor_diskUsed_Count{application_id="app-20191008151625-0000", application_name="Spark shell", executor_id="driver"} 0
```

Closes #26060 from dongjoon-hyun/SPARK-29400.

Authored-by: Dongjoon Hyun <dhyun@apple.com>
Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
2019-10-10 08:47:12 -07:00
Gengliang Wang 6edabeb0ee [SPARK-28989][SQL][FOLLOWUP] Update ANSI mode related config names in comments
### What changes were proposed in this pull request?

Update ANSI-mode-related config names in comments to "spark.sql.ansi.enabled".

### Why are the changes needed?

The removed configurations `spark.sql.parser.ansi.enabled` and `spark.sql.failOnIntegralTypeOverflow` still appear in code comments.

### Does this PR introduce any user-facing change?

No
### How was this patch tested?

Grepped the whole codebase to ensure the removed config names no longer appear:
```
git grep "parser.ansi.enabled"
git grep failOnIntegralTypeOverflow
git grep decimalOperationsNullOnOverflow
git grep ANSI_SQL_PARSER
git grep FAIL_ON_INTEGRAL_TYPE_OVERFLOW
git grep DECIMAL_OPERATIONS_NULL_ON_OVERFLOW
```

Closes #26067 from gengliangwang/spark-28989-followup.

Authored-by: Gengliang Wang <gengliang.wang@databricks.com>
Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
2019-10-09 19:17:54 -07:00
HyukjinKwon 7ba16ffbb9 [SPARK-29403][INFRA][R] Uses Arrow R 0.14.1 in AppVeyor for now
### What changes were proposed in this pull request?

This PR proposes to use Arrow R 0.14.1 for now in AppVeyor to make the tests pass.

### Why are the changes needed?

To make the build pass with Arrow. Setting `ARROW_PRE_0_15_IPC_FORMAT` to `1` to allow Arrow R 0.15 compatibility does not work here.

### Does this PR introduce any user-facing change?

No.

### How was this patch tested?

AppVeyor

Closes #26041 from HyukjinKwon/investigate.

Authored-by: HyukjinKwon <gurwls223@apache.org>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
2019-10-10 09:01:36 +09:00
Sean Owen cc7493fa21 [SPARK-29416][CORE][ML][SQL][MESOS][TESTS] Use .sameElements to compare arrays, instead of .deep (gone in 2.13)
### What changes were proposed in this pull request?

Use `.sameElements` to compare (non-nested) arrays, as `Arrays.deep` is removed in 2.13 and wasn't the best way to do this in the first place.
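
A short illustration of the replacement (the commented line is the 2.12-era idiom):

```scala
val a = Array(1, 2, 3)
val b = Array(1, 2, 3)

// a == b                      // false: Array equality is reference equality
// assert(a.deep == b.deep)    // 2.12-only idiom; `.deep` is gone in Scala 2.13
assert(a.sameElements(b))      // element-wise comparison that works on 2.12 and 2.13
```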

### Why are the changes needed?

To compile with 2.13.

### Does this PR introduce any user-facing change?

None.

### How was this patch tested?

Existing tests.

Closes #26073 from srowen/SPARK-29416.

Authored-by: Sean Owen <sean.owen@databricks.com>
Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
2019-10-09 17:00:48 -07:00
Sean Owen 4d93fb7dfa [SPARK-29413][CORE] Rewrite ThreadUtils.parmap to avoid TraversableLike, gone in 2.13
### What changes were proposed in this pull request?

Rewrite declaration of internal `ThreadUtils.parmap` method to avoid `TraversableLike`, which is removed in Scala 2.13.
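
A hedged sketch of the general shape only, not Spark's actual `ThreadUtils.parmap` signature: constraining the input to a plain `Seq` avoids the 2.12-only `TraversableLike` bound.

```scala
import scala.concurrent.duration.Duration
import scala.concurrent.{Await, ExecutionContext, Future}

// Simplified stand-in: map each element in parallel and wait for all results.
def parmap[I, O](in: Seq[I])(f: I => O)(implicit ec: ExecutionContext): Seq[O] = {
  val futures = in.map(i => Future(f(i)))
  futures.map(Await.result(_, Duration.Inf))
}

// Usage sketch:
// implicit val ec = ExecutionContext.global
// parmap(Seq(1, 2, 3))(_ * 2)   // Seq(2, 4, 6)
```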

### Why are the changes needed?

To compile in Scala 2.13.

### Does this PR introduce any user-facing change?

None.

### How was this patch tested?

Existing tests.

Closes #26072 from srowen/SPARK-29413.

Authored-by: Sean Owen <sean.owen@databricks.com>
Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
2019-10-09 10:53:51 -07:00
Sean Owen 3b0bca42ac [SPARK-29401][FOLLOWUP] Additional cases where a .parallelize call with Array is ambiguous in 2.13
This is just a followup on https://github.com/apache/spark/pull/26062 -- see it for more detail.

I think we will eventually find more cases of this. It's hard to get them all at once as there are many different types of compile errors in earlier modules. I'm trying to address them in as big a chunk as possible.

Closes #26074 from srowen/SPARK-29401.2.

Authored-by: Sean Owen <sean.owen@databricks.com>
Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
2019-10-09 10:27:05 -07:00
Sean Owen fa95a5c395 [SPARK-29411][CORE][ML][SQL][DSTREAM] Replace use of Unit object with () for Scala 2.13
### What changes were proposed in this pull request?

Replace `Unit` with equivalent `()` where code refers to the `Unit` companion object.
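
A short illustration of the substitution:

```scala
import scala.concurrent.Promise

val p = Promise[Unit]()
// p.success(Unit)   // refers to the Unit companion object (the pattern this PR removes)
p.success(())        // the unit value itself; compiles cleanly on both 2.12 and 2.13
```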

### Why are the changes needed?

It doesn't compile otherwise in Scala 2.13.
- https://github.com/scala/scala/blob/v2.13.0/src/library/scala/Unit.scala#L30

### Does this PR introduce any user-facing change?

Should be no behavior change at all.

### How was this patch tested?

Existing tests.

Closes #26070 from srowen/SPARK-29411.

Authored-by: Sean Owen <sean.owen@databricks.com>
Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
2019-10-09 10:24:13 -07:00
herman ba4d413fc9 [SPARK-29346][SQL] Add Aggregating Accumulator
### What changes were proposed in this pull request?
This PR adds an accumulator that computes a global aggregate over a number of rows. A user can define an arbitrary number of aggregate functions which can be computed at the same time.

The accumulator uses the standard technique for implementing (interpreted) aggregation in Spark. It uses projections and manual updates for each of the aggregation steps (initialize buffer, update buffer with new input row, merge two buffers and compute the final result on the buffer). Note that two of the steps (update and merge) use the aggregation buffer both as input and output.

Accumulators do not have an explicit point at which they get serialized. A somewhat surprising side effect is that the buffers of a `TypedImperativeAggregate` go over the wire as-is instead of being serialized. The merging logic for `TypedImperativeAggregate` assumes that the input buffer contains serialized buffers; this is violated by the accumulator's implicit serialization. To get around this, I have added a `mergeBuffersObjects` method to `TypedImperativeAggregate` that merges two unserialized buffers.

### Why are the changes needed?
This is the mechanism we are going to use to implement observable metrics.

### Does this PR introduce any user-facing change?
No, not yet.

### How was this patch tested?
Added `AggregatingAccumulator` test suite.

Closes #26012 from hvanhovell/SPARK-29346.

Authored-by: herman <herman@databricks.com>
Signed-off-by: herman <herman@databricks.com>
2019-10-09 16:05:14 +02:00
Maxim Gekk c97b3ed279 [SPARK-24640][SQL][FOLLOWUP] Update the SQL migration guide about size(NULL)
### What changes were proposed in this pull request?
The commit 4e6d31f570 changed default behavior of `size()` for the `NULL` input. In this PR, I propose to update the SQL migration guide.

### Why are the changes needed?
To inform users about new behavior of the `size()` function for the `NULL` input.

### Does this PR introduce any user-facing change?
No

### How was this patch tested?
N/A

Closes #26066 from MaxGekk/size-null-migration-guide.

Authored-by: Maxim Gekk <max.gekk@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2019-10-09 16:37:35 +08:00
Terry Kim a927f1aefc [SPARK-29373][SQL] DataSourceV2: Commands should not submit a spark job
### What changes were proposed in this pull request?
DataSourceV2 Exec classes (ShowTablesExec, ShowNamespacesExec, etc.) all extend LeafExecNode. This results in running a job when executeCollect() is called. This breaks the previous behavior [SPARK-19650](https://issues.apache.org/jira/browse/SPARK-19650).

A new command physical operator will be introduced from which all V2 Exec classes derive to avoid running a job.

### Why are the changes needed?

It is a bug since the current behavior runs a spark job, which breaks the existing behavior: [SPARK-19650](https://issues.apache.org/jira/browse/SPARK-19650).

### Does this PR introduce any user-facing change?

No

### How was this patch tested?

Existing unit tests.

Closes #26048 from imback82/dsv2_command.

Authored-by: Terry Kim <yuminkim@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2019-10-09 11:44:25 +08:00
Sean Owen ee83d09b53 [SPARK-29401][CORE][ML][SQL][GRAPHX][TESTS] Replace calls to .parallelize Arrays of tuples, ambiguous in Scala 2.13, with Seqs of tuples
### What changes were proposed in this pull request?

Invocations like `sc.parallelize(Array((1,2)))` cause a compile error in 2.13, like:
```
[ERROR] [Error] /Users/seanowen/Documents/spark_2.13/core/src/test/scala/org/apache/spark/ShuffleSuite.scala:47: overloaded method value apply with alternatives:
  (x: Unit,xs: Unit*)Array[Unit] <and>
  (x: Double,xs: Double*)Array[Double] <and>
  (x: Float,xs: Float*)Array[Float] <and>
  (x: Long,xs: Long*)Array[Long] <and>
  (x: Int,xs: Int*)Array[Int] <and>
  (x: Char,xs: Char*)Array[Char] <and>
  (x: Short,xs: Short*)Array[Short] <and>
  (x: Byte,xs: Byte*)Array[Byte] <and>
  (x: Boolean,xs: Boolean*)Array[Boolean]
 cannot be applied to ((Int, Int), (Int, Int), (Int, Int), (Int, Int))
```
Using a `Seq` instead appears to resolve it, and is effectively equivalent.
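
A minimal before/after sketch of that substitution:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("parallelize-seq").getOrCreate()
val sc = spark.sparkContext

// sc.parallelize(Array((1, 2), (3, 4)))          // ambiguous Array.apply overload on Scala 2.13
val rdd = sc.parallelize(Seq((1, 2), (3, 4)))     // equivalent, and compiles on 2.12 and 2.13
println(rdd.reduceByKey(_ + _).collect().mkString(", "))
```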

### Why are the changes needed?

To better cross-build for 2.13.

### Does this PR introduce any user-facing change?

None.

### How was this patch tested?

Existing tests.

Closes #26062 from srowen/SPARK-29401.

Authored-by: Sean Owen <sean.owen@databricks.com>
Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
2019-10-08 20:22:02 -07:00
Sean Owen 2d871ad0e7 [SPARK-29392][CORE][SQL][STREAMING] Remove symbol literal syntax 'foo, deprecated in Scala 2.13, in favor of Symbol("foo")
### What changes were proposed in this pull request?

Syntax like `'foo` is deprecated in Scala 2.13. Replace usages with `Symbol("foo")`
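
A minimal illustration of the substitution (`spark.implicits._` provides the Symbol-to-Column conversion):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("symbol-columns").getOrCreate()
import spark.implicits._

val df = Seq((1, "a"), (2, "b")).toDF("id", "name")
// df.select('id, 'name)                          // symbol-literal syntax, deprecated in Scala 2.13
df.select(Symbol("id"), Symbol("name")).show()    // equivalent spelling without the deprecation
```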

### Why are the changes needed?

Avoids ~50 deprecation warnings when attempting to build with 2.13.

### Does this PR introduce any user-facing change?

None, should be no functional change at all.

### How was this patch tested?

Existing tests.

Closes #26061 from srowen/SPARK-29392.

Authored-by: Sean Owen <sean.owen@databricks.com>
Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
2019-10-08 20:15:37 -07:00
sandeep katta 69b0cc1962 [SPARK-28797][DOC] Document DROP FUNCTION statement in SQL Reference
### What changes were proposed in this pull request?
Add a DROP FUNCTION description to the SQL reference.

### Why are the changes needed?
Currently Spark does not have a complete SQL guide, so it is better to document all the SQL commands; this JIRA is a sub-part of that task.

### Does this PR introduce any user-facing change?
Yes. Before this change, users could not find any reference for the DROP FUNCTION command in the Spark docs.

After Fix:
![image](https://user-images.githubusercontent.com/35216143/66134570-240cd300-e616-11e9-9c78-259c0d355378.png)

![image](https://user-images.githubusercontent.com/35216143/65397825-d059e880-ddd0-11e9-8bd3-a65ccae56063.png)

![image](https://user-images.githubusercontent.com/35216143/66404731-9f032e80-ea06-11e9-8fef-1e266efa4c66.png)

### How was this patch tested?
Tested with the Jekyll build.

Closes #25553 from sandeep-katta/28797.

Authored-by: sandeep katta <sandeep.katta2007@gmail.com>
Signed-off-by: Sean Owen <sean.owen@databricks.com>
2019-10-08 19:47:39 -05:00
gwang3 b3eba29493 [SPARK-29189][FOLLOW-UP][SQL] Beautify config name
### What changes were proposed in this pull request?
Beautify comment

### Why are the changes needed?
The config name now is pretty weird.

### Does this PR introduce any user-facing change?
No

### How was this patch tested?
No test.

Closes #26054 from wangshisan/SPARK-29189.

Authored-by: gwang3 <gwang3@ebay.com>
Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
2019-10-08 15:44:42 -07:00
Imran Rashid 0da667d314 [SPARK-28917][CORE] Synchronize access to RDD mutable state
RDD dependencies and partitions can be simultaneously
accessed and mutated by user threads and spark's scheduler threads, so
access must be thread-safe.  In particular, as partitions and
dependencies are lazily-initialized, before this change they could get
initialized multiple times, which would lead to the scheduler having an
inconsistent view of the pending stages and get stuck.
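
A hedged sketch of the general pattern only (not the actual RDD code): lazily initialized, shared mutable state is computed under a lock so concurrent callers cannot initialize it twice.

```scala
// Hypothetical holder type used only to illustrate the locking pattern.
class LazyState[T](compute: () => T) {
  private val lock = new Object
  @volatile private var state: Option[T] = None

  def get: T = state.getOrElse {
    lock.synchronized {
      if (state.isEmpty) {
        state = Some(compute())   // runs at most once, even with racing threads
      }
      state.get
    }
  }
}
```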

Tested with existing unit tests.

Closes #25951 from squito/SPARK-28917.

Authored-by: Imran Rashid <irashid@cloudera.com>
Signed-off-by: Marcelo Vanzin <vanzin@cloudera.com>
2019-10-08 11:35:54 -07:00
Guilherme de360e96d7 [SPARK-29336][SQL] Fix the implementation of QuantileSummaries.merge (guarantee that the relativeError will be respected)
### What changes were proposed in this pull request?

Reimplement `org.apache.spark.sql.catalyst.util.QuantileSummaries#merge` and add a test-case showing the previous bug.

### Why are the changes needed?

The original Greenwald-Khanna paper, from which the algorithm behind `approxQuantile` was taken, does not cover how to merge the result of multiple parallel QuantileSummaries. The current implementation violates some invariants and therefore the effective error can be larger than the specified.

### Does this PR introduce any user-facing change?

Yes, for some cases, the results from `approxQuantile` (`percentile_approx` in SQL) will now be within the expected error margin. For example:

```scala
var values = (1 to 100).toArray
val all_quantiles = values.indices.map(i => (i+1).toDouble / values.length).toArray
for (n <- 0 until 5) {
  var df = spark.sparkContext.makeRDD(values).toDF("value").repartition(5)
  val all_answers = df.stat.approxQuantile("value", all_quantiles, 0.1)
  val all_answered_ranks = all_answers.map(ans => values.indexOf(ans)).toArray
  val error = all_answered_ranks.zipWithIndex.map({ case (answer, expected) => Math.abs(expected - answer) }).toArray
  val max_error = error.max
  print(max_error + "\n")
}
```

In the current build it returns:

```
16
12
10
11
17
```

I couldn't run the code with this patch applied to double-check the implementation. Can someone please confirm it now outputs at most `10`?

### How was this patch tested?

A new unit test was added to uncover the previous bug.

Closes #26029 from sitegui/SPARK-29336.

Authored-by: Guilherme <sitegui@sitegui.com.br>
Signed-off-by: Sean Owen <sean.owen@databricks.com>
2019-10-08 08:11:10 -05:00
Maxim Gekk 4e6d31f570 [SPARK-24640][SQL] Return NULL from size(NULL) by default
### What changes were proposed in this pull request?
Set the default value of the `spark.sql.legacy.sizeOfNull` config to `false`. That changes behavior of the `size()` function for `NULL`. The function will return `NULL` for `NULL` instead of `-1`.

### Why are the changes needed?
There is the agreement in the PR https://github.com/apache/spark/pull/21598#issuecomment-399695523 to change behavior in Spark 3.0.

### Does this PR introduce any user-facing change?
Yes.
Before:
```sql
spark-sql> select size(NULL);
-1
```
After:
```sql
spark-sql> select size(NULL);
NULL
```

### How was this patch tested?
By the `check outputs of expression examples` test in `SQLQuerySuite` which runs expression examples.

Closes #26051 from MaxGekk/sizeof-null-returns-null.

Authored-by: Maxim Gekk <max.gekk@gmail.com>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
2019-10-08 20:57:10 +09:00
Dilip Biswal ef1e8495ba [SPARK-29366][SQL] Subqueries created for DPP are not printed in EXPLAIN FORMATTED
### What changes were proposed in this pull request?
The subquery expressions introduced by DPP are not printed in the newer explain command.
This PR fixes the code that computes the list of subqueries in the plan.

**SQL**
df1 and df2 are partitioned on k.
```
SELECT df1.id, df2.k
FROM df1 JOIN df2 ON df1.k = df2.k AND df2.id < 2
```

**Before**
```
|== Physical Plan ==
* Project (9)
+- * BroadcastHashJoin Inner BuildRight (8)
   :- * ColumnarToRow (2)
   :  +- Scan parquet default.df1 (1)
   +- BroadcastExchange (7)
      +- * Project (6)
         +- * Filter (5)
            +- * ColumnarToRow (4)
               +- Scan parquet default.df2 (3)

(1) Scan parquet default.df1
Output: [id#19L, k#20L]

(2) ColumnarToRow [codegen id : 2]
Input: [id#19L, k#20L]

(3) Scan parquet default.df2
Output: [id#21L, k#22L]

(4) ColumnarToRow [codegen id : 1]
Input: [id#21L, k#22L]

(5) Filter [codegen id : 1]
Input     : [id#21L, k#22L]
Condition : (isnotnull(id#21L) AND (id#21L < 2))

(6) Project [codegen id : 1]
Output    : [k#22L]
Input     : [id#21L, k#22L]

(7) BroadcastExchange
Input: [k#22L]

(8) BroadcastHashJoin [codegen id : 2]
Left keys: List(k#20L)
Right keys: List(k#22L)
Join condition: None

(9) Project [codegen id : 2]
Output    : [id#19L, k#22L]
Input     : [id#19L, k#20L, k#22L]
```
**After**
```
|== Physical Plan ==
* Project (9)
+- * BroadcastHashJoin Inner BuildRight (8)
   :- * ColumnarToRow (2)
   :  +- Scan parquet default.df1 (1)
   +- BroadcastExchange (7)
      +- * Project (6)
         +- * Filter (5)
            +- * ColumnarToRow (4)
               +- Scan parquet default.df2 (3)

(1) Scan parquet default.df1
Output: [id#19L, k#20L]

(2) ColumnarToRow [codegen id : 2]
Input: [id#19L, k#20L]

(3) Scan parquet default.df2
Output: [id#21L, k#22L]

(4) ColumnarToRow [codegen id : 1]
Input: [id#21L, k#22L]

(5) Filter [codegen id : 1]
Input     : [id#21L, k#22L]
Condition : (isnotnull(id#21L) AND (id#21L < 2))

(6) Project [codegen id : 1]
Output    : [k#22L]
Input     : [id#21L, k#22L]

(7) BroadcastExchange
Input: [k#22L]

(8) BroadcastHashJoin [codegen id : 2]
Left keys: List(k#20L)
Right keys: List(k#22L)
Join condition: None

(9) Project [codegen id : 2]
Output    : [id#19L, k#22L]
Input     : [id#19L, k#20L, k#22L]

===== Subqueries =====

Subquery:1 Hosting operator id = 1 Hosting Expression = k#20L IN subquery25
* HashAggregate (16)
+- Exchange (15)
   +- * HashAggregate (14)
      +- * Project (13)
         +- * Filter (12)
            +- * ColumnarToRow (11)
               +- Scan parquet default.df2 (10)

(10) Scan parquet default.df2
Output: [id#21L, k#22L]

(11) ColumnarToRow [codegen id : 1]
Input: [id#21L, k#22L]

(12) Filter [codegen id : 1]
Input     : [id#21L, k#22L]
Condition : (isnotnull(id#21L) AND (id#21L < 2))

(13) Project [codegen id : 1]
Output    : [k#22L]
Input     : [id#21L, k#22L]

(14) HashAggregate [codegen id : 1]
Input: [k#22L]

(15) Exchange
Input: [k#22L]

(16) HashAggregate [codegen id : 2]
Input: [k#22L]
```
### Why are the changes needed?
Without the fix, the subqueries are not printed in the explain plan.

### Does this PR introduce any user-facing change?
Yes. the explain output will be different.

### How was this patch tested?
Added a test case in ExplainSuite.

Closes #26039 from dilipbiswal/explain_subquery_issue.

Authored-by: Dilip Biswal <dkbiswal@gmail.com>
Signed-off-by: Xiao Li <gatorsmile@gmail.com>
2019-10-07 23:39:05 -07:00
Wenchen Fan 948a6e80fe [SPARK-28892][SQL][FOLLOWUP] add resolved logical plan for UPDATE TABLE
### What changes were proposed in this pull request?

Add back the resolved logical plan for UPDATE TABLE. It was in https://github.com/apache/spark/pull/25626 before but was removed later.

### Why are the changes needed?

In https://github.com/apache/spark/pull/25626, we decided not to add the update API in DS v2, but we still want to implement UPDATE for built-in sources like JDBC. We should at least add the resolved logical plan.

### Does this PR introduce any user-facing change?

no, UPDATE is still not supported yet.

### How was this patch tested?

new tests.

Closes #26025 from cloud-fan/update.

Authored-by: Wenchen Fan <wenchen@databricks.com>
Signed-off-by: Xiao Li <gatorsmile@gmail.com>
2019-10-07 23:36:26 -07:00
Huaxin Gao ffddfc8584 [SPARK-29269][PYTHON][ML] Pyspark ALSModel support getters/setters
### What changes were proposed in this pull request?
Add getters/setters in Pyspark ALSModel.

### Why are the changes needed?
To keep parity between python and scala.

### Does this PR introduce any user-facing change?
Yes.
add the following getters/setters to ALSModel
```
get/setUserCol
get/setItemCol
get/setColdStartStrategy
get/setPredictionCol
```

### How was this patch tested?
add doctest

Closes #25947 from huaxingao/spark-29269.

Authored-by: Huaxin Gao <huaxing@us.ibm.com>
Signed-off-by: zhengruifeng <ruifengz@foxmail.com>
2019-10-08 14:05:09 +08:00
gengjiaan 7d80aa553a [MINOR][BUILD] Fix an incorrect path in license file
### What changes were proposed in this pull request?
The `LICENSE` file has a minor issue: an incorrect path.
This PR will fix it.

### Why are the changes needed?
This is a minor bug.

### Does this PR introduce any user-facing change?
No

### How was this patch tested?
Existing UTs.

Closes #26050 from beliefer/resolve-minor-license-issue.

Authored-by: gengjiaan <gengjiaan@360.cn>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
2019-10-08 14:33:03 +09:00