Commit graph

30856 commits

Author SHA1 Message Date
Erik Krogen 70a15868fc [SPARK-35259][SHUFFLE] Update ExternalBlockHandler Timer variables to expose correct units
### What changes were proposed in this pull request?
`ExternalBlockHandler` exposes 4 metrics which are Dropwizard `Timer` metrics, and are named with a `millis` suffix:
```
    private final Timer openBlockRequestLatencyMillis = new Timer();
    private final Timer registerExecutorRequestLatencyMillis = new Timer();
    private final Timer fetchMergedBlocksMetaLatencyMillis = new Timer();
    private final Timer finalizeShuffleMergeLatencyMillis = new Timer();
```
However, these Dropwizard Timers use nanoseconds by default ([documentation](https://metrics.dropwizard.io/3.2.3/getting-started.html#timers)).

This causes `YarnShuffleServiceMetrics` to expose confusingly-named metrics like `openBlockRequestLatencyMillis_nanos_max` (the actual values are currently in nanos).

This PR adds a new `Timer` subclass, `TimerWithCustomTimeUnit`, which accepts a `TimeUnit` at creation time and exposes timing information using this time unit when values are read. Internally, values are still stored with nanosecond-level precision. The `Timer` metrics within `ExternalBlockHandler` are updated to use the new class with milliseconds as the unit. The logic to include the `nanos` suffix in the metric name within `YarnShuffleServiceMetrics` has also been removed, with the assumption that the metric name itself includes the units.
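For illustration, here is the same idea sketched in Python; the real class is a Java subclass of Dropwizard's `Timer`, so everything below (names included) is illustrative rather than the actual implementation:

```py
class TimerSketch:
    def __init__(self, nanos_per_unit):        # e.g. 1_000_000 for milliseconds
        self._nanos_per_unit = nanos_per_unit
        self._durations_nanos = []

    def update(self, duration_nanos):
        # internal storage keeps full nanosecond precision
        self._durations_nanos.append(duration_nanos)

    def snapshot(self):
        # conversion to the configured unit happens only when values are read
        return [d / self._nanos_per_unit for d in self._durations_nanos]
```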

### Does this PR introduce _any_ user-facing change?
Yes, there are two changes.
First, the names for metrics exposed by `ExternalBlockHandler` via `YarnShuffleServiceMetrics` such as `openBlockRequestLatencyMillis_nanos_max` and `openBlockRequestLatencyMillis_nanos_50thPercentile` have been changed to remove the `_nanos` suffix. This would be considered a breaking change, but these names were only exposed as part of #32388, which has not yet been released (slated for 3.2.0). The new names look like `openBlockRequestLatencyMillis_max` and `openBlockRequestLatencyMillis_50thPercentile`.
Second, the values of the metrics themselves have changed, to expose milliseconds instead of nanoseconds. Note that this does not affect metrics such as `openBlockRequestLatencyMillis_count` or `openBlockRequestLatencyMillis_rate1`, only the `Snapshot`-related metrics (`max`, `median`, percentiles, etc.). For the YARN case, these metrics were also introduced by #32388, and thus also have not yet been released. It was possible for the nanosecond values to be consumed by some other metrics reporter reading the Dropwizard metrics directly, but I'm not aware of any such usages.

### How was this patch tested?
Unit tests have been updated.

Closes #33116 from xkrogen/xkrogen-SPARK-35259-ess-fix-metric-unit-prefix.

Authored-by: Erik Krogen <xkrogen@apache.org>
Signed-off-by: yi.wu <yi.wu@databricks.com>
2021-07-24 21:26:18 +08:00
Chandni Singh 09e1c61272 [SPARK-36255][SHUFFLE][CORE] Stop pushing and retrying on FileNotFound exceptions
### What changes were proposed in this pull request?
Once the shuffle is cleaned up by the `ContextCleaner`, the shuffle files are deleted by the executors. In this case, the push of the shuffle data by the executors can throw `FileNotFoundException`s because the shuffle files are deleted. When this exception is thrown from the `shuffle-block-push-thread`, it causes the executor to exit. Both the `shuffle-block-push` threads and the netty event-loops will encounter `FileNotFoundException`s in this case. The fix here stops these threads from pushing more blocks when they encounter `FileNotFoundException`. When the exception is from the `shuffle-block-push-thread`, it is handled and logged as a warning instead of failing the executor.

### Why are the changes needed?
This fixes the bug which causes executors to exit when they are instructed to clean up shuffle data.
Below is the stacktrace of this exception:
```
21/06/17 16:03:57 ERROR util.SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[block-push-thread-1,5,main]
java.lang.Error: java.io.IOException: Error in opening FileSegmentManagedBuffer{file=********/application_1619720975011_11057757/blockmgr-560cb4cf-9918-4ea7-a007-a16c5e3a35fe/0a/shuffle_1_690_0.data, offset=10640, length=190}
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1155)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Error in opening FileSegmentManagedBuffer{file=*******/application_1619720975011_11057757/blockmgr-560cb4cf-9918-4ea7-a007-a16c5e3a35fe/0a/shuffle_1_690_0.data, offset=10640, length=190}
at org.apache.spark.network.buffer.FileSegmentManagedBuffer.nioByteBuffer(FileSegmentManagedBuffer.java:89)
at org.apache.spark.shuffle.ShuffleWriter.sliceReqBufferIntoBlockBuffers(ShuffleWriter.scala:294)
at org.apache.spark.shuffle.ShuffleWriter.org$apache$spark$shuffle$ShuffleWriter$$sendRequest(ShuffleWriter.scala:270)
at org.apache.spark.shuffle.ShuffleWriter.org$apache$spark$shuffle$ShuffleWriter$$pushUpToMax(ShuffleWriter.scala:191)
at org.apache.spark.shuffle.ShuffleWriter$$anon$2$$anon$4.run(ShuffleWriter.scala:244)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
... 2 more
Caused by: java.io.FileNotFoundException: ******/application_1619720975011_11057757/blockmgr-560cb4cf-9918-4ea7-a007-a16c5e3a35fe/0a/shuffle_1_690_0.data (No such file or directory)
at java.io.RandomAccessFile.open0(Native Method)
at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
at org.apache.spark.network.buffer.FileSegmentManagedBuffer.nioByteBuffer(FileSegmentManagedBuffer.java:62)
```

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Added a unit test to verify that no more data is pushed when `FileNotFoundException` is encountered. Also verified in our environment.

Closes #33477 from otterc/SPARK-36255.

Authored-by: Chandni Singh <singh.chandni@gmail.com>
Signed-off-by: yi.wu <yi.wu@databricks.com>
2021-07-24 21:09:11 +08:00
Dominik Gehl ae1c20ee0d [SPARK-36225][PYTHON][DOCS] Use DataFrame in python docstrings
### What changes were proposed in this pull request?
Change references to `Dataset` in Python docstrings to `DataFrame`.

### Why are the changes needed?
There is no `Dataset` class in PySpark.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Doc change only

Closes #33438 from dominikgehl/feature/SPARK-36225.

Lead-authored-by: Dominik Gehl <dog@open.ch>
Co-authored-by: Dominik Gehl <gehl@fastmail.fm>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-07-24 16:58:10 +09:00
Takuya UESHIN 663cbdfbe5 [SPARK-36279][INFRA][PYTHON] Fix lint-python to work with Python 3.9
### What changes were proposed in this pull request?

Fix `lint-python` to pick up `PYTHON_EXECUTABLE` from the environment variable first, so the Python executable can be switched, and explicitly set `PYTHON_EXECUTABLE` to `python3.9` in CI.

### Why are the changes needed?

Currently `lint-python` uses `python3`, but it's not the one we expect in CI.
As a result, the `black` check is not working:

```
The python3 -m black command was not found. Skipping black checks for now.
```

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

The `black` check in `lint-python` should work.

Closes #33507 from ueshin/issues/SPARK-36279/lint-python.

Authored-by: Takuya UESHIN <ueshin@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-07-24 16:49:11 +09:00
Xinrong Meng 85adc2ff60 [SPARK-36274][PYTHON] Fix equality comparison of unordered Categoricals
### What changes were proposed in this pull request?
Fix equality comparison of unordered Categoricals.

### Why are the changes needed?
The codes of a Categorical Series are used for Series equality comparison. However, that doesn't hold for unordered Categoricals, where the same value can be assigned different codes when the same categories are listed in a different order.

So we should map the codes back to their values and compare the values instead.
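For example, in plain pandas (the same data as the example below), the codes disagree even though the values mostly do not:

```py
import pandas as pd

left = pd.Categorical(list("abca"))                           # categories: ['a', 'b', 'c']
right = pd.Categorical(list("bcaa"), categories=list("bca"))  # categories: ['b', 'c', 'a']

print(list(left.codes))    # [0, 1, 2, 0]
print(list(right.codes))   # [0, 1, 2, 2]  -- comparing codes gives the wrong answer
print([x == y for x, y in zip(left, right)])   # [False, False, False, True] -- correct
```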

### Does this PR introduce _any_ user-facing change?
Yes.
From:
```py
>>> psser1 = ps.Series(pd.Categorical(list("abca")))
>>> psser2 = ps.Series(pd.Categorical(list("bcaa"), categories=list("bca")))
>>> with ps.option_context("compute.ops_on_diff_frames", True):
...     (psser1 == psser2).sort_index()
...
0     True
1     True
2     True
3    False
dtype: bool
```

To:
```py
>>> psser1 = ps.Series(pd.Categorical(list("abca")))
>>> psser2 = ps.Series(pd.Categorical(list("bcaa"), categories=list("bca")))
>>> with ps.option_context("compute.ops_on_diff_frames", True):
...     (psser1 == psser2).sort_index()
...
0    False
1    False
2    False
3     True
dtype: bool
```

### How was this patch tested?
Unit tests.

Closes #33497 from xinrong-databricks/cat_bug.

Authored-by: Xinrong Meng <xinrong.meng@databricks.com>
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
2021-07-23 18:30:59 -07:00
Takuya UESHIN e12bc4d31d [SPARK-36264][PYTHON] Add reorder_categories to CategoricalAccessor and CategoricalIndex
### What changes were proposed in this pull request?

Add `reorder_categories` to `CategoricalAccessor` and `CategoricalIndex`.

### Why are the changes needed?

We should implement `reorder_categories` in `CategoricalAccessor` and `CategoricalIndex`.

### Does this PR introduce _any_ user-facing change?

Yes, users will be able to use `reorder_categories`.
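A usage sketch, assuming the method mirrors the pandas API (with `import pyspark.pandas as ps`):

```py
>>> idx = ps.CategoricalIndex(["a", "b", "c"])
>>> idx.reorder_categories(["c", "b", "a"])
CategoricalIndex(['a', 'b', 'c'], categories=['c', 'b', 'a'], ordered=False, dtype='category')
```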

### How was this patch tested?

Added some tests.

Closes #33499 from ueshin/issues/SPARK-36264/reorder_categories.

Authored-by: Takuya UESHIN <ueshin@databricks.com>
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
2021-07-23 17:19:20 -07:00
Dongjoon Hyun bad7a929af [MINOR][INFRA] Add enabled_merge_buttons to .asf.yaml explicitly
### What changes were proposed in this pull request?

This PR aims to add the AS-IS `enabled_merge_buttons` policy explicitly. The AS-IS policy was introduced via  https://issues.apache.org/jira/browse/INFRA-18656.

### Why are the changes needed?

Currently, this policy is maintained in a self-serve manner. It would be great to have it stated explicitly for newcomers. Here is the official documentation:
- https://cwiki.apache.org/confluence/display/INFRA/git+-+.asf.yaml+features#Git.asf.yamlfeatures-Mergebuttons

### Does this PR introduce _any_ user-facing change?

No. This is a committer-only feature and there is no change in terms of the policy.

### How was this patch tested?

N/A

Closes #33505 from dongjoon-hyun/minor.

Authored-by: Dongjoon Hyun <dongjoon@apache.org>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
2021-07-23 15:29:44 -07:00
Holden Karau bee279997f [SPARK-35956][K8S] Support auto assigning labels to decommissioning pods
### What changes were proposed in this pull request?

Add a new configuration flag that allows Spark, when decommissioning or exiting a pod, to hint to the scheduler that this pod will have the least impact in a pre-emption event.

### Why are the changes needed?

Kubernetes added the concept of pod disruption budgets (which can have selectors based on labels), as well as pod deletion, for providing hints to the scheduler as to what we would prefer to have pre-empted.

### Does this PR introduce _any_ user-facing change?

A new configuration flag.

### How was this patch tested?

The deletion unit test was extended.

Closes #33270 from holdenk/SPARK-35956-support-auto-assigning-labels-to-decommissioning-pods.

Lead-authored-by: Holden Karau <hkarau@netflix.com>
Co-authored-by: Holden Karau <holden@pigscanfly.ca>
Co-authored-by: Holden Karau <hkarau@apple.com>
Signed-off-by: Holden Karau <hkarau@netflix.com>
2021-07-23 15:21:38 -07:00
Liang-Chi Hsieh c2de111ec5 [SPARK-36270][BUILD][FOLLOWUP] Reduce metaspace size for pyspark
### What changes were proposed in this pull request?

Notice that the pyspark GA module `pyspark-pandas-slow` sometimes still exits with return code 137 (SIGKILL, typically from the OOM killer). Try to reduce its metaspace size further.

### Why are the changes needed?

Fix return code 137 for pyspark GA module.

### Does this PR introduce _any_ user-facing change?

No, dev only.

### How was this patch tested?

GA

Closes #33496 from viirya/test-ga-followup.

Authored-by: Liang-Chi Hsieh <viirya@gmail.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
2021-07-23 14:20:00 -07:00
William Hyun 32f3e217f2 [SPARK-36276][BUILD][TESTS] Update maven-checkstyle-plugin to 3.1.2 and checkstyle to 8.43
### What changes were proposed in this pull request?
This PR aims to update maven-checkstyle-plugin to 3.1.2 and checkstyle to 8.43.
### Why are the changes needed?
This will bring the latest bug fixes and improvements from 8.40 to 8.43.
- https://checkstyle.sourceforge.io/releasenotes.html#Release_8.43

Note that 8.44 has a false-positive bug in the ArrayType checker.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Pass the GHA.

Closes #33500 from williamhyun/SPARK-36276.

Authored-by: William Hyun <william@apache.org>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
2021-07-23 14:18:11 -07:00
Almog Tavor 530c8addbb [SPARK-36273][SHUFFLE] Fix identical values comparison
This commit fixes a comparison that mistakenly used the `appAttemptId` variable instead of `o.appAttemptId`; as written, it compared identical values. Jira issue report: SPARK-36273.

### What changes were proposed in this pull request?
This is a patch for SPARK-35546 which is needed for push-based shuffle.

### Why are the changes needed?
A very minor fix: the comparison needs to reference the field of the other `FinalizeShuffleMerge` instance.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
No unit tests were added. It's a straightforward logical change.

Closes #33493 from almogtavor/patch-1.

Authored-by: Almog Tavor <70065337+almogtavor@users.noreply.github.com>
Signed-off-by: Sean Owen <srowen@gmail.com>
2021-07-23 12:46:31 -05:00
dgd-contributor fc29c91f27 [SPARK-35561][SQL] Remove leading zeros from empty static number type partition
### What changes were proposed in this pull request?

This PR removes leading zeros from a static number-type partition value when we insert into a partitioned table with an empty (zero-row) selection, making the result consistent with the non-empty case, as the examples below show.

create table

    CREATE TABLE `table_int` ( `id` INT, `c_string` STRING, `p_int` int)
    USING parquet PARTITIONED BY (p_int);

insert

    insert overwrite table table_int partition (p_int='00011')
    select 1, 'c string'
    where true ;

|partition|
|---------|
|p_int=11|

    insert overwrite table table_int partition (p_int='00012')
    select 1, 'c string'
    where false ;

|partition|
|---------|
|p_int=00012|

### Why are the changes needed?

This PR produces a consistent result whether the inserted selection is empty or non-empty.

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

Add Unit test

Closes #33291 from dgd-contributor/35561_insert_integer_partition_fail_when_empty.

Authored-by: dgd-contributor <dgd_contributor@viettel.com.vn>
Signed-off-by: Sean Owen <srowen@gmail.com>
2021-07-23 12:41:36 -05:00
yangjie01 f61d5993ea [SPARK-36242][CORE] Ensure spill file closed before set success = true in ExternalSorter.spillMemoryIteratorToDisk method
### What changes were proposed in this pull request?
The main change of this PR is to move `writer.close()` before `success = true`, ensuring the spill file is closed before `success` is set in the `ExternalSorter.spillMemoryIteratorToDisk` method.

### Why are the changes needed?
Avoid setting `success = true` first and then having the close of the spill file fail afterwards.
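A runnable Python sketch of the fixed ordering (the names and file handling here are hypothetical; the real code is the Scala `spillMemoryIteratorToDisk`):

```py
import os
import tempfile

def spill_to_disk(records):
    fd, path = tempfile.mkstemp(suffix=".spill")
    writer = os.fdopen(fd, "w")
    success = False
    try:
        for record in records:
            writer.write(f"{record}\n")
        writer.close()   # closed BEFORE the flag is set, so a failing
        success = True   # close is now treated as a failed spill
    finally:
        if not success:
            try:
                writer.close()       # best-effort close
            finally:
                os.remove(path)      # the partial spill file must not survive
    return path
```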

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?

- Pass the Jenkins or GitHub Action
- Add a new test case to check `The spill file should not exists if writer close fails`

Closes #33460 from LuciferYang/external-sorter-spill-close.

Authored-by: yangjie01 <yangjie01@baidu.com>
Signed-off-by: yi.wu <yi.wu@databricks.com>
2021-07-23 23:15:13 +08:00
Angerszhuuuu 3ff8c9f9d6 [SPARK-34399][SQL] Add commit duration to SQL tab's graph node
### What changes were proposed in this pull request?
Since we already log the commit time, I think this is useful and we can surface it to users directly in the SQL tab's UI.

![image](https://user-images.githubusercontent.com/46485123/126647754-dc3ba83a-5391-427c-8a67-e6af46e82290.png)

### Why are the changes needed?
Let users see the commit duration directly.

### Does this PR introduce _any_ user-facing change?
Users can see the file commit duration in the SQL tab's SQL plan graph.

### How was this patch tested?
Manually tested.

Closes #31522 from AngersZhuuuu/SPARK-34399.

Lead-authored-by: Angerszhuuuu <angers.zhu@gmail.com>
Co-authored-by: AngersZhuuuu <angers.zhu@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2021-07-23 11:06:07 +00:00
Dominik Gehl 701756ac95 [SPARK-36226][PYTHON][DOCS] Improve python docstring links to other classes
### What changes were proposed in this pull request?
Add links to other classes in the Python documentation.

### Why are the changes needed?
Python docstring cross-reference syntax wasn't used consistently everywhere.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Documentation change only

Closes #33440 from dominikgehl/feature/python-docstrings.

Authored-by: Dominik Gehl <dog@open.ch>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-07-23 19:17:51 +09:00
Dominik Gehl 382fe44b55 [SPARK-36258][PYTHON] Exposing functionExists in pyspark sql catalog
### What changes were proposed in this pull request?
Expose `functionExists` in the PySpark SQL catalog.

### Why are the changes needed?
The method was available in Scala but not in PySpark.

### Does this PR introduce _any_ user-facing change?
Yes, an additional method is exposed.
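A usage sketch (assuming an active `SparkSession` named `spark`; built-in functions such as `to_date` should be found):

```py
>>> spark.catalog.functionExists("to_date")
True
>>> spark.catalog.functionExists("no_such_function")
False
```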

### How was this patch tested?
Unit tests

Closes #33481 from dominikgehl/SPARK-36258.

Authored-by: Dominik Gehl <dog@open.ch>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-07-23 19:15:41 +09:00
Liang-Chi Hsieh fd36ed4550 [SPARK-36270][BUILD] Change memory settings for enabling GA
### What changes were proposed in this pull request?

Trying to adjust build memory settings and serial execution to re-enable GA.

### Why are the changes needed?

GA tests have been failing recently with return code 137. We need to adjust build settings to make GA work.

### Does this PR introduce _any_ user-facing change?

No, dev only.

### How was this patch tested?

GA

Closes #33447 from viirya/test-ga.

Lead-authored-by: Liang-Chi Hsieh <viirya@gmail.com>
Co-authored-by: Hyukjin Kwon <gurwls223@gmail.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-07-23 19:10:45 +09:00
Takuya UESHIN 2fe12a7520 [SPARK-36261][PYTHON] Add remove_unused_categories to CategoricalAccessor and CategoricalIndex
### What changes were proposed in this pull request?

Add `remove_unused_categories` to `CategoricalAccessor` and `CategoricalIndex`.

### Why are the changes needed?

We should implement `remove_unused_categories` in `CategoricalAccessor` and `CategoricalIndex`.

### Does this PR introduce _any_ user-facing change?

Yes, users will be able to use `remove_unused_categories`.
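A usage sketch, assuming the method mirrors the pandas behavior:

```py
>>> idx = ps.CategoricalIndex(["a", "b", "b"], categories=["a", "b", "c"])
>>> idx.remove_unused_categories()
CategoricalIndex(['a', 'b', 'b'], categories=['a', 'b'], ordered=False, dtype='category')
```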

### How was this patch tested?

Added some tests.

Closes #33485 from ueshin/issues/SPARK-36261/remove_unused_categories.

Authored-by: Takuya UESHIN <ueshin@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-07-23 14:04:59 +09:00
Hyukjin Kwon d6bc8cd681 [SPARK-36268][PYTHON] Set the lowerbound of mypy version to 0.910
### What changes were proposed in this pull request?

This PR proposes to set the lowerbound of mypy version to use in the testing script.

### Why are the changes needed?

https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/141519/console

```
python/pyspark/mllib/tree.pyi:29: error: Overloaded function signatures 1 and 2 overlap with incompatible return types
python/pyspark/mllib/tree.pyi:38: error: Overloaded function signatures 1 and 2 overlap with incompatible return types
python/pyspark/mllib/feature.pyi:34: error: Overloaded function signatures 1 and 2 overlap with incompatible return types
python/pyspark/mllib/feature.pyi:42: error: Overloaded function signatures 1 and 2 overlap with incompatible return types
python/pyspark/mllib/feature.pyi:48: error: Overloaded function signatures 1 and 2 overlap with incompatible return types
python/pyspark/mllib/feature.pyi:54: error: Overloaded function signatures 1 and 2 overlap with incompatible return types
python/pyspark/mllib/feature.pyi:76: error: Overloaded function signatures 1 and 2 overlap with incompatible return types
python/pyspark/mllib/feature.pyi:124: error: Overloaded function signatures 1 and 2 overlap with incompatible return types
python/pyspark/mllib/feature.pyi:165: error: Overloaded function signatures 1 and 2 overlap with incompatible return types
python/pyspark/mllib/clustering.pyi:45: error: Overloaded function signatures 1 and 2 overlap with incompatible return types
python/pyspark/mllib/clustering.pyi:72: error: Overloaded function signatures 1 and 2 overlap with incompatible return types
python/pyspark/mllib/classification.pyi:39: error: Overloaded function signatures 1 and 2 overlap with incompatible return types
python/pyspark/mllib/classification.pyi:52: error: Overloaded function signatures 1 and 2 overlap with incompatible return types
Found 13 errors in 4 files (checked 314 source files)
1
```

Jenkins installed mypy in SPARK-32797, but it seems the installed version is not the same as the one in GitHub Actions.

It seems difficult to make the codebase compatible with multiple mypy versions. Therefore, this PR sets the lowerbound.

### Does this PR introduce _any_ user-facing change?

No, dev-only.

### How was this patch tested?

Jenkins job in this PR should test it out.

Also manually tested:

Without mypy:

```
...
flake8 checks passed.

The mypy command was not found. Skipping for now.
```

With mypy 0.812:

```
...
flake8 checks passed.

The minimum mypy version needs to be 0.910. Your current version is mypy 0.812. Skipping for now.
```

With mypy 0.910:

```
...
flake8 checks passed.

starting mypy test...
mypy checks passed.

all lint-python tests passed!
```

Closes #33487 from HyukjinKwon/SPARK-36268.

Authored-by: Hyukjin Kwon <gurwls223@apache.org>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-07-23 12:28:16 +09:00
Xinrong Meng 8b3d84bb7e [SPARK-36248][PYTHON] Add rename_categories to CategoricalAccessor and CategoricalIndex
### What changes were proposed in this pull request?
Add rename_categories to CategoricalAccessor and CategoricalIndex.

### Why are the changes needed?
rename_categories is supported in pandas CategoricalAccessor and CategoricalIndex. We ought to follow pandas.

### Does this PR introduce _any_ user-facing change?
Yes. `rename_categories` is supported in pandas API on Spark now.

```py
# CategoricalIndex
>>> psser = ps.CategoricalIndex(["a", "a", "b"])
>>> psser.rename_categories([0, 1])
CategoricalIndex([0, 0, 1], categories=[0, 1], ordered=False, dtype='category')
>>> psser.rename_categories({'a': 'A', 'c': 'C'})
CategoricalIndex(['A', 'A', 'b'], categories=['A', 'b'], ordered=False, dtype='category')
>>> psser.rename_categories(lambda x: x.upper())
CategoricalIndex(['A', 'A', 'B'], categories=['A', 'B'], ordered=False, dtype='category')

# CategoricalAccessor
>>> s = ps.Series(["a", "a", "b"], dtype="category")
>>> s.cat.rename_categories([0, 1])
0    0
1    0
2    1
dtype: category
Categories (2, int64): [0, 1]
>>> s.cat.rename_categories({'a': 'A', 'c': 'C'})
0    A
1    A
2    b
dtype: category
Categories (2, object): ['A', 'b']
>>> s.cat.rename_categories(lambda x: x.upper())
0    A
1    A
2    B
dtype: category
Categories (2, object): ['A', 'B']
```

### How was this patch tested?
Unit tests.

Closes #33471 from xinrong-databricks/category_rename_categories.

Authored-by: Xinrong Meng <xinrong.meng@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-07-23 12:26:24 +09:00
Xinrong Meng 75fd1f5b82 [SPARK-36189][PYTHON] Improve bool, string, numeric DataTypeOps tests by avoiding joins
### What changes were proposed in this pull request?
Improve bool, string, numeric DataTypeOps tests by avoiding joins.

Previously, bool, string, numeric DataTypeOps tests are conducted between two different Series.
After the PR, bool, string, numeric DataTypeOps tests should perform on a single DataFrame.

### Why are the changes needed?
A considerable number of DataTypeOps tests have operations on different Series, so joining is needed, which takes a long time.
We shall avoid joins for a shorter test duration.

The majority of joins happen in bool, string, numeric DataTypeOps tests, so we improve them first.
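To illustrate the difference (assuming `import pyspark.pandas as ps`):

```py
# before: operands come from two different Series, so aligning them
# requires an internal join (and the compute.ops_on_diff_frames option)
psser1 = ps.Series([1, 2, 3])
psser2 = ps.Series([4, 5, 6])
with ps.option_context("compute.ops_on_diff_frames", True):
    result = psser1 + psser2

# after: operands are columns of a single DataFrame, so no join is needed
psdf = ps.DataFrame({"this": [1, 2, 3], "that": [4, 5, 6]})
result = psdf["this"] + psdf["that"]
```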

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Unit tests.

Closes #33402 from xinrong-databricks/datatypeops_diffframe.

Authored-by: Xinrong Meng <xinrong.meng@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-07-23 12:20:35 +09:00
Takuya UESHIN a76a087f7f [SPARK-36265][PYTHON] Use __getitem__ instead of getItem to suppress warnings
### What changes were proposed in this pull request?

Use `Column.__getitem__` instead of `Column.getItem` to suppress warnings.

### Why are the changes needed?

In pandas API on Spark code base, there are some places using `Column.getItem` with `Column` object, but it shows a deprecation warning.

### Does this PR introduce _any_ user-facing change?

Yes, users won't see the warnings anymore.

- before

```py
>>> s = ps.Series(list("abbccc"), dtype="category")
>>> s.astype(str)
/path/to/spark/python/pyspark/sql/column.py:322: FutureWarning: A column as 'key' in getItem is deprecated as of Spark 3.0, and will not be supported in the future release. Use `column[key]` or `column.key` syntax instead.
  warnings.warn(
0    a
1    b
2    b
3    c
4    c
5    c
dtype: object
```

- after

```py
>>> s = ps.Series(list("abbccc"), dtype="category")
>>> s.astype(str)
0    a
1    b
2    b
3    c
4    c
5    c
dtype: object
```

### How was this patch tested?

Existing tests.

Closes #33486 from ueshin/issues/SPARK-36265/getitem.

Authored-by: Takuya UESHIN <ueshin@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-07-23 11:27:31 +09:00
Dongjoon Hyun a1a197403b [SPARK-36262][BUILD] Upgrade ZSTD-JNI to 1.5.0-4
### What changes were proposed in this pull request?

This PR aims to upgrade ZSTD-JNI to 1.5.0-4.

### Why are the changes needed?

ZSTD-JNI 1.5.0-3 has a packaging issue. 1.5.0-4 is recommended to be used instead.
- https://github.com/luben/zstd-jni/issues/181#issuecomment-885138495

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Pass the CIs.

Closes #33483 from dongjoon-hyun/SPARK-36262.

Authored-by: Dongjoon Hyun <dongjoon@apache.org>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
2021-07-22 14:03:59 -07:00
Sean Owen b69c26833c [SPARK-35848][MLLIB] Optimize some treeAggregates in MLlib by delaying allocations
### What changes were proposed in this pull request?

Optimize some treeAggregates in MLlib by delaying allocation of (and thus not sending around) large arrays of zeroes.
This uses the same idea as in https://github.com/apache/spark/pull/23600/files

### Why are the changes needed?

Allocating huge arrays of zeroes takes additional memory and network I/O which is unnecessary in some cases. It can cause operations to run out of memory that might otherwise succeed. Specifically, this should prevent the 'zero' value from having to be (pointlessly) checked for serializability, which can fail when passing through the default JavaSerializer; it would also prevent allocating and sending large 'zero' values for an empty partition in the aggregate.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Existing tests.

Closes #33443 from srowen/SPARK-35848.

Authored-by: Sean Owen <srowen@gmail.com>
Signed-off-by: Sean Owen <srowen@gmail.com>
2021-07-22 13:59:09 -05:00
Sean Owen 518f00fd78 [SPARK-35310][MLLIB] Update to breeze 1.2
### What changes were proposed in this pull request?

Update to the latest breeze 1.2

### Why are the changes needed?

Minor bug fixes

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Existing tests

Closes #33449 from srowen/SPARK-35310.

Authored-by: Sean Owen <srowen@gmail.com>
Signed-off-by: Sean Owen <srowen@gmail.com>
2021-07-22 13:58:01 -05:00
Kousuke Saruta 07fa38e2c1 [SPARK-35815][SQL] Allow delayThreshold for watermark to be represented as ANSI interval literals
### What changes were proposed in this pull request?

This PR extends the way to represent `delayThreshold` with ANSI interval literals for watermark.

### Why are the changes needed?

A `delayThreshold` is semantically an interval value, so it should be representable as an ANSI interval literal as well as in the conventional `1 second` form.
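For example (shown via the Python API, which passes the threshold string through; `df` is assumed to be a streaming DataFrame with an `eventTime` column, and the ANSI spelling is assumed from the standard syntax):

```py
# conventional form
df.withWatermark("eventTime", "10 seconds")
# ANSI interval literal form enabled by this change
df.withWatermark("eventTime", "INTERVAL '10' SECOND")
```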

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

New tests.

Closes #33456 from sarutak/delayThreshold-interval.

Authored-by: Kousuke Saruta <sarutak@oss.nttdata.com>
Signed-off-by: Max Gekk <max.gekk@gmail.com>
2021-07-22 17:36:22 +03:00
Dominik Gehl 3a1db2ddd4 [SPARK-36209][PYTHON][DOCS] Fix link to pyspark Dataframe documentation
### What changes were proposed in this pull request?
Bugfix: link to the correct location of the PySpark DataFrame documentation.

### Why are the changes needed?
The current link returns "Not found".

### Does this PR introduce _any_ user-facing change?
Website fix

### How was this patch tested?
Documentation change

Closes #33420 from dominikgehl/feature/SPARK-36209.

Authored-by: Dominik Gehl <dog@open.ch>
Signed-off-by: Sean Owen <srowen@gmail.com>
2021-07-22 08:07:00 -05:00
Gengliang Wang ae9f6126fb [SPARK-36257][SQL] Updated the version of TimestampNTZ related changes as 3.3.0
### What changes were proposed in this pull request?

As we decided to release the TimestampNTZ type in Spark 3.3, we should update the versions of the TimestampNTZ-related changes to 3.3.0.

### Why are the changes needed?

Correct the versions in documentation/code comment.

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

Existing UT

Closes #33478 from gengliangwang/updateVersion.

Authored-by: Gengliang Wang <gengliang@apache.org>
Signed-off-by: Gengliang Wang <gengliang@apache.org>
2021-07-22 21:01:29 +08:00
Kousuke Saruta 13aefd6a66 [SPARK-36256][BUILD] Upgrade lz4-java to 1.8.0
### What changes were proposed in this pull request?

This PR upgrades `lz4-java` to `1.8.0`, which includes not only performance improvements but also Darwin aarch64 support.
https://github.com/lz4/lz4-java/releases/tag/1.8.0
https://github.com/lz4/lz4-java/blob/1.8.0/CHANGES.md

### Why are the changes needed?

For providing better performance and platform support.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

CI.

Closes #33476 from sarutak/upgrade-lz4-java-1.8.0.

Authored-by: Kousuke Saruta <sarutak@oss.nttdata.com>
Signed-off-by: Gengliang Wang <gengliang@apache.org>
2021-07-22 20:39:59 +08:00
itholic 86471ad668 [SPARK-36239][PYTHON][DOCS] Remove some APIs from documentation
### What changes were proposed in this pull request?

This PR proposes removing some APIs from pandas-on-Spark documentation.

Because they can easily be worked around via Spark DataFrame or Column functions, they might be removed in the future.
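For instance, a sketch of one such workaround (assuming `psdf` is a pandas-on-Spark DataFrame):

```py
# instead of the hidden psdf.spark.print_schema(), drop down to the
# underlying Spark DataFrame explicitly
psdf.to_spark().printSchema()
```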

### Why are the changes needed?

Because we don't want to expose some functions as a public API.

### Does this PR introduce _any_ user-facing change?

The APIs `(Series|Index).spark.data_type`, `(Series|Index).spark.nullable`, `DataFrame.spark.schema`, `DataFrame.spark.print_schema`, `DataFrame.pandas_on_spark.attach_id_column`, `DataFrame.spark.checkpoint`, `DataFrame.spark.localcheckpoint` and `DataFrame.spark.explain` are removed from the documentation.

### How was this patch tested?

Manually build the documents.

Closes #33458 from itholic/SPARK-36239.

Authored-by: itholic <haejoon.lee@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-07-22 19:46:40 +09:00
gengjiaan 900b72a9cd [SPARK-35088][SQL][FOLLOWUP] Add test case for TimestampNTZ sequence with default step
### What changes were proposed in this pull request?
This PR follows up https://github.com/apache/spark/pull/33360 and add test case for `TimestampNTZ` sequence with default step.

### Why are the changes needed?
Improve test coverage.

### Does this PR introduce _any_ user-facing change?
'No'.
Just add test cases.

### How was this patch tested?
New tests.

Closes #33462 from beliefer/SPARK-36090-followup.

Authored-by: gengjiaan <gengjiaan@360.cn>
Signed-off-by: Gengliang Wang <gengliang@apache.org>
2021-07-22 17:53:22 +08:00
Angerszhuuuu bb09bd2e2d [SPARK-36156][SQL] SCRIPT TRANSFORM ROW FORMAT DELIMITED should respect NULL DEFINED AS and default value should be \N
### What changes were proposed in this pull request?
SCRIPT TRANSFORM ROW FORMAT DELIMITED should respect `NULL DEFINED AS` and default value should be `\N`
![image](https://user-images.githubusercontent.com/46485123/125775377-611d4f06-f9e5-453a-990d-5a0018774f43.png)
![image](https://user-images.githubusercontent.com/46485123/125775387-6618bd0c-78d8-4457-bcc2-12dd70522946.png)

### Why are the changes needed?
To keep consistency with Hive.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Added UT

Closes #33363 from AngersZhuuuu/SPARK-36156.

Authored-by: Angerszhuuuu <angers.zhu@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2021-07-22 17:28:37 +08:00
Dominik Gehl 2c35604044 [SPARK-36243][SQL][PYTHON][DOCS] Fixing pyspark tableExists issue with temporary views
### What changes were proposed in this pull request?
Additional tests for pyspark tableExists with regard to views and temporary views

### Why are the changes needed?
The Scala documentation indicates that `tableExists` works for tables/views and also temporary views. These unit tests try to verify that claim. While views seem OK, temporary views don't seem to work.
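The kind of check these tests exercise (a sketch; the view name is hypothetical):

```py
>>> spark.range(1).createOrReplaceTempView("tmp_view")
>>> spark.catalog.tableExists("tmp_view")  # per the Scala docs, this should be True
```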

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
tests

Closes #33461 from dominikgehl/bug/SPARK-36243.

Lead-authored-by: Dominik Gehl <dog@open.ch>
Co-authored-by: Dominik Gehl <gehl@fastmail.fm>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-07-22 18:12:01 +09:00
Enrico Minack 4e9c1b8ba0 [SPARK-34806][SQL] Add Observation helper for Dataset.observe
### What changes were proposed in this pull request?
This pull request introduces a helper class that simplifies usage of `Dataset.observe()` for batch datasets:

    val observation = Observation("name")
    val observed = ds.observe(observation, max($"id").as("max_id"))
    observed.count()
    val metrics = observation.get

### Why are the changes needed?
Currently, users are required to implement the `QueryExecutionListener` interface to retrieve the metrics, as well as apply some knowledge on threading and locking to pull the metrics over to the main thread. With the helper class, metrics can be retrieved from batch dataset processing with three lines of code (the action on the observed dataset does not count as a line of code here).

### Does this PR introduce _any_ user-facing change?
Yes, one new class and one `Dataset` method.

### How was this patch tested?
Adds a unit test to `DataFrameSuite`, similar to `"get observable metrics by callback"` in `DataFrameCallbackSuite`.

Closes #33422 from EnricoMi/branch-observation.

Authored-by: Enrico Minack <github@enrico.minack.dev>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2021-07-22 08:57:04 +00:00
itholic d1a037a27c [SPARK-35810][PYTHON][FOLLOWUP] Deprecate ps.broadcast API
### What changes were proposed in this pull request?

This PR follows up #33379 to fix a build error in Sphinx.

### Why are the changes needed?

The Sphinx build fails due to a missing newline in a docstring.

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

Manually test the Sphinx build

Closes #33479 from itholic/SPARK-35810-followup.

Authored-by: itholic <haejoon.lee@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-07-22 17:10:03 +09:00
itholic 6578f0b135 [SPARK-35809][PYTHON] Add index_col argument for ps.sql
### What changes were proposed in this pull request?

This PR proposes adding an argument `index_col` for `ps.sql` function, to preserve the index when users want.

NOTE that `reset_index()` has to be performed before using `ps.sql` with `index_col`.

```python
>>> psdf
   A  B
a  1  4
b  2  5
c  3  6
>>> psdf_reset_index = psdf.reset_index()
>>> ps.sql("SELECT * from {psdf_reset_index} WHERE A > 1", index_col="index")
       A  B
index
b      2  5
c      3  6
```

Otherwise, the index is always lost.

```python
>>> ps.sql("SELECT * from {psdf} WHERE A > 1")
   A  B
0  2  5
1  3  6
```

### Why are the changes needed?

The index is one of the key objects for existing pandas users, so we should provide a way to keep it after computing `ps.sql`.

### Does this PR introduce _any_ user-facing change?

Yes, the new argument is added.

### How was this patch tested?

Add a unit test and manually check the build pass.

Closes #33450 from itholic/SPARK-35809.

Authored-by: itholic <haejoon.lee@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-07-22 17:08:34 +09:00
Takuya UESHIN a3c7ae18e2 [SPARK-36249][PYTHON] Add remove_categories to CategoricalAccessor and CategoricalIndex
### What changes were proposed in this pull request?

Add `remove_categories` to `CategoricalAccessor` and `CategoricalIndex`.

### Why are the changes needed?

We should implement `remove_categories` in `CategoricalAccessor` and `CategoricalIndex`.

### Does this PR introduce _any_ user-facing change?

Yes, users will be able to use `remove_categories`.
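A usage sketch, assuming the method mirrors the pandas behavior (note that values of a removed category become missing):

```py
>>> idx = ps.CategoricalIndex(["a", "b", "c"])
>>> idx.remove_categories(["c"])
CategoricalIndex(['a', 'b', nan], categories=['a', 'b'], ordered=False, dtype='category')
```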

### How was this patch tested?

Added some tests.

Closes #33474 from ueshin/issues/SPARK-36249/remove_categories.

Authored-by: Takuya UESHIN <ueshin@databricks.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-07-22 17:06:12 +09:00
Holden Karau 89a83196ac [SPARK-36246][CORE][TEST] GHA WorkerDecommissionExtended flake
### What changes were proposed in this pull request?

GHA probably doesn't have the same resources as Jenkins, so move down from 5 to 3 executors and give them a bit more time to come up.

### Why are the changes needed?

The test is timing out in GHA.

### Does this PR introduce _any_ user-facing change?
No, test only change.

### How was this patch tested?

Ran through GHA to verify there is no OOM during WorkerDecommissionExtended.

Closes #33467 from holdenk/SPARK-36246-WorkerDecommissionExtendedSuite-flakes-in-GHA.

Lead-authored-by: Holden Karau <holden@pigscanfly.ca>
Co-authored-by: Holden Karau <hkarau@netflix.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-07-22 15:17:48 +09:00
Takuya UESHIN dcc0aaa3ef [SPARK-36214][PYTHON] Add add_categories to CategoricalAccessor and CategoricalIndex
### What changes were proposed in this pull request?

Add `add_categories` to `CategoricalAccessor` and `CategoricalIndex`.

### Why are the changes needed?

We should implement `add_categories` in `CategoricalAccessor` and `CategoricalIndex`.

### Does this PR introduce _any_ user-facing change?

Yes, users will be able to use `add_categories`.
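A usage sketch, assuming the method mirrors the pandas behavior:

```py
>>> idx = ps.CategoricalIndex(["a", "b", "a"])
>>> idx.add_categories(["c"])
CategoricalIndex(['a', 'b', 'a'], categories=['a', 'b', 'c'], ordered=False, dtype='category')
```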

### How was this patch tested?

Added some tests.

Closes #33470 from ueshin/issues/SPARK-36214/add_categories.

Authored-by: Takuya UESHIN <ueshin@databricks.com>
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
2021-07-21 22:34:04 -07:00
Hyukjin Kwon f3e29574d9 [SPARK-36253][PYTHON][DOCS] Add versionadded to the top of pandas-on-Spark package
### What changes were proposed in this pull request?

This PR adds the version that added pandas API on Spark in PySpark documentation.

### Why are the changes needed?

To document the version added.

### Does this PR introduce _any_ user-facing change?

No to end user. Spark 3.2 is not released yet.

### How was this patch tested?

Linter and documentation build.

Closes #33473 from HyukjinKwon/SPARK-36253.

Authored-by: Hyukjin Kwon <gurwls223@apache.org>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-07-22 14:21:43 +09:00
allisonwang-db de8e4be92c [SPARK-36063][SQL] Optimize OneRowRelation subqueries
### What changes were proposed in this pull request?
This PR adds optimization for scalar and lateral subqueries with OneRowRelation as leaf nodes. It inlines such subqueries before decorrelation to avoid rewriting them as left outer joins. It also introduces a flag to turn on/off this optimization: `spark.sql.optimizer.optimizeOneRowRelationSubquery` (default: True).

For example:
```sql
select (select c1) from t
```
Analyzed plan:
```
Project [scalar-subquery#17 [c1#18] AS scalarsubquery(c1)#22]
:  +- Project [outer(c1#18)]
:     +- OneRowRelation
+- LocalRelation [c1#18, c2#19]
```

Optimized plan before this PR:
```
Project [c1#18#25 AS scalarsubquery(c1)#22]
+- Join LeftOuter, (c1#24 <=> c1#18)
   :- LocalRelation [c1#18]
   +- Aggregate [c1#18], [c1#18 AS c1#18#25, c1#18 AS c1#24]
      +- LocalRelation [c1#18]
```

Optimized plan after this PR:
```
LocalRelation [scalarsubquery(c1)#22]
```

### Why are the changes needed?
To optimize query plans.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Added new unit tests.

Closes #33284 from allisonwang-db/spark-36063-optimize-subquery-one-row-relation.

Authored-by: allisonwang-db <allison.wang@databricks.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2021-07-22 10:48:32 +08:00
Kousuke Saruta dcb7db5370 [SPARK-36244][BUILD] Upgrade zstd-jni to 1.5.0-3 to avoid a bug about buffer size calculation
### What changes were proposed in this pull request?

This PR upgrades `zstd-jni` from `1.5.0-2` to `1.5.0-3`.
`1.5.0-3` was released a few days ago.
This release resolves an issue about buffer size calculation, which can affect usage in Spark.
https://github.com/luben/zstd-jni/releases/tag/v1.5.0-3

### Why are the changes needed?

It might be a corner case that the skipped length is greater than `2^31 - 1`, but it can possibly affect Spark.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

CI.

Closes #33464 from sarutak/upgrade-zstd-jni-1.5.0-3.

Authored-by: Kousuke Saruta <sarutak@oss.nttdata.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
2021-07-21 19:37:05 -07:00
Fu Chen 09bebc8bde [SPARK-35912][SQL] Fix nullability of spark.read.json/spark.read.csv
### What changes were proposed in this pull request?

Rework [PR](https://github.com/apache/spark/pull/33212) with suggestions.

This PR makes `spark.read.json()` behave the same as the Datasource API `spark.read.format("json").load("path")`: Spark should turn a non-nullable schema into a nullable one by default when using `spark.read.json()`.

Here is an example:

```scala
  val schema = StructType(Seq(StructField("value",
    StructType(Seq(
      StructField("x", IntegerType, nullable = false),
      StructField("y", IntegerType, nullable = false)
    )),
    nullable = true
  )))

  val testDS = Seq("""{"value":{"x":1}}""").toDS
  spark.read
    .schema(schema)
    .json(testDS)
    .printSchema()

  spark.read
    .schema(schema)
    .format("json")
    .load("/tmp/json/t1")
    .printSchema()
  // root
  //  |-- value: struct (nullable = true)
  //  |    |-- x: integer (nullable = true)
  //  |    |-- y: integer (nullable = true)
```

Before this pr:
```
// output of spark.read.json()
root
 |-- value: struct (nullable = true)
 |    |-- x: integer (nullable = false)
 |    |-- y: integer (nullable = false)
```

After this pr:
```
// output of spark.read.json()
root
 |-- value: struct (nullable = true)
 |    |-- x: integer (nullable = true)
 |    |-- y: integer (nullable = true)
```

- `spark.read.csv()` also has the same problem.
- The Datasource API `spark.read.format("json").load("path")` does this logic when resolving the relation:

c77acf0bbc/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala (L415-L421)

### Does this PR introduce _any_ user-facing change?

Yes, `spark.read.json()` and `spark.read.csv()` no longer take the user-given schema as-is; they always turn it into a nullable schema by default.

### How was this patch tested?

New test.

Closes #33436 from cfmcgrady/SPARK-35912-v3.

Authored-by: Fu Chen <cfmcgrady@gmail.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-07-22 11:12:36 +09:00
shane knapp ad528a007a [SPARK-32797][SPARK-32391][SPARK-33242][SPARK-32666][ANSIBLE] updating a bunch of python packages
### What changes were proposed in this pull request?
updating the anaconda py36 environment file

### Why are the changes needed?
see:
https://issues.apache.org/jira/browse/SPARK-32666
https://issues.apache.org/jira/browse/SPARK-33242
https://issues.apache.org/jira/browse/SPARK-32391
https://issues.apache.org/jira/browse/SPARK-32797

### Does this PR introduce _any_ user-facing change?
no

### How was this patch tested?
jenkins will test this

Closes #33469 from shaneknapp/updating-python-paks.

Authored-by: shane knapp <incomplete@gmail.com>
Signed-off-by: shane knapp <incomplete@gmail.com>
2021-07-21 15:22:06 -07:00
Takuya UESHIN d506815a92 [SPARK-36188][PYTHON] Add categories setter to CategoricalAccessor and CategoricalIndex
### What changes were proposed in this pull request?

Add categories setter to `CategoricalAccessor` and `CategoricalIndex`.

### Why are the changes needed?

We should implement categories setter in `CategoricalAccessor` and `CategoricalIndex`.

### Does this PR introduce _any_ user-facing change?

Yes, users will be able to use the categories setter.
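A usage sketch (assuming the setter renames the existing categories in place, as in pandas):

```py
>>> s = ps.Series(["a", "b", "c"], dtype="category")
>>> s.cat.categories = ["x", "y", "z"]
>>> s.cat.categories
Index(['x', 'y', 'z'], dtype='object')
```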

### How was this patch tested?

Added some tests.

Closes #33448 from ueshin/issues/SPARK-36188/categories_setter.

Authored-by: Takuya UESHIN <ueshin@databricks.com>
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
2021-07-21 11:31:30 -07:00
Kent Yao 4cd6cfc773 [SPARK-36213][SQL] Normalize PartitionSpec for Describe Table Command with PartitionSpec
### What changes were proposed in this pull request?

This fixes a case-sensitivity issue for DESC TABLE commands with a partition specified.

### Why are the changes needed?

bugfix

### Does this PR introduce _any_ user-facing change?

yes, but it's a bugfix

### How was this patch tested?

new tests

#### before
```
+-- !query
+DESC EXTENDED t PARTITION (C='Us', D=1)
+-- !query schema
+struct<>
+-- !query output
+org.apache.spark.sql.AnalysisException
+Partition spec is invalid. The spec (C, D) must match the partition spec (c, d) defined in table '`default`.`t`'
+
```

#### after

https://github.com/apache/spark/pull/33424/files#diff-554189c49950974a948f99fa9b7436f615052511660c6a0ae3062fa8ca0a327cR328

Closes #33424 from yaooqinn/SPARK-36213.

Authored-by: Kent Yao <yao@apache.org>
Signed-off-by: Kent Yao <yao@apache.org>
2021-07-22 00:52:31 +08:00
Shardul Mahadik 685c3fd05b [SPARK-28266][SQL] convertToLogicalRelation should not interpret path property when reading Hive tables
### What changes were proposed in this pull request?

For non-datasource Hive tables, e.g. tables written outside of Spark (through Hive or Trino), we have certain optimizations in Spark where we use Spark ORC and Parquet datasources to read these tables ([Ref](fbf53dee37/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala (L128))) rather than using the Hive serde.
If such a table contains a `path` property, Spark will try to list this path property in addition to the table location when creating an `InMemoryFileIndex`. ([Ref](fbf53dee37/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala (L575))) This can lead to wrong data if `path` property points to a directory location or an error if `path` is not a location. A concrete example is provided in [SPARK-28266 (comment)](https://issues.apache.org/jira/browse/SPARK-28266?focusedCommentId=17380170&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17380170).

Since these tables were not written through Spark, Spark should not interpret this `path` property as it can be set by an external system with a different meaning.

### Why are the changes needed?

For better compatibility with Hive tables generated by other platforms (non-Spark)

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

Added unit test

Closes #33328 from shardulm94/spark-28266.

Authored-by: Shardul Mahadik <smahadik@linkedin.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2021-07-21 22:40:39 +08:00
Wenchen Fan 9c8a3d3975 [SPARK-36228][SQL] Skip splitting a skewed partition when some map outputs are removed
### What changes were proposed in this pull request?

Sometimes, AQE skew join optimization can fail with an NPE. This is because AQE tries to get the shuffle block sizes, but some map outputs are missing due to executor loss or similar.

This PR fixes this bug by skipping skew join handling if some map outputs are missing in the `MapOutputTracker`.

### Why are the changes needed?

bug fix

### Does this PR introduce _any_ user-facing change?

no

### How was this patch tested?

a new UT

Closes #33445 from cloud-fan/bug.

Authored-by: Wenchen Fan <wenchen@databricks.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2021-07-21 22:17:56 +08:00
Kousuke Saruta f56c7b71ff [SPARK-36208][SQL] SparkScriptTransformation should support ANSI interval types
### What changes were proposed in this pull request?

This PR changes `BaseScriptTransformationExec` for `SparkScriptTransformationExec` to support ANSI interval types.

### Why are the changes needed?

`SparkScriptTransformationExec` supports `CalendarIntervalType`, so it's better to support ANSI interval types as well.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

New test.

Closes #33419 from sarutak/script-transformation-interval.

Authored-by: Kousuke Saruta <sarutak@oss.nttdata.com>
Signed-off-by: Max Gekk <max.gekk@gmail.com>
2021-07-21 15:13:01 +03:00
Wenchen Fan 94aece4325 [SPARK-36020][SQL][FOLLOWUP] RemoveRedundantProjects should retain the LOGICAL_PLAN_TAG tag
### What changes were proposed in this pull request?

This is a followup of https://github.com/apache/spark/pull/33222 .

https://github.com/apache/spark/pull/33222 made a mistake that, `RemoveRedundantProjects` may lose the `LOGICAL_PLAN_TAG` tag, even though the logical plan link is retained. This was actually caught by the test `LogicalPlanTagInSparkPlanSuite`, but was not being taken care of.

There is no problem so far, but losing information can always lead to potential bugs.

### Why are the changes needed?

fix a mistake

### Does this PR introduce _any_ user-facing change?

no

### How was this patch tested?

existing test

Closes #33442 from cloud-fan/minor.

Authored-by: Wenchen Fan <wenchen@databricks.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
2021-07-21 14:03:06 +08:00