spark-instrumented-optimizer/python/pyspark/sql
Max Gekk dd03c31ea5 [SPARK-32088][PYTHON][FOLLOWUP] Replace collect() by show() in the example for timestamp_seconds
### What changes were proposed in this pull request?
Modify the example for `timestamp_seconds` and replace `collect()` by `show()`.
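
For reference, a minimal sketch of how the updated doctest could look (the exact formatting in the PR diff may differ); `show()` renders the timestamp in the session time zone:
```python
>>> from pyspark.sql.functions import timestamp_seconds
>>> spark.conf.set("spark.sql.session.timeZone", "America/Los_Angeles")
>>> time_df = spark.createDataFrame([(1230219000,)], ['unix_time'])
>>> time_df.select(timestamp_seconds(time_df.unix_time).alias('ts')).show()
+-------------------+
|                 ts|
+-------------------+
|2008-12-25 07:30:00|
+-------------------+
>>> spark.conf.unset("spark.sql.session.timeZone")
```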

### Why are the changes needed?
The SQL config `spark.sql.session.timeZone` does not influence the result of `collect()` in the example. The code below demonstrates this:
```
$ export TZ="UTC"
```
```python
>>> from pyspark.sql.functions import timestamp_seconds
>>> spark.conf.set("spark.sql.session.timeZone", "America/Los_Angeles")
>>> time_df = spark.createDataFrame([(1230219000,)], ['unix_time'])
>>> time_df.select(timestamp_seconds(time_df.unix_time).alias('ts')).collect()
[Row(ts=datetime.datetime(2008, 12, 25, 15, 30))]
```
The expected time is **07:30**, but we get **15:30**.
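
Roughly speaking, `collect()` converts the internal timestamp value into a naive Python `datetime` using the driver's local time zone (the exported `TZ`), not `spark.sql.session.timeZone`, which is why the config has no visible effect above. A minimal illustration of that conversion, assuming the driver runs with `TZ=UTC` as exported above:
```python
import datetime

# The driver-side conversion is essentially fromtimestamp() on the epoch seconds,
# which uses the process-local time zone (TZ=UTC here), hence 15:30 instead of 07:30.
print(datetime.datetime.fromtimestamp(1230219000))
# 2008-12-25 15:30:00
```
In contrast, `show()` formats the column through the SQL engine, which honors `spark.sql.session.timeZone`, so the expected **07:30** appears (see the sketch in the section above).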

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
By running the modified example via:
```
$ ./python/run-tests --modules=pyspark-sql
```

Closes #28959 from MaxGekk/SPARK-32088-fix-timezone-issue-followup.

Authored-by: Max Gekk <max.gekk@gmail.com>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
2020-07-01 13:17:49 +09:00
| Name | Last commit | Date |
| --- | --- | --- |
| `avro` | [SPARK-27506][SQL][FOLLOWUP] Use option avroSchema to specify an evolved schema in from_avro | 2019-12-30 18:14:21 +09:00 |
| `pandas` | [SPARK-32098][PYTHON] Use iloc for positional slicing instead of direct slicing in createDataFrame with Arrow | 2020-06-25 11:04:47 -07:00 |
| `tests` | [SPARK-32098][PYTHON] Use iloc for positional slicing instead of direct slicing in createDataFrame with Arrow | 2020-06-25 11:04:47 -07:00 |
| `__init__.py` | [SPARK-31088][SQL] Add back HiveContext and createExternalTable | 2020-03-26 23:51:15 -07:00 |
| `catalog.py` | [SPARK-31088][SQL] Add back HiveContext and createExternalTable | 2020-03-26 23:51:15 -07:00 |
| `column.py` | [SPARK-29664][PYTHON][SQL][FOLLOW-UP] Add deprecation warnings for getItem instead | 2020-04-27 14:49:22 +09:00 |
| `conf.py` | [SPARK-23698][PYTHON] Resolve undefined names in Python 3 | 2018-08-22 10:06:59 -07:00 |
| `context.py` | [SPARK-31088][SQL] Add back HiveContext and createExternalTable | 2020-03-26 23:51:15 -07:00 |
| `dataframe.py` | [SPARK-31710][SQL] Fail casting numeric to timestamp by default | 2020-06-16 08:35:35 +00:00 |
| `functions.py` | [SPARK-32088][PYTHON][FOLLOWUP] Replace collect() by show() in the example for timestamp_seconds | 2020-07-01 13:17:49 +09:00 |
| `group.py` | [SPARK-30434][PYTHON][SQL] Move pandas related functionalities into 'pandas' sub-package | 2020-01-09 10:22:50 +09:00 |
| `readwriter.py` | [SPARK-31739][PYSPARK][DOCS][MINOR] Fix docstring syntax issues and misplaced space characters | 2020-05-18 20:25:02 +09:00 |
| `session.py` | [SPARK-30856][SQL][PYSPARK] Fix SQLContext.getOrCreate() when SparkContext is restarted | 2020-02-20 12:21:24 +09:00 |
| `streaming.py` | [SPARK-31739][PYSPARK][DOCS][MINOR] Fix docstring syntax issues and misplaced space characters | 2020-05-18 20:25:02 +09:00 |
| `types.py` | [SPARK-30941][PYSPARK] Add a note to asDict to document its behavior when there are duplicate fields | 2020-03-09 11:06:45 -07:00 |
| `udf.py` | [SPARK-31965][TESTS][PYTHON] Move doctests related to Java function registration to test conditionally | 2020-06-10 21:15:40 -07:00 |
| `utils.py` | [SPARK-31849][PYTHON][SQL][FOLLOW-UP] More correct error message in Python UDF exception message | 2020-06-09 10:24:34 +09:00 |
| `window.py` | [SPARK-30188][SQL] Resolve the failed unit tests when enable AQE | 2020-01-13 22:55:19 +08:00 |