spark-instrumented-optimizer/python/pyspark/sql/conf.py
HyukjinKwon 4ad9bfd53b [SPARK-32138] Drop Python 2.7, 3.4 and 3.5
### What changes were proposed in this pull request?

This PR aims to drop Python 2.7, 3.4 and 3.5.

Roughly speaking, it removes all the widely known Python 2 compatibility workarounds, such as `sys.version` comparisons and `__future__` imports. It also removes code paths dedicated to Python 2, such as `ArrayConstructor` in Spark.
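
A typical shape of the removed workarounds (a minimal illustrative sketch, not an actual hunk from this PR):

```python
from __future__ import print_function  # a no-op on Python 3, so now removable

import sys

if sys.version >= '3':
    # Alias Python 2 names so the same code runs on both major versions.
    basestring = unicode = str
```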

### Why are the changes needed?

 1. Drop support for EOL Python versions.
 2. Reduce maintenance overhead and remove legacy code and hacks for Python 2.
 3. PyPy2 has a critical bug that causes a flaky test (SPARK-28358), given my testing and investigation.
 4. Users can use Python type hints with Pandas UDFs without thinking about the Python version (see the sketch after this list).
 5. Users can leverage the latest cloudpickle (https://github.com/apache/spark/pull/28950). With Python 3.8+ it can also leverage C pickle.
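
As a sketch of item 4 (assuming the Spark 3.0 type-hint style for Pandas UDFs; not code from this PR's diff):

```python
import pandas as pd

from pyspark.sql.functions import pandas_udf


@pandas_udf("long")
def plus_one(s: pd.Series) -> pd.Series:
    # The pd.Series -> pd.Series hints mark this as a scalar Pandas UDF,
    # so no separate functionType argument is needed.
    return s + 1
```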

### Does this PR introduce _any_ user-facing change?

Yes, users cannot use Python 2.7, 3.4 and 3.5 in the upcoming Spark version.

### How was this patch tested?

Manually tested and also tested in Jenkins.

Closes #28957 from HyukjinKwon/SPARK-32138.

Authored-by: HyukjinKwon <gurwls223@apache.org>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
2020-07-14 11:22:44 +09:00


#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import sys

from pyspark import since, _NoValue


class RuntimeConfig(object):
    """User-facing configuration API, accessible through `SparkSession.conf`.

    Options set here are automatically propagated to the Hadoop configuration during I/O.
    """

    def __init__(self, jconf):
        """Create a new RuntimeConfig that wraps the underlying JVM object."""
        self._jconf = jconf

    @since(2.0)
    def set(self, key, value):
        """Sets the given Spark runtime configuration property."""
        self._jconf.set(key, value)

    @since(2.0)
    def get(self, key, default=_NoValue):
        """Returns the value of Spark runtime configuration property for the given key,
        assuming it is set.
        """
        self._checkType(key, "key")
        if default is _NoValue:
            return self._jconf.get(key)
        else:
            if default is not None:
                self._checkType(default, "default")
            return self._jconf.get(key, default)
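
    # Note (illustrative comments, not part of the original module): the
    # `_NoValue` sentinel distinguishes "no default supplied" from an explicit
    # `default=None`:
    #   conf.get("k")        -> asks the JVM for "k"; errors if "k" has no value
    #   conf.get("k", None)  -> returns None when "k" is unset
    #   conf.get("k", "x")   -> returns "x" when "k" is unset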

    @since(2.0)
    def unset(self, key):
        """Resets the configuration property for the given key."""
        self._jconf.unset(key)

    def _checkType(self, obj, identifier):
        """Assert that an object is of type str."""
        if not isinstance(obj, str):
            raise TypeError("expected %s '%s' to be a string (was '%s')" %
                            (identifier, obj, type(obj).__name__))

    @since(2.4)
    def isModifiable(self, key):
        """Indicates whether the configuration property with the given key
        is modifiable in the current session.
        """
        return self._jconf.isModifiable(key)
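
# Usage sketch (illustrative comments, not part of the original module). A
# RuntimeConfig instance is normally reached through SparkSession.conf:
#   spark.conf.set("spark.sql.shuffle.partitions", "8")
#   spark.conf.get("spark.sql.shuffle.partitions")           # -> '8'
#   spark.conf.get("nonexistent.key", "fallback")            # -> 'fallback'
#   spark.conf.unset("spark.sql.shuffle.partitions")
#   spark.conf.isModifiable("spark.sql.shuffle.partitions")  # -> True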


def _test():
    import os
    import doctest
    from pyspark.sql.session import SparkSession
    import pyspark.sql.conf

    os.chdir(os.environ["SPARK_HOME"])

    globs = pyspark.sql.conf.__dict__.copy()
    spark = SparkSession.builder \
        .master("local[4]") \
        .appName("sql.conf tests") \
        .getOrCreate()
    globs['sc'] = spark.sparkContext
    globs['spark'] = spark
    (failure_count, test_count) = doctest.testmod(pyspark.sql.conf, globs=globs)
    spark.stop()
    if failure_count:
        sys.exit(-1)


if __name__ == "__main__":
    _test()