spark-instrumented-optimizer/python
Hyukjin Kwon 8d28839689 [SPARK-35946][PYTHON] Respect Py4J server in InheritableThread API
### What changes were proposed in this pull request?

Currently, we set the environment variable `PYSPARK_PIN_THREAD` on the client side of the `InheritableThread` API for Py4J (`python/pyspark/util.py`). If the Py4J gateway is created somewhere else (e.g., Zeppelin), it can break as follows:

```python
from pyspark import SparkContext
jvm = SparkContext._jvm
thread_connection = jvm._gateway_client.get_thread_connection()
# `AttributeError: 'GatewayClient' object has no attribute 'get_thread_connection'` (non-pinned thread mode)
# `get_thread_connection` is only in 'ClientServer' (pinned thread mode)
```

This PR proposes to check which kind of gateway was created and apply the pinned thread mode behaviour accordingly, so that nothing breaks when the Py4J server/gateway is created separately elsewhere without pinned thread mode.
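
For reference, a minimal sketch of the kind of gateway check this implies; the helper name below is illustrative, not the actual function in `pyspark/util.py`:

```python
from py4j.clientserver import ClientServer

from pyspark import SparkContext


def _is_pinned_thread_mode():
    # Pinned thread mode runs on Py4J's ClientServer; the plain JavaGateway's
    # GatewayClient has no `get_thread_connection`, so per-thread connection
    # cleanup should be skipped in that case.
    return isinstance(SparkContext._gateway, ClientServer)
```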

### Why are the changes needed?

To avoid any potential breakage.

### Does this PR introduce _any_ user-facing change?

No. The code being changed only exists in the unreleased master branch (fdd7ca5f4e).

### How was this patch tested?

This is actually a partial revert of fdd7ca5f4e. As long as the existing tests pass, I guess we're all good.

I also manually tested to make doubly sure:

**Before**:

```python
>>> from pyspark import InheritableThread, inheritable_thread_target
>>> InheritableThread(lambda: 1).start()
>>> inheritable_thread_target(lambda: 1)()
Traceback (most recent call last):
  File "/.../python3.8/lib/python3.8/threading.py", line 932, in _bootstrap_inner
    self.run()
  File "/.../python3.8/lib/python3.8/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "/.../spark/python/pyspark/util.py", line 361, in copy_local_properties
    InheritableThread._clean_py4j_conn_for_current_thread()
  File "/.../spark/python/pyspark/util.py", line 381, in _clean_py4j_conn_for_current_thread
    thread_connection = jvm._gateway_client.get_thread_connection()
AttributeError: 'GatewayClient' object has no attribute 'get_thread_connection'

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/.../spark/python/pyspark/util.py", line 324, in wrapped
    InheritableThread._clean_py4j_conn_for_current_thread()
  File "/.../spark/python/pyspark/util.py", line 381, in _clean_py4j_conn_for_current_thread
    thread_connection = jvm._gateway_client.get_thread_connection()
AttributeError: 'GatewayClient' object has no attribute 'get_thread_connection'
```

**After**:

```python
>>> from pyspark import InheritableThread, inheritable_thread_target
>>> InheritableThread(lambda: 1).start()
>>> inheritable_thread_target(lambda: 1)()
1
```

Closes #33147 from HyukjinKwon/SPARK-35946.

Authored-by: Hyukjin Kwon <gurwls223@apache.org>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
2021-06-29 22:18:54 -07:00
| Name | Last commit | Last update |
|------|-------------|-------------|
| docs | [SPARK-35605][PYTHON] Move to_pandas_on_spark to the Spark DataFrame | 2021-06-28 11:47:09 +09:00 |
| lib | [SPARK-34688][PYTHON] Upgrade to Py4J 0.10.9.2 | 2021-03-11 09:51:41 -06:00 |
| pyspark | [SPARK-35946][PYTHON] Respect Py4J server in InheritableThread API | 2021-06-29 22:18:54 -07:00 |
| test_coverage | [SPARK-7721][PYTHON][TESTS] Adds PySpark coverage generation script | 2018-01-22 22:12:50 +09:00 |
| test_support | Spelling r common dev mlib external project streaming resource managers python | 2020-11-27 10:22:45 -06:00 |
| .coveragerc | [SPARK-7721][PYTHON][TESTS] Adds PySpark coverage generation script | 2018-01-22 22:12:50 +09:00 |
| .gitignore | [SPARK-3946] gitignore in /python includes wrong directory | 2014-10-14 14:09:39 -07:00 |
| MANIFEST.in | [SPARK-32714][PYTHON] Initial pyspark-stubs port | 2020-09-24 14:15:36 +09:00 |
| mypy.ini | [SPARK-35466][PYTHON] Fix disallow_untyped_defs mypy checks for pyspark.pandas.data_type_ops.* | 2021-06-25 18:16:25 -07:00 |
| pylintrc | [SPARK-32435][PYTHON] Remove heapq3 port from Python 3 | 2020-07-27 20:10:13 +09:00 |
| README.md | [SPARK-30884][PYSPARK] Upgrade to Py4J 0.10.9 | 2020-02-20 09:09:30 -08:00 |
| run-tests | [SPARK-29672][PYSPARK] update spark testing framework to use python3 | 2019-11-14 10:18:55 -08:00 |
| run-tests-with-coverage | [SPARK-34968][TEST][PYTHON] Add the -fr argument to xargs rm | 2021-04-06 15:20:55 -07:00 |
| run-tests.py | [SPARK-32194][PYTHON] Use proper exception classes instead of plain Exception | 2021-05-26 11:54:40 +09:00 |
| setup.cfg | [SPARK-1267][SPARK-18129] Allow PySpark to be pip installed | 2016-11-16 14:22:15 -08:00 |
| setup.py | [SPARK-35759][PYTHON] Remove the upperbound for numpy for pandas-on-Spark | 2021-06-15 09:59:05 +09:00 |

Apache Spark

Spark is a unified analytics engine for large-scale data processing. It provides high-level APIs in Scala, Java, Python, and R, and an optimized engine that supports general computation graphs for data analysis. It also supports a rich set of higher-level tools including Spark SQL for SQL and DataFrames, MLlib for machine learning, GraphX for graph processing, and Structured Streaming for stream processing.

https://spark.apache.org/

Online Documentation

You can find the latest Spark documentation, including a programming guide, on the project web page.

Python Packaging

This README file only contains basic information related to pip installed PySpark. This packaging is currently experimental and may change in future versions (although we will do our best to keep compatibility). Using PySpark requires the Spark JARs, and if you are building this from source please see the builder instructions at "Building Spark".

The Python packaging for Spark is not intended to replace all of the other use cases. This Python packaged version of Spark is suitable for interacting with an existing cluster (be it Spark standalone, YARN, or Mesos) - but does not contain the tools required to set up your own standalone Spark cluster. You can download the full version of Spark from the Apache Spark downloads page.
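
As an illustration, a minimal sketch of pointing a pip-installed PySpark at an existing cluster; the master URL and application name below are placeholders, not values from this repository:

```python
from pyspark.sql import SparkSession

# "spark://master-host:7077" is a placeholder; replace it with your existing
# standalone cluster's master URL (or a YARN/Mesos master as appropriate).
spark = (
    SparkSession.builder
    .master("spark://master-host:7077")
    .appName("pip-installed-pyspark")
    .getOrCreate()
)

# Run a trivial job to confirm the session can reach the cluster.
print(spark.range(10).count())
spark.stop()
```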

NOTE: If you are using this with a Spark standalone cluster you must ensure that the version (including minor version) matches or you may experience odd errors.
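
A quick way to see which version the pip-installed client reports, for comparison against the cluster:

```python
import pyspark

# The client-side version; it should match the Spark version running on the
# standalone cluster, including the minor version.
print(pyspark.__version__)
```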

Python Requirements

At its core PySpark depends on Py4J, but some additional sub-packages have their own extra requirements for some features (including numpy, pandas, and pyarrow).
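
A small sketch, not part of PySpark itself, for checking whether those optional packages are importable in the current environment:

```python
import importlib.util

# Report whether the optional packages used by some PySpark features
# (e.g. Arrow-based conversions, pandas APIs) are installed.
for pkg in ("numpy", "pandas", "pyarrow"):
    found = importlib.util.find_spec(pkg) is not None
    print(f"{pkg}: {'available' if found else 'missing'}")
```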