spark-instrumented-optimizer/python/pyspark/sql
Davies Liu a8332098ce [SPARK-6216] [PYSPARK] check python version of worker with driver
This PR reverts #5404 and instead passes the driver's Python version into the JVM; the worker checks that version before deserializing the closure, so PySpark fails fast when the driver and worker run different major versions of Python (a sketch of the check follows the commit details below).

Author: Davies Liu <davies@databricks.com>

Closes #6203 from davies/py_version and squashes the following commits:

b8fb76e [Davies Liu] fix test
6ce5096 [Davies Liu] use string for version
47c6278 [Davies Liu] check python version of worker with driver

(cherry picked from commit 32fbd297dd)
Signed-off-by: Josh Rosen <joshrosen@databricks.com>
2015-05-18 12:55:37 -07:00
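
A minimal sketch of the check this commit describes (not the actual context.py/worker.py code; the function names and error wording here are illustrative): the driver records its Python version as a "major.minor" string, that string travels with the serialized command through the JVM, and the worker compares it against its own interpreter before unpickling anything.

    import sys

    def driver_python_version():
        # Driver side: record the version of the interpreter that pickled the
        # closures, as a plain "major.minor" string that is cheap to pass through
        # the JVM.
        return "%d.%d" % sys.version_info[:2]

    def check_worker_version(version_from_driver):
        # Worker side: refuse to deserialize the closure if the local interpreter
        # does not match, failing fast with a clear message instead of an obscure
        # pickle error.
        local_version = "%d.%d" % sys.version_info[:2]
        if version_from_driver != local_version:
            raise RuntimeError(
                "Python in worker has different version %s than that in driver %s"
                % (local_version, version_from_driver))

    # Example: a 2.7 driver paired with a 3.4 worker fails here immediately.
    check_worker_version(driver_python_version())

Comparing a version string before deserialization is what lets the error surface cleanly even across major Python versions, where the pickled closure itself might not be loadable at all.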
__init__.py [SPARK-7543] [SQL] [PySpark] split dataframe.py into multiple files 2015-05-15 20:09:23 -07:00
_types.py [SPARK-7073] [SQL] [PySpark] Clean up SQL data type hierarchy in Python 2015-05-15 20:05:33 -07:00
column.py [SPARK-7543] [SQL] [PySpark] split dataframe.py into multiple files 2015-05-15 20:09:23 -07:00
context.py [SPARK-6216] [PYSPARK] check python version of worker with driver 2015-05-18 12:55:37 -07:00
dataframe.py [SPARK-6657] [PYSPARK] Fix doc warnings 2015-05-18 08:35:24 -07:00
functions.py [SPARK-6216] [PYSPARK] check python version of worker with driver 2015-05-18 12:55:37 -07:00
group.py [SPARK-7543] [SQL] [PySpark] split dataframe.py into multiple files 2015-05-15 20:09:23 -07:00
tests.py [SPARK-7548] [SQL] Add explode function for DataFrames 2015-05-14 19:51:00 -07:00