Commit 7fc9db0853

### What changes were proposed in this pull request?

This PR proposes to infer `bytes` as a binary type in Python 3. See https://github.com/apache/spark/pull/25749 for discussions. I have also checked that Arrow considers `bytes` as a binary type, and PySpark UDFs can also accept `bytes` as a binary type. Since `bytes` is no longer a `str` in Python 3, it is natural to call it `BinaryType` in Python 3.

### Why are the changes needed?

To respect Python 3's `bytes` type and support Python's primitive types.

### Does this PR introduce any user-facing change?

Yes.

**Before:**

```python
>>> spark.createDataFrame([[b"abc"]])
Traceback (most recent call last):
  File "/.../spark/python/pyspark/sql/types.py", line 1036, in _infer_type
    return _infer_schema(obj)
  File "/.../spark/python/pyspark/sql/types.py", line 1062, in _infer_schema
    raise TypeError("Can not infer schema for type: %s" % type(row))
TypeError: Can not infer schema for type: <class 'bytes'>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/.../spark/python/pyspark/sql/session.py", line 787, in createDataFrame
    rdd, schema = self._createFromLocal(map(prepare, data), schema)
  File "/.../spark/python/pyspark/sql/session.py", line 445, in _createFromLocal
    struct = self._inferSchemaFromList(data, names=schema)
  File "/.../spark/python/pyspark/sql/session.py", line 377, in _inferSchemaFromList
    schema = reduce(_merge_type, (_infer_schema(row, names) for row in data))
  File "/.../spark/python/pyspark/sql/session.py", line 377, in <genexpr>
    schema = reduce(_merge_type, (_infer_schema(row, names) for row in data))
  File "/.../spark/python/pyspark/sql/types.py", line 1064, in _infer_schema
    fields = [StructField(k, _infer_type(v), True) for k, v in items]
  File "/.../spark/python/pyspark/sql/types.py", line 1064, in <listcomp>
    fields = [StructField(k, _infer_type(v), True) for k, v in items]
  File "/.../spark/python/pyspark/sql/types.py", line 1038, in _infer_type
    raise TypeError("not supported type: %s" % type(obj))
TypeError: not supported type: <class 'bytes'>
```

**After:**

```python
>>> spark.createDataFrame([[b"abc"]])
DataFrame[_1: binary]
```

### How was this patch tested?

A unit test was added, and the change was manually tested.

Closes #26432 from HyukjinKwon/SPARK-29798.

Authored-by: HyukjinKwon <gurwls223@apache.org>
Signed-off-by: Bryan Cutler <cutlerb@gmail.com>
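As a rough illustration of the behavior described above, the sketch below builds a DataFrame from `bytes` values and inspects the inferred schema. The `SparkSession` setup and the column name `payload` are assumptions for the example, not part of the patch itself.

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import BinaryType, StructField, StructType

spark = SparkSession.builder.getOrCreate()

# With this change, bytes values are inferred as BinaryType in Python 3.
df = spark.createDataFrame([(b"abc",), (b"\x00\x01",)], ["payload"])
df.printSchema()  # payload: binary (nullable = true)

# On releases without this change, the schema can be supplied explicitly instead
# of relying on inference.
schema = StructType([StructField("payload", BinaryType(), True)])
df_explicit = spark.createDataFrame([(b"abc",)], schema)
```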
# Apache Spark
Spark is a unified analytics engine for large-scale data processing. It provides high-level APIs in Scala, Java, Python, and R, and an optimized engine that supports general computation graphs for data analysis. It also supports a rich set of higher-level tools including Spark SQL for SQL and DataFrames, MLlib for machine learning, GraphX for graph processing, and Structured Streaming for stream processing.
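As a minimal sketch of the high-level Python APIs mentioned above (the data, app name, and column names here are illustrative, not taken from this README), a DataFrame can be queried both through the DataFrame API and through Spark SQL:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("example").getOrCreate()

# Build a small DataFrame and query it with the DataFrame API.
df = spark.createDataFrame([("Alice", 34), ("Bob", 29)], ["name", "age"])
df.filter(df.age > 30).show()

# The same data can also be queried with SQL via a temporary view.
df.createOrReplaceTempView("people")
spark.sql("SELECT name FROM people WHERE age > 30").show()
```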
## Online Documentation

You can find the latest Spark documentation, including a programming guide, on the project web page.
## Python Packaging
This README file only contains basic information related to pip installed PySpark. This packaging is currently experimental and may change in future versions (although we will do our best to keep compatibility). Using PySpark requires the Spark JARs, and if you are building this from source please see the builder instructions at "Building Spark".
The Python packaging for Spark is not intended to replace all of the other use cases. This Python packaged version of Spark is suitable for interacting with an existing cluster (be it Spark standalone, YARN, or Mesos) - but does not contain the tools required to set up your own standalone Spark cluster. You can download the full version of Spark from the Apache Spark downloads page.
NOTE: If you are using this with a Spark standalone cluster you must ensure that the version (including minor version) matches or you may experience odd errors.
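As a hedged sketch of that workflow, a pip-installed PySpark client can point its `SparkSession` at an existing cluster roughly as follows; the master URL and app name below are placeholders, not values prescribed by this README.

```python
from pyspark.sql import SparkSession

# Point a pip-installed PySpark at an existing cluster. The URL below is a
# placeholder for your own standalone master ("yarn" would be used on YARN).
spark = (
    SparkSession.builder
    .master("spark://master-host:7077")
    .appName("pip-installed-client")
    .getOrCreate()
)

# Per the note above, the client and cluster Spark versions should match,
# including the minor version.
print(spark.version)
```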
## Python Requirements
At its core PySpark depends on Py4J (currently version 0.10.8.1), but some additional sub-packages have their own extra requirements for some features (including numpy, pandas, and pyarrow).
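One way to pull in those optional packages is through the extras declared in PySpark's `setup.py` (for example `pip install pyspark[sql]` for pandas and pyarrow); treat the exact install commands and version pins as assumptions that vary by release. The small check below simply reports which of the commonly needed optional dependencies are importable in the local environment.

```python
import importlib.util

# Optional PySpark features rely on extra packages beyond Py4J; report which
# of the commonly needed ones are installed locally.
for name in ("numpy", "pandas", "pyarrow"):
    found = importlib.util.find_spec(name) is not None
    print(f"{name}: {'installed' if found else 'missing'}")
```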