e0538bd38c
### What changes were proposed in this pull request?

Upgrade Apache Arrow to version 1.0.1 for the Java dependency and increase the minimum version of PyArrow to 1.0.0.

This release marks a transition to binary stability of the columnar format (which was already informally backward-compatible going back to December 2017) and a transition to Semantic Versioning for the Arrow software libraries. Also note that the Java arrow-memory artifact has been split to separate the dependency on netty-buffer and allow users to select an allocator. Spark will continue to use `arrow-memory-netty` to maintain performance benefits.

Versions 1.0.0 and 1.0.1 include the following selected fixes/improvements relevant to Spark users:

- ARROW-9300 - [Java] Separate Netty Memory to its own module
- ARROW-9272 - [C++][Python] Reduce complexity in python to arrow conversion
- ARROW-9016 - [Java] Remove direct references to Netty/Unsafe Allocators
- ARROW-8664 - [Java] Add skip null check to all Vector types
- ARROW-8485 - [Integration][Java] Implement extension types integration
- ARROW-8434 - [C++] Ipc RecordBatchFileReader deserializes the Schema multiple times
- ARROW-8314 - [Python] Provide a method to select a subset of columns of a Table
- ARROW-8230 - [Java] Move Netty memory manager into a separate module
- ARROW-8229 - [Java] Move ArrowBuf into the Arrow package
- ARROW-7955 - [Java] Support large buffer for file/stream IPC
- ARROW-7831 - [Java] Unnecessary buffer allocation when calling splitAndTransferTo on variable width vectors
- ARROW-6111 - [Java] Support LargeVarChar and LargeBinary types and add integration test with C++
- ARROW-6110 - [Java] Support LargeList Type and add integration test with C++
- ARROW-5760 - [C++] Optimize Take implementation
- ARROW-300 - [Format] Add body buffer compression option to IPC message protocol using LZ4 or ZSTD
- ARROW-9098 - RecordBatch::ToStructArray cannot handle record batches with 0 column
- ARROW-9066 - [Python] Raise correct error in isnull()
- ARROW-9223 - [Python] Fix to_pandas() export for timestamps within structs
- ARROW-9195 - [Java] Wrong usage of Unsafe.get from bytearray in ByteFunctionsHelper class
- ARROW-7610 - [Java] Finish support for 64 bit int allocations
- ARROW-8115 - [Python] Conversion when mixing NaT and datetime objects not working
- ARROW-8392 - [Java] Fix overflow related corner cases for vector value comparison
- ARROW-8537 - [C++] Performance regression from ARROW-8523
- ARROW-8803 - [Java] Row count should be set before loading buffers in VectorLoader
- ARROW-8911 - [C++] Slicing a ChunkedArray with zero chunks segfaults

View release notes here:
https://arrow.apache.org/release/1.0.1.html
https://arrow.apache.org/release/1.0.0.html

### Why are the changes needed?

The upgrade brings fixes, improvements, and stability guarantees.

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

Existing tests with pyarrow 1.0.0 and 1.0.1

Closes #29686 from BryanCutler/arrow-upgrade-100-SPARK-32312.

Authored-by: Bryan Cutler <cutlerb@gmail.com>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
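The minimum-version gate this PR raises to PyArrow 1.0.0 boils down to comparing the installed version against a required floor. A minimal standalone sketch of that comparison (the `parse_version` and `meets_minimum` names are hypothetical helpers for illustration, not Spark's API; Spark's own check, shown in the file below, uses `distutils.version.LooseVersion`):

```python
MINIMUM_PYARROW = "1.0.0"


def parse_version(version):
    # Hypothetical helper: turn "1.0.1" into the tuple (1, 0, 1) so that
    # ordinary tuple comparison gives numeric, component-wise ordering.
    # This naive parse handles only dotted integers, unlike LooseVersion.
    return tuple(int(part) for part in version.split("."))


def meets_minimum(installed, minimum=MINIMUM_PYARROW):
    # True when the installed version is at least the required minimum.
    return parse_version(installed) >= parse_version(minimum)
```

For example, `meets_minimum("1.0.1")` is true, while a pre-1.0 install such as `"0.15.1"` fails the gate, because `(0, 15, 1) < (1, 0, 0)` under tuple ordering.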
61 lines
2.6 KiB
Python
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

def require_minimum_pandas_version():
    """ Raise ImportError if minimum version of Pandas is not installed
    """
    # TODO(HyukjinKwon): Relocate and deduplicate the version specification.
    minimum_pandas_version = "0.23.2"

    from distutils.version import LooseVersion
    try:
        import pandas
        have_pandas = True
    except ImportError:
        have_pandas = False
    if not have_pandas:
        raise ImportError("Pandas >= %s must be installed; however, "
                          "it was not found." % minimum_pandas_version)
    if LooseVersion(pandas.__version__) < LooseVersion(minimum_pandas_version):
        raise ImportError("Pandas >= %s must be installed; however, "
                          "your version was %s." % (minimum_pandas_version, pandas.__version__))


def require_minimum_pyarrow_version():
    """ Raise ImportError if minimum version of pyarrow is not installed
    """
    # TODO(HyukjinKwon): Relocate and deduplicate the version specification.
    minimum_pyarrow_version = "1.0.0"

    from distutils.version import LooseVersion
    import os
    try:
        import pyarrow
        have_arrow = True
    except ImportError:
        have_arrow = False
    if not have_arrow:
        raise ImportError("PyArrow >= %s must be installed; however, "
                          "it was not found." % minimum_pyarrow_version)
    if LooseVersion(pyarrow.__version__) < LooseVersion(minimum_pyarrow_version):
        raise ImportError("PyArrow >= %s must be installed; however, "
                          "your version was %s." % (minimum_pyarrow_version, pyarrow.__version__))
    if os.environ.get("ARROW_PRE_0_15_IPC_FORMAT", "0") == "1":
        raise RuntimeError("Arrow legacy IPC format is not supported in PySpark, "
                           "please unset ARROW_PRE_0_15_IPC_FORMAT")
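The `ARROW_PRE_0_15_IPC_FORMAT` guard at the end of `require_minimum_pyarrow_version` can be exercised in isolation. A minimal sketch that mirrors its logic without importing PySpark (the `check_legacy_ipc` name and the injectable `env` parameter are assumptions for testability, not Spark's API):

```python
import os


def check_legacy_ipc(env=None):
    # Mirrors the guard above: once PyArrow >= 1.0.0 is required, the
    # pre-0.15 Arrow IPC format is no longer supported, so an environment
    # requesting it is rejected outright.
    if env is None:
        env = os.environ
    if env.get("ARROW_PRE_0_15_IPC_FORMAT", "0") == "1":
        raise RuntimeError("Arrow legacy IPC format is not supported in PySpark, "
                           "please unset ARROW_PRE_0_15_IPC_FORMAT")
```

With the variable unset (or set to anything other than `"1"`) the check passes silently; with `{"ARROW_PRE_0_15_IPC_FORMAT": "1"}` it raises `RuntimeError`.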