spark-instrumented-optimizer/python/pyspark/pandas/numpy_compat.py

#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from collections import OrderedDict
from typing import Callable, Any

import numpy as np

from pyspark.sql import functions as F, Column
from pyspark.sql.types import DoubleType, LongType, BooleanType

unary_np_spark_mappings = OrderedDict(
    {
        "abs": F.abs,
        "absolute": F.abs,
        "arccos": F.acos,
        "arccosh": F.pandas_udf(lambda s: np.arccosh(s), DoubleType()),
        "arcsin": F.asin,
        "arcsinh": F.pandas_udf(lambda s: np.arcsinh(s), DoubleType()),
        "arctan": F.atan,
        "arctanh": F.pandas_udf(lambda s: np.arctanh(s), DoubleType()),
        "bitwise_not": F.bitwiseNOT,
        "cbrt": F.cbrt,
        "ceil": F.ceil,
        # It requires complex type which Koalas does not support yet
        "conj": lambda _: NotImplemented,
        "conjugate": lambda _: NotImplemented,  # It requires complex type
        "cos": F.cos,
        "cosh": F.pandas_udf(lambda s: np.cosh(s), DoubleType()),
        "deg2rad": F.pandas_udf(lambda s: np.deg2rad(s), DoubleType()),
        "degrees": F.degrees,
        "exp": F.exp,
        "exp2": F.pandas_udf(lambda s: np.exp2(s), DoubleType()),
        "expm1": F.expm1,
        "fabs": F.pandas_udf(lambda s: np.fabs(s), DoubleType()),
        "floor": F.floor,
        "frexp": lambda _: NotImplemented,  # 'frexp' output lengths become different
        # and it cannot be supported via pandas UDF.
        "invert": F.pandas_udf(lambda s: np.invert(s), DoubleType()),
        "isfinite": lambda c: c != float("inf"),
        "isinf": lambda c: c == float("inf"),
        "isnan": F.isnan,
"isnat": lambda c: NotImplemented, # Koalas and PySpark does not have Nat concept.
"log": F.log,
"log10": F.log10,
"log1p": F.log1p,
"log2": F.pandas_udf(lambda s: np.log2(s), DoubleType()),
"logical_not": lambda c: ~(c.cast(BooleanType())),
"matmul": lambda _: NotImplemented, # Can return a NumPy array in pandas.
"negative": lambda c: c * -1,
"positive": lambda c: c,
"rad2deg": F.pandas_udf(lambda s: np.rad2deg(s), DoubleType()),
"radians": F.radians,
"reciprocal": F.pandas_udf(lambda s: np.reciprocal(s), DoubleType()),
"rint": F.pandas_udf(lambda s: np.rint(s), DoubleType()),
"sign": lambda c: F.when(c == 0, 0).when(c < 0, -1).otherwise(1),
"signbit": lambda c: F.when(c < 0, True).otherwise(False),
"sin": F.sin,
"sinh": F.pandas_udf(lambda s: np.sinh(s), DoubleType()),
"spacing": F.pandas_udf(lambda s: np.spacing(s), DoubleType()),
"sqrt": F.sqrt,
"square": F.pandas_udf(lambda s: np.square(s), DoubleType()),
"tan": F.tan,
"tanh": F.pandas_udf(lambda s: np.tanh(s), DoubleType()),
"trunc": F.pandas_udf(lambda s: np.trunc(s), DoubleType()),
}
)
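
# NOTE (editorial sketch, not part of the original module): each value above maps a
# NumPy unary ufunc name to a callable that takes a Spark Column and returns a Spark
# Column; ufuncs without a Spark built-in fall back to a pandas UDF, and unsupported
# ones return NotImplemented. A minimal usage sketch, assuming a hypothetical Spark
# DataFrame `sdf` with a numeric column "x":
#
#     sdf.select(unary_np_spark_mappings["arccosh"](sdf["x"]))
#
# In practice these entries are looked up by maybe_dispatch_ufunc_to_spark_func below
# rather than called directly.
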
binary_np_spark_mappings = OrderedDict(
    {
        "arctan2": F.atan2,
        "bitwise_and": lambda c1, c2: c1.bitwiseAND(c2),
        "bitwise_or": lambda c1, c2: c1.bitwiseOR(c2),
        "bitwise_xor": lambda c1, c2: c1.bitwiseXOR(c2),
        "copysign": F.pandas_udf(lambda s1, s2: np.copysign(s1, s2), DoubleType()),
        "float_power": F.pandas_udf(lambda s1, s2: np.float_power(s1, s2), DoubleType()),
        "floor_divide": F.pandas_udf(lambda s1, s2: np.floor_divide(s1, s2), DoubleType()),
        "fmax": F.pandas_udf(lambda s1, s2: np.fmax(s1, s2), DoubleType()),
        "fmin": F.pandas_udf(lambda s1, s2: np.fmin(s1, s2), DoubleType()),
        "fmod": F.pandas_udf(lambda s1, s2: np.fmod(s1, s2), DoubleType()),
        "gcd": F.pandas_udf(lambda s1, s2: np.gcd(s1, s2), DoubleType()),
        "heaviside": F.pandas_udf(lambda s1, s2: np.heaviside(s1, s2), DoubleType()),
        "hypot": F.hypot,
        "lcm": F.pandas_udf(lambda s1, s2: np.lcm(s1, s2), DoubleType()),
        "ldexp": F.pandas_udf(lambda s1, s2: np.ldexp(s1, s2), DoubleType()),
        "left_shift": F.pandas_udf(lambda s1, s2: np.left_shift(s1, s2), LongType()),
        "logaddexp": F.pandas_udf(lambda s1, s2: np.logaddexp(s1, s2), DoubleType()),
        "logaddexp2": F.pandas_udf(lambda s1, s2: np.logaddexp2(s1, s2), DoubleType()),
        "logical_and": lambda c1, c2: c1.cast(BooleanType()) & c2.cast(BooleanType()),
        "logical_or": lambda c1, c2: c1.cast(BooleanType()) | c2.cast(BooleanType()),
        "logical_xor": lambda c1, c2: (
            # mimics xor by logical operators.
            (c1.cast(BooleanType()) | c2.cast(BooleanType()))
            & (~(c1.cast(BooleanType())) | ~(c2.cast(BooleanType())))
        ),
        "maximum": F.greatest,
        "minimum": F.least,
        "modf": F.pandas_udf(lambda s1, s2: np.modf(s1, s2), DoubleType()),
        "nextafter": F.pandas_udf(lambda s1, s2: np.nextafter(s1, s2), DoubleType()),
        "right_shift": F.pandas_udf(lambda s1, s2: np.right_shift(s1, s2), LongType()),
    }
)
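
# NOTE (editorial sketch, not part of the original module): the binary mappings follow
# the same convention but take two Spark Columns. Again assuming a hypothetical Spark
# DataFrame `sdf` with numeric columns "x" and "y":
#
#     sdf.select(binary_np_spark_mappings["maximum"](sdf["x"], sdf["y"]))
#
# is the column-wise counterpart of np.maximum, implemented here via F.greatest.
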
# Copied from pandas.
# See also https://docs.scipy.org/doc/numpy/reference/arrays.classes.html#standard-array-subclasses
def maybe_dispatch_ufunc_to_dunder_op(
    ser_or_index, ufunc: Callable, method: str, *inputs, **kwargs: Any
):
    special = {
        "add",
        "sub",
        "mul",
        "pow",
        "mod",
        "floordiv",
        "truediv",
        "divmod",
        "eq",
        "ne",
        "lt",
        "gt",
        "le",
        "ge",
        "remainder",
        "matmul",
    }
    aliases = {
        "absolute": "abs",
        "multiply": "mul",
        "floor_divide": "floordiv",
        "true_divide": "truediv",
        "power": "pow",
        "remainder": "mod",
        "divide": "div",
        "equal": "eq",
        "not_equal": "ne",
        "less": "lt",
        "less_equal": "le",
        "greater": "gt",
        "greater_equal": "ge",
    }
    # For op(., Array) -> Array.__r{op}__
    flipped = {
        "lt": "__gt__",
        "le": "__ge__",
        "gt": "__lt__",
        "ge": "__le__",
        "eq": "__eq__",
        "ne": "__ne__",
    }
    op_name = ufunc.__name__
    op_name = aliases.get(op_name, op_name)

    def not_implemented(*args, **kwargs):
        return NotImplemented

    if method == "__call__" and op_name in special and kwargs.get("out") is None:
        if isinstance(inputs[0], type(ser_or_index)):
            name = "__{}__".format(op_name)
            return getattr(ser_or_index, name, not_implemented)(inputs[1])
        else:
            name = flipped.get(op_name, "__r{}__".format(op_name))
            return getattr(ser_or_index, name, not_implemented)(inputs[0])
    else:
        return NotImplemented
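
# NOTE (editorial sketch, not part of the original module): maybe_dispatch_ufunc_to_dunder_op
# rewrites a ufunc call into the matching Python operator method on the pandas-on-Spark
# object. Roughly, for an `obj` of the dispatching type and some `other` operand:
#
#     np.multiply(obj, other)  -> obj.__mul__(other)
#     np.power(2, obj)         -> obj.__rpow__(2)
#     np.less(2, obj)          -> obj.__gt__(2)   # comparisons are flipped, not reflected
#
# Ufunc names outside the `special` set (after aliasing), and any call passing `out=`,
# return NotImplemented so NumPy can try other handlers.
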
# See also https://docs.scipy.org/doc/numpy/reference/arrays.classes.html#standard-array-subclasses
def maybe_dispatch_ufunc_to_spark_func(
    ser_or_index, ufunc: Callable, method: str, *inputs, **kwargs: Any
):
    from pyspark.pandas.base import column_op

    op_name = ufunc.__name__

    if (
        method == "__call__"
        and (op_name in unary_np_spark_mappings or op_name in binary_np_spark_mappings)
        and kwargs.get("out") is None
    ):
        np_spark_map_func = unary_np_spark_mappings.get(op_name) or binary_np_spark_mappings.get(
            op_name
        )

        def convert_arguments(*args):
            args = [  # type: ignore
                F.lit(inp) if not isinstance(inp, Column) else inp for inp in args
            ]  # type: ignore
            return np_spark_map_func(*args)

        return column_op(convert_arguments)(*inputs)  # type: ignore
    else:
        return NotImplemented
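
# NOTE (editorial sketch, not part of the original module): this dispatcher is what lets
# plain NumPy calls operate on pandas-on-Spark objects. When a Series/Index routes its
# __array_ufunc__ through the two dispatchers above (as pyspark.pandas.base does), a
# sketch like the following, assuming `ps` is pyspark.pandas:
#
#     psser = ps.Series([1.0, 2.0, 3.0])
#     np.log10(psser)         # looked up in unary_np_spark_mappings -> F.log10
#     np.arctan2(psser, 1.0)  # looked up in binary_np_spark_mappings -> F.atan2;
#                             # the scalar 1.0 is wrapped with F.lit by convert_arguments
#
# stays lazy and is evaluated on Spark Columns rather than materializing NumPy arrays.
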
def _test():
    import os
    import doctest
    import sys
    from pyspark.sql import SparkSession
    import pyspark.pandas.numpy_compat

    os.chdir(os.environ["SPARK_HOME"])

    globs = pyspark.pandas.numpy_compat.__dict__.copy()
    globs["ps"] = pyspark.pandas
    spark = (
        SparkSession.builder.master("local[4]")
        .appName("pyspark.pandas.numpy_compat tests")
        .getOrCreate()
    )
    (failure_count, test_count) = doctest.testmod(
        pyspark.pandas.numpy_compat,
        globs=globs,
        optionflags=doctest.ELLIPSIS | doctest.NORMALIZE_WHITESPACE,
    )
    spark.stop()
    if failure_count:
        sys.exit(-1)


if __name__ == "__main__":
    _test()