spark-instrumented-optimizer/python/pyspark/pandas/spark/functions.py
Takuya UESHIN 2a335f2d7d [SPARK-34941][PYTHON] Fix mypy errors and enable mypy check for pandas-on-Spark
### What changes were proposed in this pull request?

Fixes `mypy` errors and enables `mypy` check for pandas-on-Spark.

### Why are the changes needed?

The `mypy` check for pandas-on-Spark was disabled during the initial porting.
It should be enabled again; otherwise we will miss type checking errors.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

The enabled `mypy` check and existing unit tests should pass.

Closes #32540 from ueshin/issues/SPARK-34941/pandas_mypy.

Authored-by: Takuya UESHIN <ueshin@databricks.com>
Signed-off-by: Takuya UESHIN <ueshin@databricks.com>
2021-05-17 10:46:59 -07:00


#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
Additional Spark functions used in pandas-on-Spark.
"""
from pyspark import SparkContext
from pyspark.sql.column import Column, _to_java_column, _create_column_from_literal  # type: ignore


def repeat(col, n):
    """
    Repeats a string column n times, and returns it as a new string column.
    """
    sc = SparkContext._active_spark_context
    n = _to_java_column(n) if isinstance(n, Column) else _create_column_from_literal(n)
    return _call_udf(sc, "repeat", _to_java_column(col), n)


def _call_udf(sc, name, *cols):
    return Column(sc._jvm.functions.callUDF(name, _make_arguments(sc, *cols)))


def _make_arguments(sc, *cols):
    java_arr = sc._gateway.new_array(sc._jvm.Column, len(cols))
    for i, col in enumerate(cols):
        java_arr[i] = col
    return java_arr
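For context, the `repeat` wrapper above delegates to the JVM-side `repeat` SQL function, whose per-row semantics match plain Python string repetition. The sketch below is only an illustration of that behavior in pure Python (no Spark session required); `repeat_py` is a hypothetical helper, not part of the PySpark API.

```python
def repeat_py(s: str, n: int) -> str:
    # Mirrors Spark SQL's repeat(str, n): concatenate the string n times.
    # Spark returns an empty string for n <= 0; Python's `*` does the same.
    return s * n


print(repeat_py("ab", 3))   # ababab
print(repeat_py("ab", 0))   # (empty string)
```

The real `repeat` differs in that `n` may itself be a `Column`, which is why the wrapper converts it with `_to_java_column` or `_create_column_from_literal` before calling the UDF.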