[SPARK-27870][PYTHON][FOLLOW-UP] Rename spark.sql.pandas.udf.buffer.size to spark.sql.execution.pandas.udf.buffer.size

### What changes were proposed in this pull request?

This PR renames `spark.sql.pandas.udf.buffer.size` to `spark.sql.execution.pandas.udf.buffer.size` to be more consistent with other pandas configuration prefixes, given:
- `spark.sql.execution.pandas.arrowSafeTypeConversion`
- `spark.sql.execution.pandas.respectSessionTimeZone`
- `spark.sql.legacy.execution.pandas.groupedMap.assignColumnsByName`
- other configurations like `spark.sql.execution.arrow.*`.
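The consistency argument above can be illustrated with a small plain-Python sketch (not Spark source code): after the rename, the pandas-related configurations listed in this PR all share the same `spark.sql.execution.pandas.` prefix.

```python
# Illustrative sketch only: verify that the renamed configuration shares the
# prefix used by the other pandas-related configurations named in this PR.
renamed = "spark.sql.execution.pandas.udf.buffer.size"
related = [
    "spark.sql.execution.pandas.arrowSafeTypeConversion",
    "spark.sql.execution.pandas.respectSessionTimeZone",
]
prefix = "spark.sql.execution.pandas."

# All of these start with the common prefix; the old name
# "spark.sql.pandas.udf.buffer.size" did not.
assert all(name.startswith(prefix) for name in [renamed] + related)
assert not "spark.sql.pandas.udf.buffer.size".startswith(prefix)
print("consistent prefix:", prefix)
```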

### Why are the changes needed?

To make configuration names consistent.

### Does this PR introduce any user-facing change?

No, because this configuration has not been released yet.

### How was this patch tested?

Existing tests should cover this change.

Closes #27450 from HyukjinKwon/SPARK-27870-followup.

Authored-by: HyukjinKwon <gurwls223@apache.org>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
Commit 692e3ddb4e (parent 898716980d), committed by HyukjinKwon on 2020-02-05 11:38:33 +09:00.
2 changed files with 2 additions and 2 deletions.


```diff
@@ -868,7 +868,7 @@ class ScalarPandasUDFTests(ReusedSQLTestCase):
         with QuietTest(self.sc):
             with self.sql_conf({"spark.sql.execution.arrow.maxRecordsPerBatch": 1,
-                                "spark.sql.pandas.udf.buffer.size": 4}):
+                                "spark.sql.execution.pandas.udf.buffer.size": 4}):
                 self.spark.range(10).repartition(1) \
                     .select(test_close(col("id"))).limit(2).collect()
                 # wait here because python udf worker will take some time to detect
```


```diff
@@ -1600,7 +1600,7 @@ object SQLConf {
       .createWithDefault(10000)

   val PANDAS_UDF_BUFFER_SIZE =
-    buildConf("spark.sql.pandas.udf.buffer.size")
+    buildConf("spark.sql.execution.pandas.udf.buffer.size")
       .doc(
         s"Same as ${BUFFER_SIZE} but only applies to Pandas UDF executions. If it is not set, " +
         s"the fallback is ${BUFFER_SIZE}. Note that Pandas execution requires more than 4 bytes. " +
```
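The fallback behavior described in the `.doc()` string above can be sketched in plain Python (this is not Spark source code): the pandas-specific buffer size applies when set, otherwise the generic `BUFFER_SIZE` value is used. The generic config name `spark.buffer.size` and its 65536-byte default are assumptions here, not taken from this diff.

```python
# Hedged sketch of the documented fallback: prefer the pandas-specific
# buffer size, fall back to the generic buffer size, then to an assumed
# 65536-byte default.
def effective_buffer_size(conf):
    """Return the buffer size a Pandas UDF execution would use."""
    return conf.get("spark.sql.execution.pandas.udf.buffer.size",
                    conf.get("spark.buffer.size", 65536))

# When the pandas-specific key is unset, the generic value (or default) wins.
print(effective_buffer_size({}))
print(effective_buffer_size({"spark.sql.execution.pandas.udf.buffer.size": 4}))
```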