From c2320a43c7b40c270232e6c0affcbbe01776af61 Mon Sep 17 00:00:00 2001
From: Chao Sun
Date: Tue, 26 Jan 2021 15:11:45 -0800
Subject: [PATCH] [SPARK-34052][FOLLOWUP][DOC] Add document in SQL migration
 guide

### What changes were proposed in this pull request?

Add documentation for the behavior change in SPARK-34052 to the SQL migration guide.

### Why are the changes needed?

Document the behavior change for Spark users.

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

N/A

Closes #31351 from sunchao/SPARK-34052-followup.

Authored-by: Chao Sun
Signed-off-by: Dongjoon Hyun
---
 docs/sql-migration-guide.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/docs/sql-migration-guide.md b/docs/sql-migration-guide.md
index da092488c9..ff5d985f71 100644
--- a/docs/sql-migration-guide.md
+++ b/docs/sql-migration-guide.md
@@ -87,6 +87,8 @@ license: |
 
 - In Spark 3.1, the temporary view will have the same behavior as the permanent view, i.e. capture and store runtime SQL configs, SQL text, catalog and namespace. The captured view properties will be applied during the parsing and analysis phases of the view resolution. To restore the behavior before Spark 3.1, you can set `spark.sql.legacy.storeAnalyzedPlanForView` to `true`.
 
+- In Spark 3.1, a temporary view created via `CACHE TABLE ... AS SELECT` will also have the same behavior as a permanent view. In particular, when the temporary view is dropped, Spark will invalidate all of its cache dependents, as well as the cache for the temporary view itself. This is different from Spark 3.0 and below, which only invalidates the latter. To restore the previous behavior, you can set `spark.sql.legacy.storeAnalyzedPlanForView` to `true`.
+
 - Since Spark 3.1, CHAR/CHARACTER and VARCHAR types are supported in the table schema. Table scan/insertion will respect the char/varchar semantics. If char/varchar is used in places other than the table schema, an exception will be thrown (CAST is an exception that simply treats char/varchar as string like before). To restore the behavior before Spark 3.1, which treats them as STRING types and ignores a length parameter, e.g. `CHAR(4)`, you can set `spark.sql.legacy.charVarcharAsString` to `true`.
 
 - In Spark 3.1, `AnalysisException` is replaced by its sub-classes that are thrown for tables from Hive external catalog in the following situations:
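
A minimal Scala sketch of the invalidation behavior described in the added bullet, assuming Spark 3.1+, a local `SparkSession`, and hypothetical table/view names `src` and `tmp_view`:

```scala
import org.apache.spark.sql.SparkSession

object CacheDependentInvalidationSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("cache-dependent-invalidation-sketch")
      // Uncomment to restore the pre-3.1 behavior described in the guide:
      // .config("spark.sql.legacy.storeAnalyzedPlanForView", "true")
      .getOrCreate()

    spark.range(10).write.saveAsTable("src")

    // CACHE TABLE ... AS SELECT creates a cached temporary view.
    spark.sql("CACHE TABLE tmp_view AS SELECT id FROM src WHERE id > 3")

    // A dependent query built on top of the temporary view, cached separately.
    val dependent = spark.sql("SELECT id * 2 AS doubled FROM tmp_view")
    dependent.cache().count() // materialize the dependent cache

    // In Spark 3.1, dropping the temporary view invalidates both the view's
    // own cache and the dependent cache above; Spark 3.0 and below only
    // invalidated the view's own cache.
    spark.sql("DROP VIEW tmp_view")

    spark.stop()
  }
}
```

Note that the same legacy flag, `spark.sql.legacy.storeAnalyzedPlanForView`, governs both this bullet and the preceding one on temporary views, so setting it restores the pre-3.1 behavior in both cases.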