---
layout: global
title: "Migration Guide: SQL, Datasets and DataFrame"
displayTitle: "Migration Guide: SQL, Datasets and DataFrame"
license: |
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

     http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
---

* Table of contents
{:toc}

## Upgrading from Spark SQL 3.0 to 3.1

  - Since Spark 3.1, `grouping_id()` returns long values. In Spark version 3.0 and earlier, this function returns int values. To restore the behavior before Spark 3.1, you can set `spark.sql.legacy.integerGroupingId` to `true`.

## Upgrading from Spark SQL 2.4 to 3.0

  - Since Spark 3.0, when inserting a value into a table column with a different data type, the type coercion is performed as per the ANSI SQL standard. Certain unreasonable type conversions, such as converting `string` to `int` and `double` to `boolean`, are disallowed. A runtime exception is thrown if the value is out of range for the data type of the column. In Spark version 2.4 and earlier, type conversions during table insertion are allowed as long as they are a valid `Cast`. When inserting an out-of-range value into an integral field, the low-order bits of the value are inserted (the same as Java/Scala numeric type casting). For example, if 257 is inserted into a field of byte type, the result is 1. The behavior is controlled by the option `spark.sql.storeAssignmentPolicy`, with "ANSI" as the default value. Setting the option to "Legacy" restores the previous behavior; see the sketch below.

  - In Spark 3.0, the deprecated methods `SQLContext.createExternalTable` and `SparkSession.createExternalTable` have been removed in favor of their replacement, `createTable`.

  - In Spark 3.0, the deprecated `HiveContext` class has been removed. Use `SparkSession.builder.enableHiveSupport()` instead.

  - Since Spark 3.0, the configuration `spark.sql.crossJoin.enabled` has become an internal configuration and is `true` by default, so by default Spark no longer raises an exception on SQL with an implicit cross join.

  - Since Spark 3.0, we reversed the argument order of the trim function from `TRIM(trimStr, str)` to `TRIM(str, trimStr)` to be compatible with other databases.

  - In Spark version 2.4 and earlier, SQL queries such as `FROM <table>` or `FROM <table> UNION ALL FROM <table>` are supported by accident. In hive-style `FROM <table> SELECT <expr>`, the `SELECT` clause is not negligible. Neither Hive nor Presto support this syntax, so such queries are treated as invalid since Spark 3.0.
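The following is a minimal sketch (not part of the original guide; the table name `t` and the local master are illustrative) of how `spark.sql.storeAssignmentPolicy` changes insertion behavior:

```scala
import org.apache.spark.sql.SparkSession

object StoreAssignmentPolicyDemo extends App {
  val spark = SparkSession.builder()
    .appName("StoreAssignmentPolicyDemo")
    .master("local[*]")
    .getOrCreate()

  spark.sql("CREATE TABLE t (b BYTE) USING parquet")

  // Spark 3.0 default is "ANSI": inserting the out-of-range value 257
  // into a byte column throws an exception.
  // spark.sql("INSERT INTO t VALUES (257)")

  // "Legacy" restores the 2.4 behavior: only the low-order bits are kept,
  // so 257 is stored as 1.
  spark.conf.set("spark.sql.storeAssignmentPolicy", "Legacy")
  spark.sql("INSERT INTO t VALUES (257)")
  spark.sql("SELECT b FROM t").show() // prints 1

  spark.stop()
}
```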
  - In Spark version 2.4 and earlier, casting the special string values `infinity`, `+infinity`, `-infinity`, `inf`, `+inf`, `-inf`, and `nan` to `double` or `float` returns `NULL`. Since Spark 3.0, these strings are parsed as the corresponding special floating-point values, as the table below illustrates:

Operation | Result prior to Spark 3.0 | Result starting Spark 3.0 |
---|---|---|
CAST('infinity' AS DOUBLE)<br>CAST('+infinity' AS DOUBLE)<br>CAST('inf' AS DOUBLE)<br>CAST('+inf' AS DOUBLE) | NULL | Double.PositiveInfinity |
CAST('-infinity' AS DOUBLE)<br>CAST('-inf' AS DOUBLE) | NULL | Double.NegativeInfinity |
CAST('infinity' AS FLOAT)<br>CAST('+infinity' AS FLOAT)<br>CAST('inf' AS FLOAT)<br>CAST('+inf' AS FLOAT) | NULL | Float.PositiveInfinity |
CAST('-infinity' AS FLOAT)<br>CAST('-inf' AS FLOAT) | NULL | Float.NegativeInfinity |
CAST('nan' AS DOUBLE) | NULL | Double.NaN |
CAST('nan' AS FLOAT) | NULL | Float.NaN |
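A quick way to observe the change (a hypothetical snippet, assuming a running `SparkSession` named `spark`):

```scala
// On Spark 3.0+, the special strings are recognized when cast to
// double/float; on Spark 2.4 the same query returns NULLs.
spark.sql(
  "SELECT CAST('inf' AS DOUBLE), CAST('-infinity' AS DOUBLE), CAST('nan' AS FLOAT)"
).show()
// Spark 3.0: Infinity, -Infinity, NaN
// Spark 2.4: null, null, null
```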
  - Since Spark 3.0, the `spark-sql` interface pads decimal values with trailing zeros up to the scale of the column, for example:

Query | Spark 2.4 or Prior | Spark 3.0 |
---|---|---|
SELECT CAST(1 AS decimal(38, 18)); | 1 | 1.000000000000000000 |
  - Since Spark 3.0, the properties listed below are reserved. Commands fail if reserved properties are specified in generic property lists such as `CREATE DATABASE ... WITH DBPROPERTIES` or `ALTER TABLE ... SET TBLPROPERTIES`; their dedicated clauses must be used instead, as the sketch after the table shows.

Property (case sensitive) | Database Reserved | Table Reserved | Remarks |
---|---|---|---|
provider | no | yes | For tables, use the USING clause to specify it. Once set, it can't be changed. |
location | yes | yes | For databases and tables, use the LOCATION clause to specify it. |
owner | yes | yes | For databases and tables, it is determined by the user who runs Spark and creates it. |
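A minimal sketch of the effect on DDL (database names and paths are hypothetical; the exact exception type may vary by version):

```scala
// The dedicated clause works:
spark.sql("CREATE DATABASE db1 COMMENT 'demo' LOCATION '/tmp/db1'")

// Since Spark 3.0, putting a reserved property into a generic property
// list fails with an exception:
// spark.sql("CREATE DATABASE db2 WITH DBPROPERTIES ('location' = '/tmp/db2')")
```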
## Upgrading from Spark SQL 2.3 to 2.4

  - In Spark version 2.3 and earlier, the second parameter of the `array_contains` function is implicitly promoted to the element type of the first, array-type parameter. This type promotion can be lossy and may cause `array_contains` to return a wrong result. Spark 2.4 employs a safer type promotion mechanism; the resulting behavior changes are illustrated in the table below:

Query | Spark 2.3 or Prior | Spark 2.4 | Remarks |
---|---|---|---|
SELECT array_contains(array(1), 1.34D); | true | false | In Spark 2.4, the left and right parameters are promoted to array of double type and double type, respectively. |
SELECT array_contains(array(1), '1'); | true | AnalysisException is thrown. | An explicit cast can be used in the arguments to avoid the exception. In Spark 2.4, an AnalysisException is thrown since the integer type cannot be promoted to the string type in a loss-less manner. |
SELECT array_contains(array(1), 'anystring'); | null | AnalysisException is thrown. | An explicit cast can be used in the arguments to avoid the exception. In Spark 2.4, an AnalysisException is thrown since the integer type cannot be promoted to the string type in a loss-less manner. |
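As the remarks note, an explicit cast avoids the exception (an illustrative snippet, assuming a `SparkSession` named `spark` on Spark 2.4+):

```scala
// Throws AnalysisException on Spark 2.4:
// spark.sql("SELECT array_contains(array(1), '1')")

// Works once the argument types agree explicitly:
spark.sql("SELECT array_contains(array(1), CAST('1' AS INT))").show()    // true
spark.sql("SELECT array_contains(array(CAST(1 AS STRING)), '1')").show() // true
```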
## Upgrading from Spark SQL 2.2 to 2.3

  - Since Spark 2.3, partition column inference finds the correct common type for conflicting inferred types across partitions (previously it could end up with, for example, double type as the common type for double type and date type). The conflict resolution follows the table below:

InputA \ InputB | NullType | IntegerType | LongType | DecimalType(38,0)* | DoubleType | DateType | TimestampType | StringType |
---|---|---|---|---|---|---|---|---|
NullType | NullType | IntegerType | LongType | DecimalType(38,0) | DoubleType | DateType | TimestampType | StringType |
IntegerType | IntegerType | IntegerType | LongType | DecimalType(38,0) | DoubleType | StringType | StringType | StringType |
LongType | LongType | LongType | LongType | DecimalType(38,0) | StringType | StringType | StringType | StringType |
DecimalType(38,0)* | DecimalType(38,0) | DecimalType(38,0) | DecimalType(38,0) | DecimalType(38,0) | StringType | StringType | StringType | StringType |
DoubleType | DoubleType | DoubleType | StringType | StringType | DoubleType | StringType | StringType | StringType |
DateType | DateType | StringType | StringType | StringType | StringType | DateType | TimestampType | StringType |
TimestampType | TimestampType | StringType | StringType | StringType | StringType | TimestampType | TimestampType | StringType |
StringType | StringType | StringType | StringType | StringType | StringType | StringType | StringType | StringType |

Note that for `DecimalType(38,0)*` the table intentionally does not cover all other combinations of scales and precisions, because currently decimal type is only inferred for integer-like values such as `BigInteger`/`BigInt`; `1.1`, for example, is inferred as double type.
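A sketch of how this conflict resolution surfaces in practice (hypothetical paths; assumes a `SparkSession` named `spark`):

```scala
import spark.implicits._

// Two partitions whose inferred partition-value types conflict:
// "2018-01-01" infers DateType, "1.1" infers DoubleType.
Seq(1).toDF("a").write.parquet("/tmp/demo/col=2018-01-01")
Seq(2).toDF("a").write.parquet("/tmp/demo/col=1.1")

// Per the table above, DateType x DoubleType resolves to StringType.
spark.read.parquet("/tmp/demo").printSchema()
// root
//  |-- a: integer (nullable = true)
//  |-- col: string (nullable = true)
```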