This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
     new dea237c  [SPARK-31351][DOC] Migration Guide Auditing for Spark 3.0 
Release
dea237c is described below

commit dea237c08d5dad3ac186ec15d7c59e3be86c021e
Author: gatorsmile <[email protected]>
AuthorDate: Wed Apr 8 12:27:40 2020 +0900

    [SPARK-31351][DOC] Migration Guide Auditing for Spark 3.0 Release
    
    This PR is to audit the migration guides in the Spark 3.0 release:
    
    - correct grammar errors
    - clean up some items
    - replace HTML tables with markdown tables
    
    N/A
    
    No
    
    Screenshot:
    
    
![screencapture-127-0-0-1-4000-sql-migration-guide-html-2020-04-04-21_36_29](https://user-images.githubusercontent.com/11567269/78467043-9477d800-76bd-11ea-8ab0-3d51ea5e9fa5.png)
    ![Screen Shot 2020-04-04 at 9 28 13 
PM](https://user-images.githubusercontent.com/11567269/78467045-98a3f580-76bd-11ea-9e4b-927bf12e683a.png)
    ![Screen Shot 2020-04-04 at 9 28 02 
PM](https://user-images.githubusercontent.com/11567269/78467046-98a3f580-76bd-11ea-8ea3-9f13cb8d200b.png)
    ![Screen Shot 2020-04-04 at 9 21 40 
PM](https://user-images.githubusercontent.com/11567269/78467047-993c8c00-76bd-11ea-8c29-91afc68eb590.png)
    
    Closes #28125 from gatorsmile/updateMigrationGuide3.0.
    
    Authored-by: gatorsmile <[email protected]>
    Signed-off-by: HyukjinKwon <[email protected]>
    (cherry picked from commit a3d83948b81cb5c47f6e084801373443f54747d8)
    Signed-off-by: HyukjinKwon <[email protected]>
---
 docs/core-migration-guide.md    |   6 +-
 docs/css/main.css               |  31 ++++
 docs/pyspark-migration-guide.md |  78 ++-------
 docs/sql-migration-guide.md     | 358 ++++++++++++----------------------------
 docs/ss-migration-guide.md      |   2 +-
 5 files changed, 162 insertions(+), 313 deletions(-)

diff --git a/docs/core-migration-guide.md b/docs/core-migration-guide.md
index fdb0afa..66a489b 100644
--- a/docs/core-migration-guide.md
+++ b/docs/core-migration-guide.md
@@ -26,7 +26,7 @@ license: |
 
 - The `org.apache.spark.ExecutorPlugin` interface and related configuration 
has been replaced with
   `org.apache.spark.plugin.SparkPlugin`, which adds new functionality. Plugins 
using the old
-  interface need to be modified to extend the new interfaces. Check the
+  interface must be modified to extend the new interfaces. Check the
   [Monitoring](monitoring.html) guide for more details.
 
 - Deprecated method `TaskContext.isRunningLocally` has been removed. Local 
execution was removed and it always has returned `false`.
@@ -35,6 +35,6 @@ license: |
 
 - Deprecated method `AccumulableInfo.apply` have been removed because creating 
`AccumulableInfo` is disallowed.
 
-- Event log file will be written as UTF-8 encoding, and Spark History Server 
will replay event log files as UTF-8 encoding. Previously Spark writes event 
log file as default charset of driver JVM process, so Spark History Server of 
Spark 2.x is needed to read the old event log files in case of incompatible 
encoding.
+- Event log files are written in UTF-8 encoding, and the Spark History Server 
replays event log files as UTF-8 encoding. Previously, Spark wrote the event 
log file in the default charset of the driver JVM process, so the Spark History 
Server of Spark 2.x is needed to read old event log files in case of 
incompatible encoding.
 
-- A new protocol for fetching shuffle blocks is used. It's recommended that 
external shuffle services be upgraded when running Spark 3.0 apps. Old external 
shuffle services can still be used by setting the configuration 
`spark.shuffle.useOldFetchProtocol` to `true`. Otherwise, Spark may run into 
errors with messages like `IllegalArgumentException: Unexpected message type: 
<number>`.
+- A new protocol for fetching shuffle blocks is used. It's recommended that 
external shuffle services be upgraded when running Spark 3.0 apps. You can 
still use old external shuffle services by setting the configuration 
`spark.shuffle.useOldFetchProtocol` to `true`. Otherwise, Spark may run into 
errors with messages like `IllegalArgumentException: Unexpected message type: 
<number>`.
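
An illustrative sketch (assuming a PySpark application and an existing Spark 2.x
external shuffle service): the legacy fetch protocol mentioned above can be
enabled at session startup like this.

    from pyspark.sql import SparkSession

    # Fall back to the old shuffle block fetch protocol so that a pre-3.0
    # external shuffle service keeps working with a Spark 3.0 application.
    spark = (SparkSession.builder
             .appName("legacy-shuffle-fetch-example")
             .config("spark.shuffle.useOldFetchProtocol", "true")
             .getOrCreate())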
diff --git a/docs/css/main.css b/docs/css/main.css
index dc05d28..bb34d6e 100755
--- a/docs/css/main.css
+++ b/docs/css/main.css
@@ -2,6 +2,37 @@
    Author's custom styles
    ========================================================================== 
*/
 
+table {
+  margin: 15px 0;
+  padding: 0;
+}
+
+table tr {
+  border-top: 1px solid #cccccc;
+  background-color: white;
+  margin: 0;
+  padding: 0;
+}
+
+table tr:nth-child(2n) {
+  background-color: #F1F4F5;
+}
+
+table tr th {
+  font-weight: bold;
+  border: 1px solid #cccccc;
+  text-align: left;
+  margin: 0;
+  padding: 6px 13px;
+}
+
+table tr td {
+  border: 1px solid #cccccc;
+  text-align: left;
+  margin: 0;
+  padding: 6px 13px;
+}
+
 .navbar .brand {
   height: 50px;
   width: 110px;
diff --git a/docs/pyspark-migration-guide.md b/docs/pyspark-migration-guide.md
index 03e062f..92388ff 100644
--- a/docs/pyspark-migration-guide.md
+++ b/docs/pyspark-migration-guide.md
@@ -27,67 +27,25 @@ Many items of SQL migration can be applied when migrating 
PySpark to higher vers
 Please refer [Migration Guide: SQL, Datasets and 
DataFrame](sql-migration-guide.html).
 
 ## Upgrading from PySpark 2.4 to 3.0
+- In Spark 3.0, PySpark requires a pandas version of 0.23.2 or higher to use 
pandas related functionality, such as `toPandas`, `createDataFrame` from pandas 
DataFrame, and so on.
 
-  - Since Spark 3.0, PySpark requires a Pandas version of 0.23.2 or higher to 
use Pandas related functionality, such as `toPandas`, `createDataFrame` from 
Pandas DataFrame, etc.
-
-  - Since Spark 3.0, PySpark requires a PyArrow version of 0.12.1 or higher to 
use PyArrow related functionality, such as `pandas_udf`, `toPandas` and 
`createDataFrame` with "spark.sql.execution.arrow.enabled=true", etc.
-
-  - In PySpark, when creating a `SparkSession` with 
`SparkSession.builder.getOrCreate()`, if there is an existing `SparkContext`, 
the builder was trying to update the `SparkConf` of the existing `SparkContext` 
with configurations specified to the builder, but the `SparkContext` is shared 
by all `SparkSession`s, so we should not update them. Since 3.0, the builder 
comes to not update the configurations. This is the same behavior as Java/Scala 
API in 2.3 and above. If you want to update th [...]
-
-  - In PySpark, when Arrow optimization is enabled, if Arrow version is higher 
than 0.11.0, Arrow can perform safe type conversion when converting 
Pandas.Series to Arrow array during serialization. Arrow will raise errors when 
detecting unsafe type conversion like overflow. Setting 
`spark.sql.execution.pandas.convertToArrowArraySafely` to true can enable it. 
The default setting is false. PySpark's behavior for Arrow versions is 
illustrated in the table below:
-    <table class="table">
-        <tr>
-          <th>
-            <b>PyArrow version</b>
-          </th>
-          <th>
-            <b>Integer Overflow</b>
-          </th>
-          <th>
-            <b>Floating Point Truncation</b>
-          </th>
-        </tr>
-        <tr>
-          <td>
-            version < 0.11.0
-          </td>
-          <td>
-            Raise error
-          </td>
-          <td>
-            Silently allows
-          </td>
-        </tr>
-        <tr>
-          <td>
-            version > 0.11.0, arrowSafeTypeConversion=false
-          </td>
-          <td>
-            Silent overflow
-          </td>
-          <td>
-            Silently allows
-          </td>
-        </tr>
-        <tr>
-          <td>
-            version > 0.11.0, arrowSafeTypeConversion=true
-          </td>
-          <td>
-            Raise error
-          </td>
-          <td>
-            Raise error
-          </td>
-        </tr>
-    </table>
-
-  - Since Spark 3.0, `createDataFrame(..., verifySchema=True)` validates 
`LongType` as well in PySpark. Previously, `LongType` was not verified and 
resulted in `None` in case the value overflows. To restore this behavior, 
`verifySchema` can be set to `False` to disable the validation.
-
-  - Since Spark 3.0, `Column.getItem` is fixed such that it does not call 
`Column.apply`. Consequently, if `Column` is used as an argument to `getItem`, 
the indexing operator should be used.
-    For example, `map_col.getItem(col('id'))` should be replaced with 
`map_col[col('id')]`.
-
-  - As of Spark 3.0 `Row` field names are no longer sorted alphabetically when 
constructing with named arguments for Python versions 3.6 and above, and the 
order of fields will match that as entered. To enable sorted fields by default, 
as in Spark 2.4, set the environment variable 
`PYSPARK_ROW_FIELD_SORTING_ENABLED` to "true" for both executors and driver - 
this environment variable must be consistent on all executors and driver; 
otherwise, it may cause failures or incorrect answers. For [...]
+- In Spark 3.0, PySpark requires a PyArrow version of 0.12.1 or higher to use 
PyArrow related functionality, such as `pandas_udf`, `toPandas` and 
`createDataFrame` with "spark.sql.execution.arrow.enabled=true", etc.
+
+- In PySpark, when creating a `SparkSession` with 
`SparkSession.builder.getOrCreate()`, if there is an existing `SparkContext`, 
the builder was trying to update the `SparkConf` of the existing `SparkContext` 
with configurations specified to the builder, but the `SparkContext` is shared 
by all `SparkSession`s, so we should not update them. In 3.0, the builder no 
longer updates the configurations. This is the same behavior as the Java/Scala 
API in 2.3 and above. If you want to update them, y [...]
+
+- In PySpark, when Arrow optimization is enabled, if the PyArrow version is 
higher than 0.11.0, Arrow can perform safe type conversion when converting 
`pandas.Series` to an Arrow array during serialization. Arrow raises errors 
when it detects unsafe type conversions, such as overflow. You can enable this 
by setting `spark.sql.execution.pandas.convertToArrowArraySafely` to `true`. 
The default setting is `false`. PySpark behavior for Arrow versions is 
illustrated in the following table:
+
+  | PyArrow version | Integer overflow | Floating point truncation |
+  | --------------- | ---------------- | -------------------------- |
+  | 0.11.0 and below | Raise error | Silently allows |
+  | \> 0.11.0, arrowSafeTypeConversion=false | Silent overflow | Silently allows |
+  | \> 0.11.0, arrowSafeTypeConversion=true | Raise error | Raise error |
+
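
A short sketch of the safe-conversion setting above, assuming an active
SparkSession named `spark` and PyArrow 0.12.1 or higher installed; the sample
DataFrame is only for illustration.

    import pandas as pd

    # Enable Arrow-based conversion and fail loudly on unsafe casts (overflow,
    # float truncation) instead of silently corrupting values.
    spark.conf.set("spark.sql.execution.arrow.enabled", "true")
    spark.conf.set("spark.sql.execution.pandas.convertToArrowArraySafely", "true")

    pdf = pd.DataFrame({"id": [1, 2, 3]})
    sdf = spark.createDataFrame(pdf)  # pandas -> Spark through Arrow
    back = sdf.toPandas()             # Spark -> pandas through Arrow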
+- In Spark 3.0, `createDataFrame(..., verifySchema=True)` validates `LongType` 
as well in PySpark. Previously, `LongType` was not verified and resulted in 
`None` in case the value overflows. To restore this behavior, `verifySchema` 
can be set to `False` to disable the validation.
+
+- In Spark 3.0, `Column.getItem` is fixed such that it does not call 
`Column.apply`. Consequently, if `Column` is used as an argument to `getItem`, 
the indexing operator should be used. For example, `map_col.getItem(col('id'))` 
should be replaced with `map_col[col('id')]`.
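
For illustration, assuming a SparkSession named `spark`, the `Column.getItem`
change above looks like this in practice.

    from pyspark.sql.functions import col

    df = spark.createDataFrame([({"a": 1}, "a")], ["map_col", "id"])

    # Spark 2.4 style (no longer supported with a Column argument in 3.0):
    #   df.select(df.map_col.getItem(col("id")))
    # Spark 3.0 style: use the indexing operator instead.
    df.select(df.map_col[col("id")]).show()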
+
+- As of Spark 3.0, `Row` field names are no longer sorted alphabetically when 
constructing with named arguments for Python versions 3.6 and above, and the 
order of fields matches the order in which they were entered. To enable sorted 
fields by default, as in Spark 2.4, set the environment variable 
`PYSPARK_ROW_FIELD_SORTING_ENABLED` to `true` for both executors and driver - 
this environment variable must be consistent on all executors and the driver; 
otherwise, it may cause failures or incorrect answers. For  [...]
 
 ## Upgrading from PySpark 2.3 to 2.4
 
diff --git a/docs/sql-migration-guide.md b/docs/sql-migration-guide.md
index 5a2b848..f5c81e9 100644
--- a/docs/sql-migration-guide.md
+++ b/docs/sql-migration-guide.md
@@ -26,329 +26,189 @@ license: |
 
 ### Dataset/DataFrame APIs
 
-  - Since Spark 3.0, the Dataset and DataFrame API `unionAll` is not 
deprecated any more. It is an alias for `union`.
+  - In Spark 3.0, the Dataset and DataFrame API `unionAll` is no longer 
deprecated. It is an alias for `union`.
 
-  - In Spark version 2.4 and earlier, `Dataset.groupByKey` results to a 
grouped dataset with key attribute wrongly named as "value", if the key is 
non-struct type, e.g. int, string, array, etc. This is counterintuitive and 
makes the schema of aggregation queries weird. For example, the schema of 
`ds.groupByKey(...).count()` is `(value, count)`. Since Spark 3.0, we name the 
grouping attribute to "key". The old behaviour is preserved under a newly added 
configuration `spark.sql.legacy.data [...]
+  - In Spark 2.4 and below, `Dataset.groupByKey` results in a grouped dataset 
with the key attribute wrongly named "value" if the key is of a non-struct 
type, for example, int, string, array, etc. This is counterintuitive and makes 
the schema of aggregation queries unexpected. For example, the schema of 
`ds.groupByKey(...).count()` is `(value, count)`. In Spark 3.0, we name the 
grouping attribute "key". The old behavior is preserved under a newly added 
configuration `spark.sql.legacy [...]
 
 ### DDL Statements
 
-  - Since Spark 3.0, `CREATE TABLE` without a specific provider will use the 
value of `spark.sql.sources.default` as its provider. In Spark version 2.4 and 
earlier, it was hive. To restore the behavior before Spark 3.0, you can set 
`spark.sql.legacy.createHiveTableByDefault.enabled` to `true`.
+  - In Spark 3.0, `CREATE TABLE` without a specific provider uses the value of 
`spark.sql.sources.default` as its provider. In Spark version 2.4 and below, it 
was Hive. To restore the behavior before Spark 3.0, you can set 
`spark.sql.legacy.createHiveTableByDefault.enabled` to `true`.
 
-  - Since Spark 3.0, when inserting a value into a table column with a 
different data type, the type coercion is performed as per ANSI SQL standard. 
Certain unreasonable type conversions such as converting `string` to `int` and 
`double` to `boolean` are disallowed. A runtime exception will be thrown if the 
value is out-of-range for the data type of the column. In Spark version 2.4 and 
earlier, type conversions during table insertion are allowed as long as they 
are valid `Cast`. When inse [...]
+  - In Spark 3.0, when inserting a value into a table column with a different 
data type, the type coercion is performed as per ANSI SQL standard. Certain 
unreasonable type conversions such as converting `string` to `int` and `double` 
to `boolean` are disallowed. A runtime exception is thrown if the value is 
out-of-range for the data type of the column. In Spark version 2.4 and below, 
type conversions during table insertion are allowed as long as they are valid 
`Cast`. When inserting an o [...]
 
   - The `ADD JAR` command previously returned a result set with the single 
value 0. It now returns an empty result set.
 
-  - In Spark version 2.4 and earlier, the `SET` command works without any 
warnings even if the specified key is for `SparkConf` entries and it has no 
effect because the command does not update `SparkConf`, but the behavior might 
confuse users. Since 3.0, the command fails if a `SparkConf` key is used. You 
can disable such a check by setting 
`spark.sql.legacy.setCommandRejectsSparkCoreConfs` to `false`.
+  - In Spark version 2.4 and below, the `SET` command works without any 
warnings even if the specified key is for `SparkConf` entries and it has no 
effect because the command does not update `SparkConf`, but the behavior might 
confuse users. In Spark 3.0, the command fails if a `SparkConf` key is used. 
You can disable such a check by setting 
`spark.sql.legacy.setCommandRejectsSparkCoreConfs` to `false`.
 
-  - Refreshing a cached table would trigger a table uncache operation and then 
a table cache (lazily) operation. In Spark version 2.4 and earlier, the cache 
name and storage level are not preserved before the uncache operation. 
Therefore, the cache name and storage level could be changed unexpectedly. 
Since Spark 3.0, cache name and storage level will be first preserved for cache 
recreation. It helps to maintain a consistent cache behavior upon table 
refreshing.
+  - Refreshing a cached table triggers a table uncache operation and then a 
(lazy) table cache operation. In Spark version 2.4 and below, the cache name 
and storage level are not preserved before the uncache operation. Therefore, 
the cache name and storage level could be changed unexpectedly. In Spark 3.0, 
the cache name and storage level are first preserved for cache recreation. It 
helps to maintain a consistent cache behavior upon table refreshing.
 
-  - Since Spark 3.0, the properties listing below become reserved, commands 
will fail if we specify reserved properties in places like `CREATE DATABASE ... 
WITH DBPROPERTIES` and `ALTER TABLE ... SET TBLPROPERTIES`. We need their 
specific clauses to specify them, e.g. `CREATE DATABASE test COMMENT 'any 
comment' LOCATION 'some path'`. We can set 
`spark.sql.legacy.notReserveProperties` to `true` to ignore the 
`ParseException`, in this case, these properties will be silently removed, e.g 
`S [...]
-    <table class="table">
-        <tr>
-          <th>
-            <b>Property(case sensitive)</b>
-          </th>
-          <th>
-            <b>Database Reserved</b>
-          </th>
-          <th>
-            <b>Table Reserved</b>
-          </th>
-          <th>
-            <b>Remarks</b>
-          </th>
-        </tr>
-        <tr>
-          <td>
-            provider
-          </td>
-          <td>
-            no
-          </td>
-          <td>
-            yes
-          </td>
-          <td>
-            For tables, please use the USING clause to specify it. Once set, 
it can't be changed.
-          </td>
-        </tr>
-        <tr>
-          <td>
-            location
-          </td>
-          <td>
-            yes
-          </td>
-          <td>
-            yes
-          </td>
-          <td>
-            For databases and tables, please use the LOCATION clause to 
specify it.
-          </td>
-        </tr>
-        <tr>
-          <td>
-            owner
-          </td>
-          <td>
-            yes
-          </td>
-          <td>
-            yes
-          </td>
-          <td>
-            For databases and tables, it is determined by the user who runs 
spark and create the table.
-          </td>
-        </tr>
-    </table>
+  - In Spark 3.0, the properties listed below become reserved; commands fail 
if you specify reserved properties in places like `CREATE DATABASE ... WITH 
DBPROPERTIES` and `ALTER TABLE ... SET TBLPROPERTIES`. You need their specific 
clauses to specify them, for example, `CREATE DATABASE test COMMENT 'any 
comment' LOCATION 'some path'`. You can set 
`spark.sql.legacy.notReserveProperties` to `true` to ignore the 
`ParseException`; in this case, these properties will be silently removed, for 
[...]
+
+    | Property (case sensitive) | Database Reserved | Table Reserved | Remarks |
+    | ------------------------- | ----------------- | -------------- | ------- |
+    | provider | no | yes | For tables, use the `USING` clause to specify it. Once set, it can't be changed. |
+    | location | yes | yes | For databases and tables, use the `LOCATION` clause to specify it. |
+    | owner | yes | yes | For databases and tables, it is determined by the user who runs Spark and creates the table. |
 
-  - Since Spark 3.0, `ADD FILE` can be used to add file directories as well. 
Earlier only single files can be added using this command. To restore the 
behaviour of earlier versions, set `spark.sql.legacy.addSingleFileInAddFile` to 
`true`.
+ 
+  - In Spark 3.0, you can use `ADD FILE` to add file directories as well. 
Earlier you could add only single files using this command. To restore the 
behavior of earlier versions, set `spark.sql.legacy.addSingleFileInAddFile` to 
`true`.
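
An illustrative sketch of the `ADD FILE` change, assuming a SparkSession named
`spark`; the directory path is a placeholder.

    # Spark 3.0 accepts a directory here; Spark 2.4 accepted only single files.
    spark.sql("ADD FILE /tmp/my_resource_dir")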
 
-  - Since Spark 3.0, `SHOW TBLPROPERTIES` will cause `AnalysisException` if 
the table does not exist. In Spark version 2.4 and earlier, this scenario 
caused `NoSuchTableException`. Also, `SHOW TBLPROPERTIES` on a temporary view 
will cause `AnalysisException`. In Spark version 2.4 and earlier, it returned 
an empty result.
+  - In Spark 3.0, `SHOW TBLPROPERTIES` throws `AnalysisException` if the table 
does not exist. In Spark version 2.4 and below, this scenario caused 
`NoSuchTableException`. Also, `SHOW TBLPROPERTIES` on a temporary view causes 
`AnalysisException`. In Spark version 2.4 and below, it returned an empty 
result.
 
-  - Since Spark 3.0, `SHOW CREATE TABLE` will always return Spark DDL, even 
when the given table is a Hive serde table. For generating Hive DDL, please use 
`SHOW CREATE TABLE AS SERDE` command instead.
+  - In Spark 3.0, `SHOW CREATE TABLE` always returns Spark DDL, even when the 
given table is a Hive SerDe table. For generating Hive DDL, use the `SHOW 
CREATE TABLE AS SERDE` command instead.
 
-  - Since Spark 3.0, column of CHAR type is not allowed in non-Hive-Serde 
tables, and CREATE/ALTER TABLE commands will fail if CHAR type is detected. 
Please use STRING type instead. In Spark version 2.4 and earlier, CHAR type is 
treated as STRING type and the length parameter is simply ignored.
+  - In Spark 3.0, columns of CHAR type are not allowed in non-Hive-Serde 
tables, and CREATE/ALTER TABLE commands fail if CHAR type is detected. Use 
STRING type instead. In Spark version 2.4 and below, CHAR type is treated as 
STRING type and the length parameter is simply ignored.
 
 ### UDFs and Built-in Functions
 
-  - Since Spark 3.0, the `date_add` and `date_sub` functions only accept int, 
smallint, tinyint as the 2nd argument, fractional and non-literal string are 
not valid anymore, e.g. `date_add(cast('1964-05-23' as date), 12.34)` will 
cause `AnalysisException`. Note that, string literals are still allowed, but 
Spark will throw Analysis Exception if the string content is not a valid 
integer. In Spark version 2.4 and earlier, if the 2nd argument is fractional or 
string value, it will be coerced [...]
+  - In Spark 3.0, the `date_add` and `date_sub` functions accept only int, 
smallint, tinyint as the 2nd argument; fractional and non-literal strings are 
not valid anymore, for example: `date_add(cast('1964-05-23' as date), '12.34')` 
causes `AnalysisException`. Note that string literals are still allowed, but 
Spark throws `AnalysisException` if the string content is not a valid integer. 
In Spark version 2.4 and below, if the 2nd argument is a fractional or string 
value, it is coerced  [...]
 
-  - Since Spark 3.0, the function `percentile_approx` and its alias 
`approx_percentile` only accept integral value with range in `[1, 2147483647]` 
as its 3rd argument `accuracy`, fractional and string types are disallowed, 
e.g. `percentile_approx(10.0, 0.2, 1.8D)` will cause `AnalysisException`. In 
Spark version 2.4 and earlier, if `accuracy` is fractional or string value, it 
will be coerced to an int value, `percentile_approx(10.0, 0.2, 1.8D)` is 
operated as `percentile_approx(10.0, 0.2 [...]
+  - In Spark 3.0, the function `percentile_approx` and its alias 
`approx_percentile` accept only an integral value in the range 
`[1, 2147483647]` as their 3rd argument `accuracy`; fractional and string types 
are disallowed, for example, `percentile_approx(10.0, 0.2, 1.8D)` causes 
`AnalysisException`. In Spark version 2.4 and below, if `accuracy` is a 
fractional or string value, it is coerced to an int value, 
`percentile_approx(10.0, 0.2, 1.8D)` is operated as 
`percentile_approx(10.0, 0.2, 1)`  [...]
 
-  - Since Spark 3.0, an analysis exception will be thrown when hash 
expressions are applied on elements of MapType. To restore the behavior before 
Spark 3.0, set `spark.sql.legacy.allowHashOnMapType` to `true`.
+  - In Spark 3.0, an analysis exception is thrown when hash expressions are 
applied on elements of `MapType`. To restore the behavior before Spark 3.0, set 
`spark.sql.legacy.allowHashOnMapType` to `true`.
 
-  - Since Spark 3.0, when the `array`/`map` function is called without any 
parameters, it returns an empty collection with `NullType` as element type. In 
Spark version 2.4 and earlier, it returns an empty collection with `StringType` 
as element type. To restore the behavior before Spark 3.0, you can set 
`spark.sql.legacy.createEmptyCollectionUsingStringType` to `true`.
+  - In Spark 3.0, when the `array`/`map` function is called without any 
parameters, it returns an empty collection with `NullType` as element type. In 
Spark version 2.4 and below, it returns an empty collection with `StringType` 
as element type. To restore the behavior before Spark 3.0, you can set 
`spark.sql.legacy.createEmptyCollectionUsingStringType` to `true`.
 
-  - Since Spark 3.0, the `from_json` functions supports two modes - 
`PERMISSIVE` and `FAILFAST`. The modes can be set via the `mode` option. The 
default mode became `PERMISSIVE`. In previous versions, behavior of `from_json` 
did not conform to either `PERMISSIVE` nor `FAILFAST`, especially in processing 
of malformed JSON records. For example, the JSON string `{"a" 1}` with the 
schema `a INT` is converted to `null` by previous versions but Spark 3.0 
converts it to `Row(null)`.
+  - In Spark 3.0, the `from_json` function supports two modes - `PERMISSIVE` 
and `FAILFAST`. The modes can be set via the `mode` option. The default mode 
became `PERMISSIVE`. In previous versions, the behavior of `from_json` did not 
conform to either `PERMISSIVE` or `FAILFAST`, especially in the processing of 
malformed JSON records. For example, the JSON string `{"a" 1}` with the schema 
`a INT` is converted to `null` by previous versions, but Spark 3.0 converts it 
to `Row(null)`.
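
A PySpark sketch of the `from_json` modes described above, assuming a
SparkSession named `spark`:

    from pyspark.sql.functions import from_json

    df = spark.createDataFrame([('{"a" 1}',)], ["json"])

    # Default PERMISSIVE mode: the malformed record becomes Row(null) in 3.0.
    df.select(from_json(df.json, "a INT")).show()

    # FAILFAST mode: the same malformed record raises an error when evaluated.
    df.select(from_json(df.json, "a INT", {"mode": "FAILFAST"})).show()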
 
-  - In Spark version 2.4 and earlier, users can create map values with map 
type key via built-in function such as `CreateMap`, `MapFromArrays`, etc. Since 
Spark 3.0, it's not allowed to create map values with map type key with these 
built-in functions. Users can use `map_entries` function to convert map to 
array<struct<key, value>> as a workaround. In addition, users can still read 
map values with map type key from data source or Java/Scala collections, though 
it is discouraged.
+  - In Spark version 2.4 and below, you can create map values with map type 
key via built-in functions such as `CreateMap`, `MapFromArrays`, etc. In Spark 
3.0, it's not allowed to create map values with map type key with these 
built-in functions. You can use the `map_entries` function to convert a map to 
array<struct<key, value>> as a workaround. In addition, you can still read 
map values with map type key from data source or Java/Scala collections, though 
it is discouraged.
 
-  - In Spark version 2.4 and earlier, users can create a map with duplicated 
keys via built-in functions like `CreateMap`, `StringToMap`, etc. The behavior 
of map with duplicated keys is undefined, e.g. map look up respects the 
duplicated key appears first, `Dataset.collect` only keeps the duplicated key 
appears last, `MapKeys` returns duplicated keys, etc. Since Spark 3.0, Spark 
will throw RuntimeException while duplicated keys are found. Users can set 
`spark.sql.mapKeyDedupPolicy` to L [...]
+  - In Spark version 2.4 and below, you can create a map with duplicated keys 
via built-in functions like `CreateMap`, `StringToMap`, etc. The behavior of a 
map with duplicated keys is undefined, for example, map lookup respects the 
duplicated key that appears first, `Dataset.collect` keeps only the duplicated 
key that appears last, `MapKeys` returns duplicated keys, etc. In Spark 3.0, 
Spark throws `RuntimeException` when duplicated keys are found. You can set 
`spark.sql.mapKeyDedupPolicy` to `LAST [...]
 
-  - Since Spark 3.0, using `org.apache.spark.sql.functions.udf(AnyRef, 
DataType)` is not allowed by default. Set 
`spark.sql.legacy.allowUntypedScalaUDF` to true to keep using it. But please 
note that, in Spark version 2.4 and earlier, if 
`org.apache.spark.sql.functions.udf(AnyRef, DataType)` gets a Scala closure 
with primitive-type argument, the returned UDF will return null if the input 
values is null. However, since Spark 3.0, the UDF will return the default value 
of the Java type if t [...]
+  - In Spark 3.0, using `org.apache.spark.sql.functions.udf(AnyRef, DataType)` 
is not allowed by default. Set `spark.sql.legacy.allowUntypedScalaUDF` to 
`true` to keep using it. In Spark version 2.4 and below, if 
`org.apache.spark.sql.functions.udf(AnyRef, DataType)` gets a Scala closure 
with a primitive-type argument, the returned UDF returns null if the input 
value is null. However, in Spark 3.0, the UDF returns the default value of the 
Java type if the input value is null. For example, ` [...]
 
-  - Since Spark 3.0, a higher-order function `exists` follows the three-valued 
boolean logic, i.e., if the `predicate` returns any `null`s and no `true` is 
obtained, then `exists` will return `null` instead of `false`. For example, 
`exists(array(1, null, 3), x -> x % 2 == 0)` will be `null`. The previous 
behaviour can be restored by setting 
`spark.sql.legacy.followThreeValuedLogicInArrayExists` to `false`.
+  - In Spark 3.0, the higher-order function `exists` follows three-valued 
boolean logic, that is, if the `predicate` returns any `null`s and no `true` is 
obtained, then `exists` returns `null` instead of `false`. For example, 
`exists(array(1, null, 3), x -> x % 2 == 0)` is `null`. The previous behavior 
can be restored by setting 
`spark.sql.legacy.followThreeValuedLogicInArrayExists` to `false`.
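
A quick sketch of the `exists` change, run through `spark.sql` (SparkSession
`spark` assumed):

    # Spark 3.0: NULL (three-valued logic); earlier versions returned false.
    spark.sql("SELECT exists(array(1, null, 3), x -> x % 2 == 0)").show()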
 
-  - Since Spark 3.0, the `add_months` function does not adjust the resulting 
date to a last day of month if the original date is a last day of months. For 
example, `select add_months(DATE'2019-02-28', 1)` results `2019-03-28`. In 
Spark version 2.4 and earlier, the resulting date is adjusted when the original 
date is a last day of months. For example, adding a month to `2019-02-28` 
results in `2019-03-31`.
+  - In Spark 3.0, the `add_months` function does not adjust the resulting date 
to the last day of the month if the original date is the last day of a month. 
For example, `select add_months(DATE'2019-02-28', 1)` results in `2019-03-28`. 
In Spark version 2.4 and below, the resulting date is adjusted when the 
original date is the last day of a month. For example, adding a month to 
`2019-02-28` results in `2019-03-31`.
 
-  - In Spark version 2.4 and earlier, the `current_timestamp` function returns 
a timestamp with millisecond resolution only. Since Spark 3.0, the function can 
return the result with microsecond resolution if the underlying clock available 
on the system offers such resolution.
+  - In Spark version 2.4 and below, the `current_timestamp` function returns a 
timestamp with millisecond resolution only. In Spark 3.0, the function can 
return the result with microsecond resolution if the underlying clock available 
on the system offers such resolution.
 
-  - Since Spark 3.0, 0-argument Java UDF is executed in the executor side 
identically with other UDFs. In Spark version 2.4 and earlier, 0-argument Java 
UDF alone was executed in the driver side, and the result was propagated to 
executors, which might be more performant in some cases but caused 
inconsistency with a correctness issue in some cases.
+  - In Spark 3.0, a 0-argument Java UDF is executed on the executor side 
identically with other UDFs. In Spark version 2.4 and below, the 0-argument 
Java UDF alone was executed on the driver side, and the result was propagated 
to executors, which might be more performant in some cases but caused 
inconsistency with a correctness issue in some cases.
 
   - The result of `java.lang.Math`'s `log`, `log1p`, `exp`, `expm1`, and `pow` 
may vary across platforms. In Spark 3.0, the result of the equivalent SQL 
functions (including related SQL functions like `LOG10`) return values 
consistent with `java.lang.StrictMath`. In virtually all cases this makes no 
difference in the return value, and the difference is very small, but may not 
exactly match `java.lang.Math` on x86 platforms in cases like, for example, 
`log(3.0)`, whose value varies betwee [...]
 
-  - Since Spark 3.0, `Cast` function processes string literals such as 
'Infinity', '+Infinity', '-Infinity', 'NaN', 'Inf', '+Inf', '-Inf' in case 
insensitive manner when casting the literals to `Double` or `Float` type to 
ensure greater compatibility with other database systems. This behaviour change 
is illustrated in the table below:
-    <table class="table">
-        <tr>
-          <th>
-            <b>Operation</b>
-          </th>
-          <th>
-            <b>Result prior to Spark 3.0</b>
-          </th>
-          <th>
-            <b>Result starting Spark 3.0</b>
-          </th>
-        </tr>
-        <tr>
-          <td>
-            CAST('infinity' AS DOUBLE)<br>
-            CAST('+infinity' AS DOUBLE)<br>
-            CAST('inf' AS DOUBLE)<br>
-            CAST('+inf' AS DOUBLE)<br>
-          </td>
-          <td>
-            NULL
-          </td>
-          <td>
-            Double.PositiveInfinity
-          </td>
-        </tr>
-        <tr>
-          <td>
-            CAST('-infinity' AS DOUBLE)<br>
-            CAST('-inf' AS DOUBLE)<br>
-          </td>
-          <td>
-            NULL
-          </td>
-          <td>
-            Double.NegativeInfinity
-          </td>
-        </tr>
-        <tr>
-          <td>
-            CAST('infinity' AS FLOAT)<br>
-            CAST('+infinity' AS FLOAT)<br>
-            CAST('inf' AS FLOAT)<br>
-            CAST('+inf' AS FLOAT)<br>
-          </td>
-          <td>
-            NULL
-          </td>
-          <td>
-            Float.PositiveInfinity
-          </td>
-        </tr>
-        <tr>
-          <td>
-            CAST('-infinity' AS FLOAT)<br>
-            CAST('-inf' AS FLOAT)<br>
-          </td>
-          <td>
-            NULL
-          </td>
-          <td>
-            Float.NegativeInfinity
-          </td>
-        </tr>
-        <tr>
-          <td>
-            CAST('nan' AS DOUBLE)
-          </td>
-          <td>
-            NULL
-          </td>
-          <td>
-            Double.NaN
-          </td>
-        </tr>
-        <tr>
-          <td>
-            CAST('nan' AS FLOAT)
-          </td>
-          <td>
-            NULL
-          </td>
-          <td>
-            Float.NaN
-          </td>
-        </tr>
-    </table>
+  - In Spark 3.0, the `Cast` function processes string literals such as 
'Infinity', '+Infinity', '-Infinity', 'NaN', 'Inf', '+Inf', '-Inf' in a 
case-insensitive manner when casting the literals to `Double` or `Float` type 
to ensure greater compatibility with other database systems. This behavior 
change is illustrated in the table below:
+
+    | Operation | Result before Spark 3.0 | Result in Spark 3.0 |
+    | --------- | ----------------------- | ------------------- |
+    | CAST('infinity' AS DOUBLE) | NULL | Double.PositiveInfinity |
+    | CAST('+infinity' AS DOUBLE) | NULL | Double.PositiveInfinity |
+    | CAST('inf' AS DOUBLE) | NULL | Double.PositiveInfinity |
+    | CAST('+inf' AS DOUBLE) | NULL | Double.PositiveInfinity |
+    | CAST('-infinity' AS DOUBLE) | NULL | Double.NegativeInfinity |
+    | CAST('-inf' AS DOUBLE) | NULL | Double.NegativeInfinity |
+    | CAST('infinity' AS FLOAT) | NULL | Float.PositiveInfinity |
+    | CAST('+infinity' AS FLOAT) | NULL | Float.PositiveInfinity |
+    | CAST('inf' AS FLOAT) | NULL | Float.PositiveInfinity |
+    | CAST('+inf' AS FLOAT) | NULL | Float.PositiveInfinity |
+    | CAST('-infinity' AS FLOAT) | NULL | Float.NegativeInfinity |
+    | CAST('-inf' AS FLOAT) | NULL | Float.NegativeInfinity |
+    | CAST('nan' AS DOUBLE) | NULL | Double.NaN |
+    | CAST('nan' AS FLOAT) | NULL | Float.NaN |
+
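
For illustration, the case-insensitive cast behavior above can be checked with
`spark.sql` (SparkSession `spark` assumed):

    # Spark 3.0: Infinity, -Infinity, NaN; Spark 2.4 returned NULL for all three.
    spark.sql(
        "SELECT CAST('Inf' AS DOUBLE), CAST('-inf' AS FLOAT), CAST('NaN' AS DOUBLE)"
    ).show()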
+  - In Spark 3.0, when casting interval values to string type, there is no 
"interval" prefix, for example, `1 days 2 hours`. In Spark version 2.4 and 
below, the string contains the "interval" prefix like `interval 1 days 2 hours`.
+
+  - In Spark 3.0, when casting string values to integral types (tinyint, 
smallint, int and bigint), datetime types (date, timestamp and interval) and 
boolean type, the leading and trailing whitespaces (<= ASCII 32) are trimmed 
before being converted to these type values, for example, `cast(' 1\t' as 
int)` results `1`, `cast(' 1\t' as boolean)` results `true`, 
`cast('2019-10-10\t' as date)` results the date value `2019-10-10`. In Spark 
version 2.4 and below, when casting string to integrals and [...]
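
A small sketch of the trimming behavior above (SparkSession `spark` assumed;
the embedded tab is written as a Python escape):

    # Spark 3.0 trims leading/trailing whitespace (<= ASCII 32) before the cast.
    spark.sql("SELECT CAST(' 1\t' AS INT), CAST('2019-10-10\t' AS DATE)").show()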
 
-  - Since Spark 3.0, when casting interval values to string type, there is no 
"interval" prefix, e.g. `1 days 2 hours`. In Spark version 2.4 and earlier, the 
string contains the "interval" prefix like `interval 1 days 2 hours`.
+### Query Engine
 
-  - Since Spark 3.0, when casting string value to integral types(tinyint, 
smallint, int and bigint), datetime types(date, timestamp and interval) and 
boolean type, the leading and trailing whitespaces (<= ASCII 32) will be 
trimmed before converted to these type values, e.g. `cast(' 1\t' as int)` 
results `1`, `cast(' 1\t' as boolean)` results `true`, `cast('2019-10-10\t as 
date)` results the date value `2019-10-10`. In Spark version 2.4 and earlier, 
while casting string to integrals and b [...]
+  - In Spark version 2.4 and below, SQL queries such as `FROM <table>` or 
`FROM <table> UNION ALL FROM <table>` are supported by accident. In Hive-style 
`FROM <table> SELECT <expr>`, the `SELECT` clause is not negligible. Neither 
Hive nor Presto supports this syntax. These queries are treated as invalid in 
Spark 3.0.
 
-### Query Engine
+  - In Spark 3.0, the interval literal syntax does not allow multiple from-to 
units anymore. For example, `SELECT INTERVAL '1-1' YEAR TO MONTH '2-2' YEAR TO 
MONTH` throws a parser exception.
 
-  - In Spark version 2.4 and earlier, SQL queries such as `FROM <table>` or 
`FROM <table> UNION ALL FROM <table>` are supported by accident. In hive-style 
`FROM <table> SELECT <expr>`, the `SELECT` clause is not negligible. Neither 
Hive nor Presto support this syntax. Therefore we will treat these queries as 
invalid since Spark 3.0.
+  - In Spark 3.0, numbers written in scientific notation (for example, `1E2`) 
are parsed as Double. In Spark version 2.4 and below, they're parsed as 
Decimal. To restore the behavior before Spark 3.0, you can set 
`spark.sql.legacy.exponentLiteralAsDecimal.enabled` to `true`.
 
-  - Since Spark 3.0, the interval literal syntax does not allow multiple 
from-to units anymore. For example, `SELECT INTERVAL '1-1' YEAR TO MONTH '2-2' 
YEAR TO MONTH'` throws parser exception.
+  - In Spark 3.0, day-time interval strings are converted to intervals with 
respect to the `from` and `to` bounds. If an input string does not match the 
pattern defined by the specified bounds, the `ParseException` exception is 
thrown. For example, `interval '2 10:20' hour to minute` raises the exception 
because the expected format is `[+|-]h[h]:[m]m`. In Spark version 2.4, the 
`from` bound was not taken into account, and the `to` bound was used to 
truncate the resulting interval. For inst [...]
 
-  - Since Spark 3.0, numbers written in scientific notation(e.g. `1E2`) would 
be parsed as Double. In Spark version 2.4 and earlier, they're parsed as 
Decimal. To restore the behavior before Spark 3.0, you can set 
`spark.sql.legacy.exponentLiteralAsDecimal.enabled` to `true`.
+  - In Spark 3.0, negative scale of decimal is not allowed by default, for 
example, the data type of a literal such as `1E10BD` is `DecimalType(11, 0)`. 
In Spark version 2.4 and below, it was `DecimalType(2, -9)`. To restore the 
behavior before Spark 3.0, you can set 
`spark.sql.legacy.allowNegativeScaleOfDecimal` to `true`.
 
-  - Since Spark 3.0, day-time interval strings are converted to intervals with 
respect to the `from` and `to` bounds. If an input string does not match to the 
pattern defined by specified bounds, the `ParseException` exception is thrown. 
For example, `interval '2 10:20' hour to minute` raises the exception because 
the expected format is `[+|-]h[h]:[m]m`. In Spark version 2.4, the `from` bound 
was not taken into account, and the `to` bound was used to truncate the 
resulted interval. For i [...]
-  
-  - Since Spark 3.0, negative scale of decimal is not allowed by default, e.g. 
data type of literal like `1E10BD` is `DecimalType(11, 0)`. In Spark version 
2.4 and earlier, it was `DecimalType(2, -9)`. To restore the behavior before 
Spark 3.0, you can set `spark.sql.legacy.allowNegativeScaleOfDecimal` to `true`.
+  - In Spark 3.0, the unary arithmetic operator plus (`+`) only accepts 
string, numeric and interval type values as inputs. Besides, `+` with an 
integral string representation is coerced to a double value, for example, 
`+'1'` returns `1.0`. In Spark version 2.4 and below, this operator is ignored. 
There is no type checking for it, thus, all type values with a `+` prefix are 
valid, for example, `+ array(1, 2)` is valid and results `[1, 2]`. Besides, 
there is no type coercion for it at all,  [...]
 
-  - Since Spark 3.0, the unary arithmetic operator plus(`+`) only accepts 
string, numeric and interval type values as inputs. Besides, `+` with a 
integral string representation will be coerced to double value, e.g. `+'1'` 
results `1.0`. In Spark version 2.4 and earlier, this operator is ignored. 
There is no type checking for it, thus, all type values with a `+` prefix are 
valid, e.g. `+ array(1, 2)` is valid and results `[1, 2]`. Besides, there is no 
type coercion for it at all, e.g. in  [...]
+  - In Spark 3.0, a Dataset query fails if it contains an ambiguous column 
reference that is caused by a self join. A typical example: `val df1 = ...; val 
df2 = df1.filter(...);`, then `df1.join(df2, df1("a") > df2("a"))` returns an 
empty result which is quite confusing. This is because Spark cannot resolve 
Dataset column references that point to tables being self joined, and 
`df1("a")` is exactly the same as `df2("a")` in Spark. To restore the behavior 
before Spark 3.0, you can set `spark.sql. [...]
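
A PySpark sketch of the ambiguous self-join case above, assuming a SparkSession
named `spark`:

    df1 = spark.range(5).toDF("a")
    df2 = df1.filter("a > 1")

    # Spark 3.0 rejects this as an ambiguous reference (df1["a"] and df2["a"]
    # point to the same underlying column); Spark 2.4 silently returned an
    # empty result.
    joined = df1.join(df2, df1["a"] > df2["a"])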
 
-  - Since Spark 3.0, Dataset query fails if it contains ambiguous column 
reference that is caused by self join. A typical example: `val df1 = ...; val 
df2 = df1.filter(...);`, then `df1.join(df2, df1("a") > df2("a"))` returns an 
empty result which is quite confusing. This is because Spark cannot resolve 
Dataset column references that point to tables being self joined, and 
`df1("a")` is exactly the same as `df2("a")` in Spark. To restore the behavior 
before Spark 3.0, you can set `spark.s [...]
+  - In Spark 3.0, `spark.sql.legacy.ctePrecedencePolicy` is introduced to 
control the behavior for name conflicts in nested WITH clauses. With the 
default value `EXCEPTION`, Spark throws an `AnalysisException`, forcing users 
to choose the specific substitution order they want. If set to `CORRECTED` 
(which is recommended), inner CTE definitions take precedence over outer 
definitions. For example, set the config to `false`, `WITH t AS (SELECT 1), t2 
AS (WITH t AS (SELECT 2) SELECT * FROM  [...]
 
-  - Since Spark 3.0, `spark.sql.legacy.ctePrecedencePolicy` is introduced to 
control the behavior for name conflicting in the nested WITH clause. By default 
value `EXCEPTION`, Spark throws an AnalysisException, it forces users to choose 
the specific substitution order they wanted. If set to `CORRECTED` (which is 
recommended), inner CTE definitions take precedence over outer definitions. For 
example, set the config to `false`, `WITH t AS (SELECT 1), t2 AS (WITH t AS 
(SELECT 2) SELECT * FR [...]
+  - In Spark 3.0, the configuration `spark.sql.crossJoin.enabled` becomes an 
internal configuration and is `true` by default, so by default Spark does not 
raise an exception on SQL with implicit cross joins.
 
-  - Since Spark 3.0, configuration `spark.sql.crossJoin.enabled` become 
internal configuration, and is true by default, so by default spark won't raise 
exception on sql with implicit cross join.
+  - In Spark version 2.4 and below, float/double -0.0 is semantically equal to 
0.0, but -0.0 and 0.0 are considered as different values when used in aggregate 
grouping keys, window partition keys, and join keys. In Spark 3.0, this bug is 
fixed. For example, `Seq(-0.0, 0.0).toDF("d").groupBy("d").count()` returns 
`[(0.0, 2)]` in Spark 3.0, and `[(0.0, 1), (-0.0, 1)]` in Spark 2.4 and below.
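
The PySpark equivalent of the Scala example above, as an illustrative sketch
(SparkSession `spark` assumed):

    df = spark.createDataFrame([(-0.0,), (0.0,)], ["d"])

    # Spark 3.0: a single group with count 2; Spark 2.4: two separate groups.
    df.groupBy("d").count().show()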
 
-  - In Spark version 2.4 and earlier, float/double -0.0 is semantically equal 
to 0.0, but -0.0 and 0.0 are considered as different values when used in 
aggregate grouping keys, window partition keys and join keys. Since Spark 3.0, 
this bug is fixed. For example, `Seq(-0.0, 0.0).toDF("d").groupBy("d").count()` 
returns `[(0.0, 2)]` in Spark 3.0, and `[(0.0, 1), (-0.0, 1)]` in Spark 2.4 and 
earlier.
+  - In Spark version 2.4 and below, invalid time zone ids are silently ignored 
and replaced by GMT time zone, for example, in the from_utc_timestamp function. 
In Spark 3.0, such time zone ids are rejected, and Spark throws 
`java.time.DateTimeException`.
 
-  - In Spark version 2.4 and earlier, invalid time zone ids are silently 
ignored and replaced by GMT time zone, for example, in the from_utc_timestamp 
function. Since Spark 3.0, such time zone ids are rejected, and Spark throws 
`java.time.DateTimeException`.
+  - In Spark 3.0, the Proleptic Gregorian calendar is used in parsing, 
formatting, and converting dates and timestamps as well as in extracting 
sub-components like years, days and so on. Spark 3.0 uses Java 8 API classes 
from the `java.time` packages that are based on [ISO 
chronology](https://docs.oracle.com/javase/8/docs/api/java/time/chrono/IsoChronology.html).
 In Spark version 2.4 and below, those operations are performed using the 
hybrid calendar ([Julian + Gregorian](https://docs.oracle [...]
 
-  - Since Spark 3.0, Proleptic Gregorian calendar is used in parsing, 
formatting, and converting dates and timestamps as well as in extracting 
sub-components like years, days and etc. Spark 3.0 uses Java 8 API classes from 
the java.time packages that based on ISO chronology 
(https://docs.oracle.com/javase/8/docs/api/java/time/chrono/IsoChronology.html).
 In Spark version 2.4 and earlier, those operations are performed by using the 
hybrid calendar (Julian + Gregorian, see https://docs.orac [...]
+    * Parsing/formatting of timestamp/date strings. This affects CSV/JSON 
datasources and the `unix_timestamp`, `date_format`, `to_unix_timestamp`, 
`from_unixtime`, `to_date`, `to_timestamp` functions when patterns specified by 
users are used for parsing and formatting. In Spark 3.0, we define our own 
pattern strings in `sql-ref-datetime-pattern.md`, which is implemented via 
`java.time.format.DateTimeFormatter` under the hood. The new implementation 
performs strict checking of its input [...]
 
-    - Parsing/formatting of timestamp/date strings. This effects on CSV/JSON 
datasources and on the `unix_timestamp`, `date_format`, `to_unix_timestamp`, 
`from_unixtime`, `to_date`, `to_timestamp` functions when patterns specified by 
users is used for parsing and formatting. Since Spark 3.0, we define our own 
pattern strings in `sql-ref-datetime-pattern.md`, which is implemented via 
`java.time.format.DateTimeFormatter` under the hood. New implementation 
performs strict checking of its in [...]
+    * The `weekofyear`, `weekday`, `dayofweek`, `date_trunc`, 
`from_utc_timestamp`, `to_utc_timestamp`, and `unix_timestamp` functions use 
the java.time API for calculating the week number of the year and the day 
number of the week, as well as for conversion from/to `TimestampType` values in 
the UTC time zone.
 
-    - The `weekofyear`, `weekday`, `dayofweek`, `date_trunc`, 
`from_utc_timestamp`, `to_utc_timestamp`, and `unix_timestamp` functions use 
java.time API for calculation week number of year, day number of week as well 
for conversion from/to TimestampType values in UTC time zone.
+    * The JDBC options `lowerBound` and `upperBound` are converted to 
TimestampType/DateType values in the same way as casting strings to 
TimestampType/DateType values. The conversion is based on Proleptic Gregorian 
calendar, and time zone defined by the SQL config `spark.sql.session.timeZone`. 
In Spark version 2.4 and below, the conversion is based on the hybrid calendar 
(Julian + Gregorian) and on default system time zone.
 
-    - the JDBC options `lowerBound` and `upperBound` are converted to 
TimestampType/DateType values in the same way as casting strings to 
TimestampType/DateType values. The conversion is based on Proleptic Gregorian 
calendar, and time zone defined by the SQL config `spark.sql.session.timeZone`. 
In Spark version 2.4 and earlier, the conversion is based on the hybrid 
calendar (Julian + Gregorian) and on default system time zone.
+    * Formatting `TIMESTAMP` and `DATE` literals.
 
-    - Formatting of `TIMESTAMP` and `DATE` literals.
+    * Creating typed `TIMESTAMP` and `DATE` literals from strings. In Spark 
3.0, string conversion to typed `TIMESTAMP`/`DATE` literals is performed via 
casting to `TIMESTAMP`/`DATE` values. For example, `TIMESTAMP '2019-12-23 
12:59:30'` is semantically equal to `CAST('2019-12-23 12:59:30' AS TIMESTAMP)`. 
When the input string does not contain information about time zone, the time 
zone from the SQL config `spark.sql.session.timeZone` is used. In Spark version 
2.4 and below,  [...]
 
-    - Creating of typed `TIMESTAMP` and `DATE` literals from strings. Since 
Spark 3.0, string conversion to typed `TIMESTAMP`/`DATE` literals is performed 
via casting to `TIMESTAMP`/`DATE` values. For example, `TIMESTAMP '2019-12-23 
12:59:30'` is semantically equal to `CAST('2019-12-23 12:59:30' AS TIMESTAMP)`. 
When the input string does not contain information about time zone, the time 
zone from the SQL config `spark.sql.session.timeZone` is used in that case. In 
Spark version 2.4 and e [...]
+  - In Spark 3.0, `TIMESTAMP` literals are converted to strings using the SQL 
config `spark.sql.session.timeZone`. In Spark version 2.4 and below, the 
conversion uses the default time zone of the Java virtual machine.
 
-  - Since Spark 3.0, `TIMESTAMP` literals are converted to strings using the 
SQL config `spark.sql.session.timeZone`. In Spark version 2.4 and earlier, the 
conversion uses the default time zone of the Java virtual machine.
+  - In Spark 3.0, Spark casts `String` to `Date/Timestamp` in binary 
comparisons with dates/timestamps. The previous behavior of casting 
`Date/Timestamp` to `String` can be restored by setting 
`spark.sql.legacy.typeCoercion.datetimeToString.enabled` to `true`.
 
-  - Since Spark 3.0, Spark will cast `String` to `Date/TimeStamp` in binary 
comparisons with dates/timestamps. The previous behaviour of casting 
`Date/Timestamp` to `String` can be restored by setting 
`spark.sql.legacy.typeCoercion.datetimeToString.enabled` to `true`.
+  - In Spark 3.0, special values are supported in conversion from strings to 
dates and timestamps. Those values are simply notational shorthands that are 
converted to ordinary date or timestamp values when read. The following string 
values are supported for dates:
+    * `epoch [zoneId]` - 1970-01-01
+    * `today [zoneId]` - the current date in the time zone specified by 
`spark.sql.session.timeZone`
+    * `yesterday [zoneId]` - the current date - 1
+    * `tomorrow [zoneId]` - the current date + 1
+    * `now` - the date of running the current query. It has the same notion as 
today
 
-  - Since Spark 3.0, special values are supported in conversion from strings 
to dates and timestamps. Those values are simply notational shorthands that 
will be converted to ordinary date or timestamp values when read. The following 
string values are supported for dates:
-    - `epoch [zoneId]` - 1970-01-01
-    - `today [zoneId]` - the current date in the time zone specified by 
`spark.sql.session.timeZone`
-    - `yesterday [zoneId]` - the current date - 1
-    - `tomorrow [zoneId]` - the current date + 1
-    - `now` - the date of running the current query. It has the same notion as 
today
-  For example `SELECT date 'tomorrow' - date 'yesterday';` should output `2`. 
Here are special timestamp values:
-    - `epoch [zoneId]` - 1970-01-01 00:00:00+00 (Unix system time zero)
-    - `today [zoneId]` - midnight today
-    - `yesterday [zoneId]` - midnight yesterday
-    - `tomorrow [zoneId]` - midnight tomorrow
-    - `now` - current query start time
-  For example `SELECT timestamp 'tomorrow';`.
+    For example `SELECT date 'tomorrow' - date 'yesterday';` should output 
`2`. Here are special timestamp values:
+    * `epoch [zoneId]` - 1970-01-01 00:00:00+00 (Unix system time zero)
+    * `today [zoneId]` - midnight today
+    * `yesterday [zoneId]` - midnight yesterday
+    * `tomorrow [zoneId]` - midnight tomorrow
+    * `now` - current query start time
+
+    For example `SELECT timestamp 'tomorrow';`.
 
   - Since Spark 3.0, when using `EXTRACT` expression to extract the second 
field from date/timestamp values, the result will be a `DecimalType(8, 6)` 
value with 2 digits for second part, and 6 digits for the fractional part with 
microsecond precision. e.g. `extract(second from to_timestamp('2019-09-20 
10:10:10.1'))` results `10.100000`.  In Spark version 2.4 and earlier, it 
returns an `IntegerType` value and the result for the former example is `10`.
 
 ### Data Sources
 
-  - In Spark version 2.4 and earlier, when reading a Hive Serde table with 
Spark native data sources(parquet/orc), Spark will infer the actual file schema 
and update the table schema in metastore. Since Spark 3.0, Spark doesn't infer 
the schema anymore. This should not cause any problems to end users, but if it 
does, please set `spark.sql.hive.caseSensitiveInferenceMode` to 
`INFER_AND_SAVE`.
+  - In Spark version 2.4 and below, when reading a Hive SerDe table with Spark 
native data sources (parquet/orc), Spark infers the actual file schema and 
updates the table schema in the metastore. In Spark 3.0, Spark doesn't infer 
the schema anymore. This should not cause any problems to end users, but if it 
does, set `spark.sql.hive.caseSensitiveInferenceMode` to `INFER_AND_SAVE`.
 
-  - In Spark version 2.4 and earlier, partition column value is converted as 
null if it can't be casted to corresponding user provided schema. Since 3.0, 
partition column value is validated with user provided schema. An exception is 
thrown if the validation fails. You can disable such validation by setting 
`spark.sql.sources.validatePartitionColumns` to `false`.
+  - In Spark version 2.4 and below, the partition column value is converted to 
null if it can't be cast to the corresponding user-provided schema. In Spark 
3.0, the partition column value is validated with the user-provided schema. An 
exception is thrown if the validation fails. You can disable such validation by 
setting `spark.sql.sources.validatePartitionColumns` to `false`.
 
-  - Since Spark 3.0, if files or subdirectories disappear during recursive 
directory listing (i.e. they appear in an intermediate listing but then cannot 
be read or listed during later phases of the recursive directory listing, due 
to either concurrent file deletions or object store consistency issues) then 
the listing will fail with an exception unless 
`spark.sql.files.ignoreMissingFiles` is `true` (default `false`). In previous 
versions, these missing files or subdirectories would be i [...]
+  - In Spark 3.0, if files or subdirectories disappear during recursive 
directory listing (that is, they appear in an intermediate listing but then 
cannot be read or listed during later phases of the recursive directory 
listing, due to either concurrent file deletions or object store consistency 
issues) then the listing will fail with an exception unless 
`spark.sql.files.ignoreMissingFiles` is `true` (default `false`). In previous 
versions, these missing files or subdirectories would be  [...]
 
-  - In Spark version 2.4 and earlier, the parser of JSON data source treats 
empty strings as null for some data types such as `IntegerType`. For 
`FloatType`, `DoubleType`, `DateType` and `TimestampType`, it fails on empty 
strings and throws exceptions. Since Spark 3.0, we disallow empty strings and 
will throw exceptions for data types except for `StringType` and `BinaryType`. 
The previous behaviour of allowing empty string can be restored by setting 
`spark.sql.legacy.json.allowEmptyStrin [...]
+  - In Spark version 2.4 and below, the JSON data source parser treats empty 
strings as null for some data types such as `IntegerType`. For `FloatType`, 
`DoubleType`, `DateType` and `TimestampType`, it fails on empty strings and 
throws exceptions. Spark 3.0 disallows empty strings and throws an exception 
for data types except for `StringType` and `BinaryType`. The previous behavior 
of allowing an empty string can be restored by setting 
`spark.sql.legacy.json.allowEmptyString.enabl [...]
 
-  - In Spark version 2.4 and earlier, JSON datasource and JSON functions like 
`from_json` convert a bad JSON record to a row with all `null`s in the 
PERMISSIVE mode when specified schema is `StructType`. Since Spark 3.0, the 
returned row can contain non-`null` fields if some of JSON column values were 
parsed and converted to desired types successfully.
+  - In Spark version 2.4 and below, JSON datasource and JSON functions like 
`from_json` convert a bad JSON record to a row with all `null`s in the 
PERMISSIVE mode when the specified schema is `StructType`. In Spark 3.0, the 
returned row can contain non-`null` fields if some of the JSON column values 
were parsed and converted to the desired types successfully.
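+
+    A possible illustration (the value for `b` cannot be converted to `INT`,
+    so only that field fails):
+
+    ```sql
+    SELECT from_json('{"a": 1, "b": "x"}', 'a INT, b INT');
+    -- Spark 2.4 and below: {"a":null, "b":null}
+    -- Spark 3.0:           {"a":1,    "b":null}
+    ```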
 
-  - Since Spark 3.0, JSON datasource and JSON function `schema_of_json` infer 
TimestampType from string values if they match to the pattern defined by the 
JSON option `timestampFormat`. Set JSON option `inferTimestamp` to `false` to 
disable such type inferring.
+  - In Spark 3.0, JSON datasource and JSON function `schema_of_json` infer 
TimestampType from string values if they match the pattern defined by the JSON 
option `timestampFormat`. Set the JSON option `inferTimestamp` to `false` to 
disable such type inference.
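+
+    For example (assuming the two-argument form of `schema_of_json` with an
+    options map is available in your build, and a value that matches the
+    default `timestampFormat`):
+
+    ```sql
+    SELECT schema_of_json('{"ts": "2020-04-08T12:00:00.000Z"}');
+    -- may infer: struct<ts:timestamp>
+    SELECT schema_of_json('{"ts": "2020-04-08T12:00:00.000Z"}',
+                          map('inferTimestamp', 'false'));
+    -- infers:    struct<ts:string>
+    ```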
 
-  - In Spark version 2.4 and earlier, CSV datasource converts a malformed CSV 
string to a row with all `null`s in the PERMISSIVE mode. Since Spark 3.0, the 
returned row can contain non-`null` fields if some of CSV column values were 
parsed and converted to desired types successfully.
+  - In Spark version 2.4 and below, CSV datasource converts a malformed CSV 
string to a row with all `null`s in the PERMISSIVE mode. In Spark 3.0, the 
returned row can contain non-`null` fields if some of the CSV column values 
were parsed and converted to the desired types successfully.
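+
+    An analogous sketch with `from_csv` (available since Spark 3.0; the second
+    column cannot be converted to `INT`):
+
+    ```sql
+    SELECT from_csv('1,notAnInt', 'a INT, b INT');
+    -- Spark 3.0: {"a":1, "b":null} rather than an all-null row
+    ```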
 
-  - Since Spark 3.0, parquet logical type `TIMESTAMP_MICROS` is used by 
default while saving `TIMESTAMP` columns. In Spark version 2.4 and earlier, 
`TIMESTAMP` columns are saved as `INT96` in parquet files. Note that, some SQL 
systems such as Hive 1.x and Impala 2.x can only read `INT96` timestamps, you 
can set `spark.sql.parquet.outputTimestampType` as `INT96` to restore the 
previous behavior and keep interoperability.
+  - In Spark 3.0, the Parquet logical type `TIMESTAMP_MICROS` is used by 
default when saving `TIMESTAMP` columns. In Spark version 2.4 and below, 
`TIMESTAMP` columns are saved as `INT96` in Parquet files. Note that some SQL 
systems, such as Hive 1.x and Impala 2.x, can only read `INT96` timestamps; you 
can set `spark.sql.parquet.outputTimestampType` to `INT96` to restore the 
previous behavior and keep interoperability.
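+
+    For example, before writing tables that must stay readable by such
+    systems:
+
+    ```sql
+    SET spark.sql.parquet.outputTimestampType=INT96;
+    ```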
 
-  - Since Spark 3.0, when Avro files are written with user provided schema, 
the fields will be matched by field names between catalyst schema and avro 
schema instead of positions.
+  - In Spark 3.0, when Avro files are written with a user-provided schema, the 
fields are matched by name between the catalyst schema and the Avro schema 
instead of by position.
 
-  - Since Spark 3.0, when Avro files are written with user provided 
non-nullable schema, even the catalyst schema is nullable, Spark is still able 
to write the files. However, Spark will throw runtime NPE if any of the records 
contains null.
+  - In Spark 3.0, when Avro files are written with a user-provided 
non-nullable schema, Spark is still able to write the files even if the 
catalyst schema is nullable. However, Spark throws a runtime 
`NullPointerException` if any of the records contains null.
 
 ### Others
 
-  - In Spark version 2.4, when a spark session is created via 
`cloneSession()`, the newly created spark session inherits its configuration 
from its parent `SparkContext` even though the same configuration may exist 
with a different value in its parent spark session. Since Spark 3.0, the 
configurations of a parent `SparkSession` have a higher precedence over the 
parent `SparkContext`. The old behavior can be restored by setting 
`spark.sql.legacy.sessionInitWithConfigDefaults` to `true`.
+  - In Spark version 2.4, when a Spark session is created via 
`cloneSession()`, the newly created Spark session inherits its configuration 
from its parent `SparkContext` even though the same configuration may exist 
with a different value in its parent Spark session. In Spark 3.0, the 
configurations of a parent `SparkSession` take precedence over those of the 
parent `SparkContext`. You can restore the old behavior by setting 
`spark.sql.legacy.sessionInitWithConfigDefaults` to `true`.
 
-  - Since Spark 3.0, if `hive.default.fileformat` is not found in `Spark SQL 
configuration` then it will fallback to hive-site.xml present in the `Hadoop 
configuration` of `SparkContext`.
+  - In Spark 3.0, if `hive.default.fileformat` is not found in the Spark SQL 
configuration, Spark falls back to the `hive-site.xml` file present in the 
Hadoop configuration of `SparkContext`.
 
-  - Since Spark 3.0, we pad decimal numbers with trailing zeros to the scale 
of the column for `spark-sql` interface, for example:
-    <table class="table">
-        <tr>
-          <th>
-            <b>Query</b>
-          </th>
-          <th>
-            <b>Spark 2.4 or Prior</b>
-          </th>
-          <th>
-            <b>Spark 3.0</b>
-          </th>
-        </tr>
-        <tr>
-          <td>
-            <code>SELECT CAST(1 AS decimal(38, 18));</code>
-          </td>
-          <td>
-            <code>1</code>
-          </td>
-          <td>
-            <code>1.000000000000000000</code>
-          </td>
-        </tr>
-    </table>
+  - In Spark 3.0, we pad decimal numbers with trailing zeros to the scale of 
the column for the `spark-sql` interface, for example:
+
+    | Query | Spark 2.4 | Spark 3.0 |
+    | ----- | --------- | --------- |
+    |`SELECT CAST(1 AS decimal(38, 18));` | 1 | 1.000000000000000000 |
+
+  - In Spark 3.0, we upgraded the built-in Hive from 1.2 to 2.3, which brings 
the following impacts:
+
+    * You may need to set `spark.sql.hive.metastore.version` and 
`spark.sql.hive.metastore.jars` according to the version of the Hive metastore 
you want to connect to. For example: set `spark.sql.hive.metastore.version` to 
`1.2.1` and `spark.sql.hive.metastore.jars` to `maven` if your Hive metastore 
version is 1.2.1.
 
-  - Since Spark 3.0, we upgraded the built-in Hive from 1.2 to 2.3 and it 
brings following impacts:
-  
-    - You may need to set `spark.sql.hive.metastore.version` and 
`spark.sql.hive.metastore.jars` according to the version of the Hive metastore 
you want to connect to.
-  For example: set `spark.sql.hive.metastore.version` to `1.2.1` and 
`spark.sql.hive.metastore.jars` to `maven` if your Hive metastore version is 
1.2.1.
-  
-    - You need to migrate your custom SerDes to Hive 2.3 or build your own 
Spark with `hive-1.2` profile. See HIVE-15167 for more details.
+    * You need to migrate your custom SerDes to Hive 2.3 or build your own 
Spark with the `hive-1.2` profile. See 
[HIVE-15167](https://issues.apache.org/jira/browse/HIVE-15167) for more details.
 
-    - The decimal string representation can be different between Hive 1.2 and 
Hive 2.3 when using `TRANSFORM` operator in SQL for script transformation, 
which depends on hive's behavior. In Hive 1.2, the string representation omits 
trailing zeroes. But in Hive 2.3, it is always padded to 18 digits with 
trailing zeroes if necessary.
+    * The decimal string representation can be different between Hive 1.2 and 
Hive 2.3 when using the `TRANSFORM` operator in SQL for script transformation, 
which depends on Hive's behavior. In Hive 1.2, the string representation omits 
trailing zeroes. But in Hive 2.3, it is always padded to 18 digits with 
trailing zeroes if necessary.
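+
+      A rough illustration, assuming a Hive-enabled session (script transform
+      requires Hive support in Spark 3.0):
+
+      ```sql
+      SELECT TRANSFORM(v) USING 'cat' AS (v STRING)
+      FROM (SELECT CAST(1 AS DECIMAL(38, 18)) AS v) t;
+      -- built-in Hive 1.2: 1
+      -- built-in Hive 2.3: 1.000000000000000000
+      ```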
 
 ## Upgrading from Spark SQL 2.4.4 to 2.4.5
 
diff --git a/docs/ss-migration-guide.md b/docs/ss-migration-guide.md
index db8fdff..963ef07 100644
--- a/docs/ss-migration-guide.md
+++ b/docs/ss-migration-guide.md
@@ -30,4 +30,4 @@ Please refer [Migration Guide: SQL, Datasets and 
DataFrame](sql-migration-guide.
 
 - In Spark 3.0, Structured Streaming forces the source schema into nullable 
when file-based datasources such as text, json, csv, parquet and orc are used 
via `spark.readStream(...)`. Previously, it respected the nullability in source 
schema; however, it caused issues tricky to debug with NPE. To restore the 
previous behavior, set `spark.sql.streaming.fileSource.schema.forceNullable` to 
`false`.
 
-- Spark 3.0 fixes the correctness issue on Stream-stream outer join, which 
changes the schema of state. (SPARK-26154 for more details) Spark 3.0 will fail 
the query if you start your query from checkpoint constructed from Spark 2.x 
which uses stream-stream outer join. Please discard the checkpoint and replay 
previous inputs to recalculate outputs.
\ No newline at end of file
+- Spark 3.0 fixes the correctness issue on stream-stream outer join, which 
changes the schema of state (see 
[SPARK-26154](https://issues.apache.org/jira/browse/SPARK-26154) for more 
details). If you start your query from a checkpoint constructed from Spark 2.x 
that uses stream-stream outer join, Spark 3.0 fails the query. To recalculate 
outputs, discard the checkpoint and replay previous inputs.

