wangyum commented on code in PR #608:
URL: https://github.com/apache/spark-website/pull/608#discussion_r2108081027


##########
releases/_posts/2025-05-23-spark-release-4-0-0.md:
##########
@@ -0,0 +1,857 @@
+---
+layout: post
+title: Spark Release 4.0.0
+categories: []
+tags: []
+status: publish
+type: post
+published: true
+meta:
+  _edit_last: '4'
+  _wpas_done_all: '1'
+---
+
+Apache Spark 4.0.0 marks a significant milestone as the inaugural release in 
the 4.x series. It is the product of tremendous collaboration across the vibrant 
open-source community, resolving over 5100 tickets with contributions from more 
than 390 individuals.
+
+Spark Connect continues its rapid advancement, delivering substantial 
improvements:
+- A new lightweight Python client 
([pyspark-client](https://pypi.org/project/pyspark-client)) at just 1.5 MB.
+- An additional release tarball with Spark Connect enabled by default.
+- Full API compatibility for the Java client.
+- A new `spark.api.mode` configuration to easily turn Spark Connect on or off 
for your applications.
+- Greatly expanded API coverage.
+- ML on Spark Connect.
+- A new client implementation for 
[Swift](https://github.com/apache/spark-connect-swift).
+
+Spark SQL is significantly enriched with powerful new features designed to 
boost expressiveness and versatility for SQL workloads, such as VARIANT data 
type support, SQL user-defined functions, session variables, pipe syntax, and 
string collation.
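
To make these concrete, here is a brief sketch of how the new constructs 
compose; the table, column, and function names below are invented for 
illustration and do not come from the release notes:

```sql
-- SQL user-defined function (SPARK-46057)
CREATE FUNCTION to_fahrenheit(c DOUBLE) RETURNS DOUBLE
  RETURN c * 9 / 5 + 32;

-- Session variable (SPARK-42849)
DECLARE VARIABLE threshold DOUBLE DEFAULT 30.0;
SET VAR threshold = 25.0;

-- SQL pipe syntax (SPARK-49555): start FROM a table, then chain operators
FROM readings
  |> WHERE temp_c > threshold
  |> SELECT city, to_fahrenheit(temp_c) AS temp_f;
```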
+
+PySpark sees continuous dedication to both its functional breadth and the 
overall developer experience, bringing a native plotting API, a new Python Data 
Source API, support for Python UDTFs, and unified profiling for PySpark UDFs, 
alongside numerous other enhancements.
+
+Structured Streaming evolves with key additions that provide greater control 
and ease of debugging, notably the introduction of the Arbitrary State API v2 
for more flexible state management and the State Data Source for easier 
debugging.
+
+To download Apache Spark 4.0.0, please visit the 
[downloads](https://spark.apache.org/downloads.html) page. For [detailed 
changes](https://issues.apache.org/jira/projects/SPARK/versions/12353359), you 
can consult JIRA. We have also curated a list of high-level changes here, 
grouped by major modules.
+
+
+* This will become a table of contents (this text will be scraped).
+{:toc}
+
+
+### Core and Spark SQL Highlights
+
+- [[SPARK-45314]](https://issues.apache.org/jira/browse/SPARK-45314) Drop 
Scala 2.12 and make Scala 2.13 the default
+- [[SPARK-45315]](https://issues.apache.org/jira/browse/SPARK-45315) Drop JDK 
8/11 and make JDK 17 the default
+- [[SPARK-45923]](https://issues.apache.org/jira/browse/SPARK-45923) Spark 
Kubernetes Operator
+- [[SPARK-45869]](https://issues.apache.org/jira/browse/SPARK-45869) Revisit 
and improve Spark Standalone Cluster
+- [[SPARK-42849]](https://issues.apache.org/jira/browse/SPARK-42849) Session 
Variables
+- [[SPARK-44444]](https://issues.apache.org/jira/browse/SPARK-44444) Use ANSI 
SQL mode by default
+- [[SPARK-46057]](https://issues.apache.org/jira/browse/SPARK-46057) Support 
SQL user-defined functions
+- [[SPARK-45827]](https://issues.apache.org/jira/browse/SPARK-45827) Add 
VARIANT data type
+- [[SPARK-49555]](https://issues.apache.org/jira/browse/SPARK-49555) SQL Pipe 
syntax
+- [[SPARK-46830]](https://issues.apache.org/jira/browse/SPARK-46830) String 
Collation support
+- [[SPARK-44265]](https://issues.apache.org/jira/browse/SPARK-44265) Built-in 
XML data source support
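
A few of these highlights can be sketched directly in SQL; the examples below 
are illustrative only (table and column names are invented):

```sql
-- ANSI SQL mode is now on by default (SPARK-44444):
-- invalid casts raise errors instead of silently returning NULL
SELECT CAST('abc' AS INT);      -- raises an error under ANSI mode
SELECT TRY_CAST('abc' AS INT);  -- NULL, the legacy behavior

-- VARIANT (SPARK-45827) stores semi-structured data without a fixed schema
SELECT variant_get(parse_json('{"a": {"b": 42}}'), '$.a.b', 'int');

-- String collation (SPARK-46830): case-insensitive comparisons per column
CREATE TABLE users (name STRING COLLATE UTF8_LCASE);
```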
+
+
+### Spark Core
+
+- [[SPARK-49524]](https://issues.apache.org/jira/browse/SPARK-49524) Improve 
K8s support
+- [[SPARK-47240]](https://issues.apache.org/jira/browse/SPARK-47240) SPIP: 
Structured Logging Framework for Apache Spark
+- [[SPARK-44893]](https://issues.apache.org/jira/browse/SPARK-44893) 
`ThreadInfo` improvements for monitoring APIs
+- [[SPARK-46861]](https://issues.apache.org/jira/browse/SPARK-46861) Avoid 
Deadlock in DAGScheduler
+- [[SPARK-47764]](https://issues.apache.org/jira/browse/SPARK-47764) Cleanup 
shuffle dependencies based on `ShuffleCleanupMode`
+- [[SPARK-49459]](https://issues.apache.org/jira/browse/SPARK-49459) Support 
CRC32C for Shuffle Checksum
+- [[SPARK-46383]](https://issues.apache.org/jira/browse/SPARK-46383) Reduce 
Driver Heap Usage by shortening `TaskInfo.accumulables()` lifespan
+- [[SPARK-45527]](https://issues.apache.org/jira/browse/SPARK-45527) Use 
fraction-based resource calculation
+- [[SPARK-47172]](https://issues.apache.org/jira/browse/SPARK-47172) Add 
AES-GCM as an optional AES cipher mode for RPC encryption
+- [[SPARK-47448]](https://issues.apache.org/jira/browse/SPARK-47448) Enable 
`spark.shuffle.service.removeShuffle` by default
+- [[SPARK-47674]](https://issues.apache.org/jira/browse/SPARK-47674) Enable 
`spark.metrics.appStatusSource.enabled` by default
+- [[SPARK-48063]](https://issues.apache.org/jira/browse/SPARK-48063) Enable 
`spark.stage.ignoreDecommissionFetchFailure` by default
+- [[SPARK-48268]](https://issues.apache.org/jira/browse/SPARK-48268) Add 
`spark.checkpoint.dir` config
+- [[SPARK-48292]](https://issues.apache.org/jira/browse/SPARK-48292) Revert 
SPARK-39195 (OutputCommitCoordinator) to fix duplication issues
+- [[SPARK-48518]](https://issues.apache.org/jira/browse/SPARK-48518) Make LZF 
compression run in parallel
+- [[SPARK-46132]](https://issues.apache.org/jira/browse/SPARK-46132) Support 
key password for JKS keys for RPC SSL
+- [[SPARK-46456]](https://issues.apache.org/jira/browse/SPARK-46456) Add 
`spark.ui.jettyStopTimeout` to set Jetty server stop timeout
+- [[SPARK-46256]](https://issues.apache.org/jira/browse/SPARK-46256) Parallel 
Compression Support for ZSTD
+- [[SPARK-45544]](https://issues.apache.org/jira/browse/SPARK-45544) Integrate 
SSL support into `TransportContext`
+- [[SPARK-45351]](https://issues.apache.org/jira/browse/SPARK-45351) Change 
`spark.shuffle.service.db.backend` default value to `ROCKSDB`
+- [[SPARK-44741]](https://issues.apache.org/jira/browse/SPARK-44741) Support 
regex-based `MetricFilter` in `StatsdSink`
+- [[SPARK-43987]](https://issues.apache.org/jira/browse/SPARK-43987) Separate 
`finalizeShuffleMerge` Processing to Dedicated Thread Pools
+- [[SPARK-45439]](https://issues.apache.org/jira/browse/SPARK-45439) Reduce 
memory usage of `LiveStageMetrics.accumIdsToMetricType`
+
+
+### Spark SQL
+
+#### Features
+
+- [[SPARK-50541]](https://issues.apache.org/jira/browse/SPARK-50541) Describe 
Table As JSON
+- [[SPARK-48031]](https://issues.apache.org/jira/browse/SPARK-48031) Support 
view schema evolution
+- [[SPARK-50883]](https://issues.apache.org/jira/browse/SPARK-50883) Support 
altering multiple columns in the same command
+- [[SPARK-47627]](https://issues.apache.org/jira/browse/SPARK-47627) Add `SQL 
MERGE` syntax to enable schema evolution
+- [[SPARK-47430]](https://issues.apache.org/jira/browse/SPARK-47430) Support 
`GROUP BY` for `MapType`
+- [[SPARK-49093]](https://issues.apache.org/jira/browse/SPARK-49093) `GROUP 
BY` with MapType nested inside complex type
+- [[SPARK-49098]](https://issues.apache.org/jira/browse/SPARK-49098) Add write 
options for `INSERT`
+- [[SPARK-49451]](https://issues.apache.org/jira/browse/SPARK-49451) Allow 
duplicate keys in `parse_json`
+- [[SPARK-46536]](https://issues.apache.org/jira/browse/SPARK-46536) Support 
`GROUP BY calendar_interval_type`
+- [[SPARK-46908]](https://issues.apache.org/jira/browse/SPARK-46908) Support 
star clause in `WHERE` clause
+- [[SPARK-36680]](https://issues.apache.org/jira/browse/SPARK-36680) Support 
dynamic table options via `WITH OPTIONS` syntax
+- [[SPARK-35553]](https://issues.apache.org/jira/browse/SPARK-35553) Improve 
correlated subqueries
+- [[SPARK-47492]](https://issues.apache.org/jira/browse/SPARK-47492) Widen 
whitespace rules in lexer to allow Unicode
+- [[SPARK-46246]](https://issues.apache.org/jira/browse/SPARK-46246) `EXECUTE 
IMMEDIATE` SQL support
+- [[SPARK-46207]](https://issues.apache.org/jira/browse/SPARK-46207) Support 
`MergeInto` in DataFrameWriterV2
+- [[SPARK-50129]](https://issues.apache.org/jira/browse/SPARK-50129) Add 
DataFrame APIs for subqueries
+- [[SPARK-50075]](https://issues.apache.org/jira/browse/SPARK-50075) DataFrame 
APIs for table-valued functions
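
As an illustrative sketch of the new `EXECUTE IMMEDIATE` support, combined here 
with session variables (all names below are hypothetical):

```sql
-- Build and run a parameterized statement dynamically (SPARK-46246)
DECLARE VARIABLE stmt STRING
  DEFAULT 'SELECT max(price) FROM orders WHERE region = ?';
DECLARE VARIABLE max_price DOUBLE;
EXECUTE IMMEDIATE stmt INTO max_price USING 'EMEA';
SELECT max_price;
```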
+
+
+#### Functions
+
+- [[SPARK-52016]](https://issues.apache.org/jira/browse/SPARK-52016) New 
built-in functions in Spark 4.0
+- [[SPARK-44001]](https://issues.apache.org/jira/browse/SPARK-44001) Add 
option to allow unwrapping protobuf well-known wrapper types
+- [[SPARK-43427]](https://issues.apache.org/jira/browse/SPARK-43427) Spark 
protobuf: allow upcasting unsigned integer types
+- [[SPARK-44983]](https://issues.apache.org/jira/browse/SPARK-44983) Convert 
`binary` to `string` by `to_char` for the formats: hex, base64, utf-8
+- [[SPARK-44868]](https://issues.apache.org/jira/browse/SPARK-44868) Convert 
`datetime` to `string` by `to_char`/`to_varchar`
+- [[SPARK-45796]](https://issues.apache.org/jira/browse/SPARK-45796) Support 
`MODE() WITHIN GROUP (ORDER BY col)`
+- [[SPARK-48658]](https://issues.apache.org/jira/browse/SPARK-48658) 
Encode/Decode functions report coding errors instead of mojibake
+- [[SPARK-45034]](https://issues.apache.org/jira/browse/SPARK-45034) Support 
deterministic mode function
+- [[SPARK-44778]](https://issues.apache.org/jira/browse/SPARK-44778) Add the 
alias `TIMEDIFF` for `TIMESTAMPDIFF`
+- [[SPARK-47497]](https://issues.apache.org/jira/browse/SPARK-47497) Make 
`to_csv` support arrays/maps/binary as pretty strings
+- [[SPARK-44840]](https://issues.apache.org/jira/browse/SPARK-44840) Make 
`array_insert()` 1-based for negative indexes
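
A couple of the function additions, sketched with hypothetical values and 
table names:

```sql
-- Format datetime values as strings (SPARK-44868)
SELECT to_char(DATE'2025-05-23', 'yyyy-MM-dd');   -- 2025-05-23

-- Decode binary values via the hex, base64, or utf-8 formats (SPARK-44983)
SELECT to_char(X'486920537061726B', 'utf-8');     -- Hi Spark

-- Inverse-distribution syntax for mode (SPARK-45796)
SELECT mode() WITHIN GROUP (ORDER BY score) FROM exams;
```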
+
+#### Query optimization
+
+- [[SPARK-46946]](https://issues.apache.org/jira/browse/SPARK-46946) 
Supporting broadcast of multiple filtering keys in `DynamicPruning`
+- [[SPARK-48445]](https://issues.apache.org/jira/browse/SPARK-48445) Don’t 
inline UDFs with expensive children
+- [[SPARK-41413]](https://issues.apache.org/jira/browse/SPARK-41413) Avoid 
shuffle in Storage-Partitioned Join when partition keys mismatch, but 
expressions are compatible

Review Comment:
   This optimization was introduced in version 3.4, not 4.0?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

