This is an automated email from the ASF dual-hosted git repository.
dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/master by this push:
     new 9c9af608ac0f [SPARK-54032][CORE] Prefer to use native Netty transports by default
9c9af608ac0f is described below
commit 9c9af608ac0fd4bb408eccb63ad3739894bf466f
Author: Dongjoon Hyun <[email protected]>
AuthorDate: Sun Oct 26 19:42:01 2025 -0700
[SPARK-54032][CORE] Prefer to use native Netty transports by default
### What changes were proposed in this pull request?
This PR aims to prefer native Netty transports by default for Apache Spark 4.1.0.
### Why are the changes needed?
To help users configure Netty transport libraries easily.
- https://netty.io/wiki/native-transports.html
> Netty provides the following platform specific JNI transports:
> - Linux (since 4.0.16)
> - MacOS/BSD (since 4.1.11)
>
> These JNI transports add features specific to a particular platform, generate less garbage, and generally improve performance when compared to the NIO based transport.
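
For illustration only (not part of this commit): a minimal Java sketch of how such a "prefer native, fall back to NIO" selection can be expressed against Netty's public API. The class name `NativeTransportProbe` is hypothetical; Spark's actual selection logic lives in its transport layer.

```java
import io.netty.channel.EventLoopGroup;
import io.netty.channel.epoll.Epoll;
import io.netty.channel.epoll.EpollEventLoopGroup;
import io.netty.channel.kqueue.KQueue;
import io.netty.channel.kqueue.KQueueEventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;

public class NativeTransportProbe {
  /** Prefer a platform-native transport, falling back to portable Java NIO. */
  public static EventLoopGroup newEventLoopGroup(int numThreads) {
    if (Epoll.isAvailable()) {
      // Linux JNI transport (epoll)
      return new EpollEventLoopGroup(numThreads);
    } else if (KQueue.isAvailable()) {
      // MacOS/BSD JNI transport (kqueue)
      return new KQueueEventLoopGroup(numThreads);
    } else {
      // Portable Java NIO transport, available everywhere
      return new NioEventLoopGroup(numThreads);
    }
  }
}
```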
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Pass the CIs.
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes #52736 from dongjoon-hyun/SPARK-54032.
Authored-by: Dongjoon Hyun <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
---
.../java/org/apache/spark/network/util/TransportConf.java | 2 +-
.../java/org/apache/spark/network/TransportConfSuite.java | 2 +-
docs/configuration.md | 12 ++++++++++++
docs/core-migration-guide.md | 1 +
4 files changed, 15 insertions(+), 2 deletions(-)
diff --git a/common/network-common/src/main/java/org/apache/spark/network/util/TransportConf.java b/common/network-common/src/main/java/org/apache/spark/network/util/TransportConf.java
index 849e58e5db4d..915889dc0aac 100644
--- a/common/network-common/src/main/java/org/apache/spark/network/util/TransportConf.java
+++ b/common/network-common/src/main/java/org/apache/spark/network/util/TransportConf.java
@@ -89,7 +89,7 @@ public class TransportConf {
   /** IO mode: NIO, EPOLL, KQUEUE, or AUTO */
   public String ioMode() {
-    String defaultIOMode = conf.get(SPARK_NETWORK_DEFAULT_IO_MODE_KEY, "NIO");
+    String defaultIOMode = conf.get(SPARK_NETWORK_DEFAULT_IO_MODE_KEY, "AUTO");
     return conf.get(SPARK_NETWORK_IO_MODE_KEY, defaultIOMode).toUpperCase(Locale.ROOT);
   }
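
As a side note on how the two keys interact, here is a self-contained sketch of the two-level lookup in the hunk above. Only `spark.io.mode.default` is confirmed by this commit (see the test change below); the module-scoped key pattern `spark.<module>.io.mode` is an assumption for illustration.

```java
import java.util.Locale;
import java.util.Map;

// Hypothetical stand-alone sketch; not Spark's actual TransportConf code.
public class IoModeLookup {
  public static String ioMode(Map<String, String> conf, String module) {
    // Global fallback, now AUTO after this commit (previously NIO).
    String defaultIOMode = conf.getOrDefault("spark.io.mode.default", "AUTO");
    // Assumed module-scoped key takes precedence over the global default.
    return conf.getOrDefault("spark." + module + ".io.mode", defaultIOMode)
        .toUpperCase(Locale.ROOT);
  }

  public static void main(String[] args) {
    System.out.println(ioMode(Map.of(), "shuffle")); // AUTO
    System.out.println(ioMode(Map.of("spark.io.mode.default", "NIO"), "shuffle")); // NIO
  }
}
```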
diff --git a/common/network-common/src/test/java/org/apache/spark/network/TransportConfSuite.java b/common/network-common/src/test/java/org/apache/spark/network/TransportConfSuite.java
index f77a93e6247b..f69ba2ac2bbd 100644
--- a/common/network-common/src/test/java/org/apache/spark/network/TransportConfSuite.java
+++ b/common/network-common/src/test/java/org/apache/spark/network/TransportConfSuite.java
@@ -91,7 +91,7 @@ public class TransportConfSuite {
   @Test
   public void testDefaultIOMode() {
     TransportConf c1 = new TransportConf("m1", new MapConfigProvider(Map.of()));
-    assertEquals("NIO", c1.ioMode());
+    assertEquals("AUTO", c1.ioMode());
     TransportConf c2 = new TransportConf("m1",
       new MapConfigProvider(Map.of("spark.io.mode.default", "KQUEUE")));
diff --git a/docs/configuration.md b/docs/configuration.md
index e9dbfa2b4f03..dc9ca63d24d9 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -2652,6 +2652,18 @@ Apart from these, the following properties are also available, and may be useful
</td>
<td>4.1.0</td>
</tr>
+<tr>
+  <td><code>spark.io.mode.default</code></td>
+  <td>AUTO</td>
+  <td>
+    The default IO mode for Netty transports.
+    One of <code>NIO</code>, <code>EPOLL</code>, <code>KQUEUE</code>, or <code>AUTO</code>.
+    The default value is <code>AUTO</code>, which means native Netty transport libraries are used when available.
+    In other words, on Linux, <code>EPOLL</code> is used if available, falling back to <code>NIO</code> otherwise.
+    On MacOS/BSD, <code>KQUEUE</code> is used if available, falling back to <code>NIO</code> otherwise.
+  </td>
+  <td>4.1.0</td>
+</tr>
<tr>
<td><code>spark.rpc.io.backLog</code></td>
<td>64</td>
diff --git a/docs/core-migration-guide.md b/docs/core-migration-guide.md
index 19b77624d626..1d55c4c3e66d 100644
--- a/docs/core-migration-guide.md
+++ b/docs/core-migration-guide.md
@@ -30,6 +30,7 @@ license: |
 - Since Spark 4.1, `java.lang.InternalError` encountered during file reading will no longer fail the task if the configuration `spark.sql.files.ignoreCorruptFiles` or the data source option `ignoreCorruptFiles` is set to `true`.
 - Since Spark 4.1, Spark ignores `*.blacklist.*` alternative configuration names. To restore the behavior before Spark 4.1, you can use the corresponding configuration names instead, which exist since Spark 3.1.0.
 - Since Spark 4.1, Spark will use multiple threads for LZF compression to compress data in parallel. To restore the behavior before Spark 4.1, you can set `spark.io.compression.lzf.parallel.enabled` to `false`.
+- Since Spark 4.1, Spark uses native Netty IO mode by default. To restore the behavior before Spark 4.1, you can set `spark.io.mode.default` to `NIO`.
## Upgrading from Core 3.5 to 4.0
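
For users who want the pre-4.1 behavior described in the migration note above, here is one way to set the configuration programmatically (a sketch; passing `--conf spark.io.mode.default=NIO` to `spark-submit` works as well):

```java
import org.apache.spark.SparkConf;

public class RestoreNioMode {
  public static void main(String[] args) {
    // Opt back into the pre-4.1 default Netty transport (plain Java NIO).
    SparkConf conf = new SparkConf()
        .setAppName("restore-nio-io-mode")
        .set("spark.io.mode.default", "NIO");
    System.out.println(conf.get("spark.io.mode.default")); // NIO
  }
}
```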