This is an automated email from the ASF dual-hosted git repository.

kassiez pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git


The following commit(s) were added to refs/heads/master by this push:
     new fa084f29ab3 [doc](ecosystem) update flink version and remove unused parameters (#2336)
fa084f29ab3 is described below

commit fa084f29ab34e1fbb4e38872fab242596cf1da62
Author: wudi <w...@selectdb.com>
AuthorDate: Tue Apr 29 10:41:51 2025 +0800

    [doc](ecosystem) update flink version and remove unused parameters (#2336)
    
    ## Versions
    
    - [x] dev
    - [x] 3.0
    - [x] 2.1
    - [x] 2.0
    
    ## Languages
    
    - [x] Chinese
    - [x] English
    
    ## Docs Checklist
    
    - [ ] Checked by AI
    - [ ] Test Cases Built
---
 docs/ecosystem/doris-kafka-connector.md                            | 1 -
 docs/ecosystem/flink-doris-connector.md                            | 3 ++-
 docs/ecosystem/spark-doris-connector.md                            | 5 +----
 .../current/ecosystem/doris-kafka-connector.md                     | 1 -
 .../current/ecosystem/flink-doris-connector.md                     | 3 ++-
 .../current/ecosystem/spark-doris-connector.md                     | 7 ++-----
 .../version-2.1/ecosystem/doris-kafka-connector.md                 | 1 -
 .../version-2.1/ecosystem/flink-doris-connector.md                 | 3 ++-
 .../version-2.1/ecosystem/spark-doris-connector.md                 | 7 ++-----
 .../version-3.0/ecosystem/doris-kafka-connector.md                 | 1 -
 .../version-3.0/ecosystem/flink-doris-connector.md                 | 3 ++-
 .../version-3.0/ecosystem/spark-doris-connector.md                 | 7 ++-----
 versioned_docs/version-2.1/ecosystem/doris-kafka-connector.md      | 1 -
 versioned_docs/version-2.1/ecosystem/flink-doris-connector.md      | 3 ++-
 versioned_docs/version-2.1/ecosystem/spark-doris-connector.md      | 5 +----
 versioned_docs/version-3.0/ecosystem/doris-kafka-connector.md      | 1 -
 versioned_docs/version-3.0/ecosystem/flink-doris-connector.md      | 3 ++-
 versioned_docs/version-3.0/ecosystem/spark-doris-connector.md      | 5 +----
 18 files changed, 21 insertions(+), 39 deletions(-)

diff --git a/docs/ecosystem/doris-kafka-connector.md b/docs/ecosystem/doris-kafka-connector.md
index 7baee63a94f..163bde801c9 100644
--- a/docs/ecosystem/doris-kafka-connector.md
+++ b/docs/ecosystem/doris-kafka-connector.md
@@ -215,7 +215,6 @@ errors.deadletterqueue.topic.replication.factor=1
 | enable.delete               | -                                    | false         | N | Whether to delete records synchronously; default false [...]
 | label.prefix                | -                                    | ${name}       | N | Stream Load label prefix when importing data. Defaults to the Connector application name. [...]
 | auto.redirect               | -                                    | true          | N | Whether to redirect Stream Load requests. When enabled, Stream Load requests are redirected through the FE to the BE where the data needs to be written, and BE information is no longer exposed. [...]
-| load.model                  | `stream_load`,<br/> `copy_into`      | stream_load   | N | How to import data. Supports `stream_load` to import data directly into Doris, and `copy_into` to import data into object storage first and then load it into Doris. [...]
 | sink.properties.*           | -                                    | `'sink.properties.format':'json'`, <br/>`'sink.properties.read_json_by_line':'true'` | N | Import parameters for Stream Load. <br />For example: define the column separator `'sink.properties.column_separator':','`. <br />Detailed parameter reference [here](https://doris.apache.org/docs/data-operate/import/stream-load-manual). <br/><br/> **Enable Group Commit**, for example, enable group commit in sync_mode mode: [...]
 | delivery.guarantee          | `at_least_once`,<br/> `exactly_once` | at_least_once | N | How to guarantee data consistency when Kafka data is consumed and imported into Doris. Supports `at_least_once` and `exactly_once`; the default is `at_least_once`. Doris must be upgraded to 2.1.0 or above to guarantee `exactly_once`. [...]
 | converter.mode              | `normal`,<br/> `debezium_ingestion`  | normal        | N | Type conversion mode for upstream data when the Connector consumes Kafka data. <br/> `normal` means consuming Kafka data as-is, without any type conversion. <br/> `debezium_ingestion` means that when the upstream Kafka data is collected through CDC (Change Data Capture) tools such as Debezium, the upstr [...]
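The sink options documented in this table are plain Kafka Connect properties. For orientation, a minimal, hedged sketch of a sink definition that combines them (the connector name, topic, and `connector.class` value are illustrative placeholders; Doris connection settings and credentials are omitted):

```
# Hypothetical Kafka Connect sink definition; names are placeholders.
name=doris-sink-example
connector.class=org.apache.doris.kafka.connector.DorisSinkConnector
topics=example_topic
# Consistency and conversion options from the table above:
delivery.guarantee=at_least_once
converter.mode=normal
enable.delete=false
# Stream Load pass-through parameters, including group commit in sync_mode:
sink.properties.format=json
sink.properties.read_json_by_line=true
sink.properties.group_commit=sync_mode
```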
diff --git a/docs/ecosystem/flink-doris-connector.md b/docs/ecosystem/flink-doris-connector.md
index 2dd63884a65..4617ddd231e 100644
--- a/docs/ecosystem/flink-doris-connector.md
+++ b/docs/ecosystem/flink-doris-connector.md
@@ -51,6 +51,7 @@ Using the Flink Connector, you can perform the following operations:
 | 24.0.1            | 1.15,1.16,1.17,1.18,1.19,1.20 | 1.0+          | 8             | -             |
 | 24.1.0            | 1.15,1.16,1.17,1.18,1.19,1.20 | 1.0+          | 8             | -             |
 | 25.0.0            | 1.15,1.16,1.17,1.18,1.19,1.20 | 1.0+          | 8             | -             |
+| 25.1.0            | 1.15,1.16,1.17,1.18,1.19,1.20 | 1.0+          | 8             | -             |
 
 ## Usage
 
@@ -78,7 +79,7 @@ For example:
 <dependency>
   <groupId>org.apache.doris</groupId>
   <artifactId>flink-doris-connector-1.16</artifactId>
-  <version>25.0.0</version>
+  <version>25.1.0</version>
 </dependency> 
 ```
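The artifactId tracks the targeted Flink minor version, so each release is published once per supported Flink line. As a sketch, the same 25.1.0 release for Flink 1.18 (assuming that variant is published, as the compatibility table above indicates):

```
<dependency>
  <groupId>org.apache.doris</groupId>
  <artifactId>flink-doris-connector-1.18</artifactId>
  <version>25.1.0</version>
</dependency>
```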
 
diff --git a/docs/ecosystem/spark-doris-connector.md b/docs/ecosystem/spark-doris-connector.md
index 8f7d1081176..566e9661a40 100644
--- a/docs/ecosystem/spark-doris-connector.md
+++ b/docs/ecosystem/spark-doris-connector.md
@@ -419,9 +419,7 @@ insert into your_catalog_name.your_doris_db.your_doris_table select * from your_
 | doris.request.tablet.size        | 1             | The number of Doris Tablets corresponding to an RDD Partition. The smaller this value is set, the more partitions are generated, which increases parallelism on the Spark side but also puts more pressure on Doris. [...]
 | doris.read.field                 | --            | List of column names in the Doris table, separated by commas [...]
 | doris.batch.size                 | 4064          | The maximum number of rows to read from BE at one time. Increasing this value reduces the number of connections established between Spark and Doris, thereby reducing the extra time overhead caused by network latency. [...]
-| doris.exec.mem.limit             | 2147483648    | Memory limit for a single query. The default is 2GB, in bytes. [...]
-| doris.deserialize.arrow.async    | false         | Whether to support asynchronous conversion of the Arrow format to the RowBatch required for spark-doris-connector iteration [...]
-| doris.deserialize.queue.size     | 64            | Internal processing queue for asynchronous conversion of the Arrow format; takes effect when doris.deserialize.arrow.async is true [...]
+| doris.exec.mem.limit             | 8589934592    | Memory limit for a single query. The default is 8GB, in bytes. [...]
 | doris.write.fields               | --            | Specifies the fields (or the order of the fields) to write to the Doris table, fields separated by commas.<br/>By default, all fields are written in the order of the Doris table fields. [...]
 | doris.sink.batch.size            | 100000        | Maximum number of rows in a single write to BE [...]
 | doris.sink.max-retries           | 0             | Number of retries after a failed write to BE. Since version 1.3.0, the default value is 0, meaning no retries are performed by default. When this parameter is set greater than 0, batch-level failure retries are performed, and data of the size configured by `doris.sink.batch.size` is cached in Spark Executor memory; the memory allocation may need to be increased accordingly. [...]
@@ -436,7 +434,6 @@ insert into your_catalog_name.your_doris_db.your_doris_table select * from your_
 | doris.https.key-store-path       | -             | HTTPS key store path. [...]
 | doris.https.key-store-type       | JKS           | HTTPS key store type. [...]
 | doris.https.key-store-password   | -             | HTTPS key store password. [...]
-| doris.sink.mode                  | stream_load   | Doris sink mode; options are `stream_load` and `copy_into`. [...]
 | doris.read.mode                  | thrift        | Doris read mode; options are `thrift` and `arrow`. [...]
 | doris.read.arrow-flight-sql.port | -             | Arrow Flight SQL port of the Doris FE. When `doris.read.mode` is `arrow`, it is used to read data via Arrow Flight SQL. For server configuration, see [High-speed data transmission link based on Arrow Flight SQL](https://doris.apache.org/zh-CN/docs/dev/db-connect/arrow-flight-sql-connect) [...]
 | doris.sink.label.prefix          | spark-doris   | The import label prefix when writing in Stream Load mode. [...]
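These options are supplied as key-value pairs when mapping a Doris table in Spark. A minimal, hedged sketch in Spark SQL (the view name, table identifier, FE address, and credentials are placeholders; `doris.exec.mem.limit` is set explicitly to the new 8GB default for illustration):

```
CREATE TEMPORARY VIEW spark_doris
USING doris
OPTIONS(
  -- connection placeholders
  "table.identifier" = "your_doris_db.your_doris_table",
  "fenodes" = "your_fe_host:8030",
  "user" = "root",
  "password" = "",
  -- read-side options from the table above
  "doris.batch.size" = "4064",
  "doris.exec.mem.limit" = "8589934592"
);
```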
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/doris-kafka-connector.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/doris-kafka-connector.md
index 25e4e638e3b..03047d1eb20 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/doris-kafka-connector.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/doris-kafka-connector.md
@@ -216,7 +216,6 @@ errors.deadletterqueue.topic.replication.factor=1
 | enable.delete               | -                                    | false         | N | Whether to delete records synchronously; default false [...]
 | label.prefix                | -                                    | ${name}       | N | Stream Load label prefix when importing data. Defaults to the Connector application name. [...]
 | auto.redirect               | -                                    | true          | N | Whether to redirect Stream Load requests. When enabled, Stream Load requests are redirected through the FE to the BE where the data needs to be written, and BE information is no longer shown. [...]
-| load.model                  | `stream_load`,<br/> `copy_into`      | stream_load   | N | How to import data. Supports `stream_load` to import data directly into Doris, and `copy_into` to import data into object storage first and then load it into Doris. [...]
 | sink.properties.*           | -                                    | `'sink.properties.format':'json'`, <br/>`'sink.properties.read_json_by_line':'true'` | N | Import parameters for Stream Load. <br />For example: define the column separator `'sink.properties.column_separator':','`. <br />Detailed parameter reference [here](../data-operate/import/import-way/stream-load-manual). <br/><br/> **Enable Group Commit**, for example, enable group commit in sync_mode mode: `"sink.properties.group_commit":"sync_mode"`. Group Commit can be configured in three modes: `off_mode`, `sync_mode`, and `async_mode`; for specific usage [...]
 | delivery.guarantee          | `at_least_once`,<br/> `exactly_once` | at_least_once | N | How to guarantee data consistency when Kafka data is consumed and imported into Doris. Supports `at_least_once` and `exactly_once`; the default is `at_least_once`. Doris must be upgraded to 2.1.0 or above to guarantee `exactly_once`. [...]
 | converter.mode              | `normal`,<br/> `debezium_ingestion`  | normal        | N | Type conversion mode for upstream data when the Connector consumes Kafka data. <br/> `normal` means consuming Kafka data as-is, without any type conversion. <br/> `debezium_ingestion` means that when the upstream Kafka data is collected through CDC (Change Data Capture) tools such as Debezium, the upstream data requires special type conversion before it can be supported. [...]
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/flink-doris-connector.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/flink-doris-connector.md
index 0313408aeee..ef247a42fda 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/flink-doris-connector.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/flink-doris-connector.md
@@ -54,6 +54,7 @@ under the License.
 | 1.6.1             | 1.15,1.16,1.17,1.18,1.19      | 1.0+          | 8             | -             |
 | 24.0.1            | 1.15,1.16,1.17,1.18,1.19,1.20 | 1.0+          | 8             | -             |
 | 25.0.0            | 1.15,1.16,1.17,1.18,1.19,1.20 | 1.0+          | 8             | -             |
+| 25.1.0            | 1.15,1.16,1.17,1.18,1.19,1.20 | 1.0+          | 8             | -             |
 
 ## Usage
 
@@ -81,7 +82,7 @@ When using Maven, you can add the following dependency directly to the POM file
 <dependency>
   <groupId>org.apache.doris</groupId>
   <artifactId>flink-doris-connector-1.16</artifactId>
-  <version>25.0.0</version>
+  <version>25.1.0</version>
 </dependency> 
 ```
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/spark-doris-connector.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/spark-doris-connector.md
index e2f0e3d0b6e..4f38bb4aa6c 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/spark-doris-connector.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/spark-doris-connector.md
@@ -420,12 +420,10 @@ insert into your_catalog_name.your_doris_db.your_doris_table select * from your_
 | doris.request.tablet.size        | 1              | The number of Doris Tablets corresponding to an RDD Partition.<br />The smaller this value is set, the more partitions are generated, which increases parallelism on the Spark side but also puts more pressure on Doris. |
 | doris.read.field                 | --             | List of column names to read from the Doris table, separated by commas |
 | doris.batch.size                 | 4064           | The maximum number of rows to read from BE at one time. Increasing this value reduces the number of connections established between Spark and Doris,<br />thereby reducing the extra time overhead caused by network latency. |
-| doris.exec.mem.limit             | 2147483648     | Memory limit for a single query. The default is 2GB, in bytes |
-| doris.deserialize.arrow.async    | false          | Whether to support asynchronous conversion of the Arrow format to the RowBatch required for spark-doris-connector iteration |
-| doris.deserialize.queue.size     | 64             | Internal processing queue for asynchronous conversion of the Arrow format; takes effect when doris.deserialize.arrow.async is true |
+| doris.exec.mem.limit             | 8589934592     | Memory limit for a single query. The default is 8GB, in bytes |
 | doris.write.fields               | --             | Specifies the fields (or the order of the fields) to write to the Doris table, separated by commas.<br />By default, all fields are written in the order of the Doris table fields. |
 | doris.sink.batch.size            | 100000         | Maximum number of rows in a single write to BE |
-| doris.sink.max-retries           | 0              | Number of retries after a failed write to BE. Since version 1.3.0, the default value is 0, meaning no retries are performed by default. When this parameter is set greater than 0, batch-level failure retries are performed, and data of the size configured by `doris.sink.batch.size` is cached in Spark Executor memory; the memory allocation may need to be increased accordingly. |
+| doris.sink.max-retries           | 0              | Number of retries after a failed write to BE. Since version 1.3.0, the default value is 0, meaning no retries are performed by default. When this parameter is set greater than 0, batch-level failure retries are performed, and data of the size configured by `doris.sink.batch.size` is cached in Spark Executor memory; the memory allocation may need to be increased accordingly. |
 | doris.sink.properties.format     | csv            | Data format for Stream Load.<br/>Three formats are supported: csv, json, arrow <br/> [More parameter details](https://doris.apache.org/zh-CN/docs/data-operate/import/stream-load-manual/) |
 | doris.sink.properties.*          | --             | Import parameters for Stream Load.<br/>For example:<br/>specify the column separator: `'doris.sink.properties.column_separator' = ','`, etc.<br/> [More parameter details](https://doris.apache.org/zh-CN/docs/data-operate/import/stream-load-manual/) |
 | doris.sink.task.partition.size   | --             | The number of partitions corresponding to a Doris write task. After filtering and other operations on a Spark RDD, the number of partitions actually written may be large while each partition holds relatively few records, which increases write frequency and wastes compute resources.<br/>The smaller this value, the lower the Doris write frequency and the lighter the Doris compaction pressure. Use this parameter together with doris.sink.task.use.repartition. |
@@ -437,7 +435,6 @@ insert into your_catalog_name.your_doris_db.your_doris_table select * from your_
 | doris.https.key-store-path       | -              | HTTPS key store path. |
 | doris.https.key-store-type       | JKS            | HTTPS key store type. |
 | doris.https.key-store-password   | -              | HTTPS key store password. |
-| doris.sink.mode                  | stream_load    | Doris sink mode; options are `stream_load` and `copy_into`. |
 | doris.read.mode                  | thrift         | Doris read mode; options are `thrift` and `arrow`. |
 | doris.read.arrow-flight-sql.port | -              | Arrow Flight SQL port of the Doris FE. When `doris.read.mode` is `arrow`, it is used to read data via Arrow Flight SQL. For server configuration, see [High-speed data transmission link based on Arrow Flight SQL](https://doris.apache.org/zh-CN/docs/dev/db-connect/arrow-flight-sql-connect) |
 | doris.sink.label.prefix          | spark-doris    | The import label prefix when writing in Stream Load mode. |
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/ecosystem/doris-kafka-connector.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/ecosystem/doris-kafka-connector.md
index 48f1e27a419..d546f9d2d6d 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/ecosystem/doris-kafka-connector.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/ecosystem/doris-kafka-connector.md
@@ -216,7 +216,6 @@ errors.deadletterqueue.topic.replication.factor=1
 | enable.delete               | -                                    | false         | N | Whether to delete records synchronously; default false [...]
 | label.prefix                | -                                    | ${name}       | N | Stream Load label prefix when importing data. Defaults to the Connector application name. [...]
 | auto.redirect               | -                                    | true          | N | Whether to redirect Stream Load requests. When enabled, Stream Load requests are redirected through the FE to the BE where the data needs to be written, and BE information is no longer shown. [...]
-| load.model                  | `stream_load`,<br/> `copy_into`      | stream_load   | N | How to import data. Supports `stream_load` to import data directly into Doris, and `copy_into` to import data into object storage first and then load it into Doris. [...]
 | sink.properties.*           | -                                    | `'sink.properties.format':'json'`, <br/>`'sink.properties.read_json_by_line':'true'` | N | Import parameters for Stream Load. <br />For example: define the column separator `'sink.properties.column_separator':','`. <br />Detailed parameter reference [here](../data-operate/import/import-way/stream-load-manual). <br/><br/> **Enable Group Commit**, for example, enable group commit in sync_mode mode: `"sink.properties.group_commit":"sync_mode"`. Group Commit can be configured in three modes: `off_mode`, `sync_mode`, and `async_mode`; for specific usage [...]
 | delivery.guarantee          | `at_least_once`,<br/> `exactly_once` | at_least_once | N | How to guarantee data consistency when Kafka data is consumed and imported into Doris. Supports `at_least_once` and `exactly_once`; the default is `at_least_once`. Doris must be upgraded to 2.1.0 or above to guarantee `exactly_once`. [...]
 | converter.mode              | `normal`,<br/> `debezium_ingestion`  | normal        | N | Type conversion mode for upstream data when the Connector consumes Kafka data. <br/> `normal` means consuming Kafka data as-is, without any type conversion. <br/> `debezium_ingestion` means that when the upstream Kafka data is collected through CDC (Change Data Capture) tools such as Debezium, the upstream data requires special type conversion before it can be supported. [...]
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/ecosystem/flink-doris-connector.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/ecosystem/flink-doris-connector.md
index 744cdd77d19..508276e4323 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/ecosystem/flink-doris-connector.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/ecosystem/flink-doris-connector.md
@@ -54,6 +54,7 @@ under the License.
 | 1.6.1             | 1.15,1.16,1.17,1.18,1.19      | 1.0+          | 8             | -             |
 | 24.0.1            | 1.15,1.16,1.17,1.18,1.19,1.20 | 1.0+          | 8             | -             |
 | 25.0.0            | 1.15,1.16,1.17,1.18,1.19,1.20 | 1.0+          | 8             | -             |
+| 25.1.0            | 1.15,1.16,1.17,1.18,1.19,1.20 | 1.0+          | 8             | -             |
 
 ## Usage
 
@@ -81,7 +82,7 @@ When using Maven, you can add the following dependency directly to the POM file
 <dependency>
   <groupId>org.apache.doris</groupId>
   <artifactId>flink-doris-connector-1.16</artifactId>
-  <version>25.0.0</version>
+  <version>25.1.0</version>
 </dependency> 
 ```
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/ecosystem/spark-doris-connector.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/ecosystem/spark-doris-connector.md
index 5aff14b621e..449c45d6287 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/ecosystem/spark-doris-connector.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/ecosystem/spark-doris-connector.md
@@ -420,12 +420,10 @@ insert into your_catalog_name.your_doris_db.your_doris_table select * from your_
 | doris.request.tablet.size        | 1              | The number of Doris Tablets corresponding to an RDD Partition.<br />The smaller this value is set, the more partitions are generated, which increases parallelism on the Spark side but also puts more pressure on Doris. |
 | doris.read.field                 | --             | List of column names to read from the Doris table, separated by commas |
 | doris.batch.size                 | 4064           | The maximum number of rows to read from BE at one time. Increasing this value reduces the number of connections established between Spark and Doris,<br />thereby reducing the extra time overhead caused by network latency. |
-| doris.exec.mem.limit             | 2147483648     | Memory limit for a single query. The default is 2GB, in bytes |
-| doris.deserialize.arrow.async    | false          | Whether to support asynchronous conversion of the Arrow format to the RowBatch required for spark-doris-connector iteration |
-| doris.deserialize.queue.size     | 64             | Internal processing queue for asynchronous conversion of the Arrow format; takes effect when doris.deserialize.arrow.async is true |
+| doris.exec.mem.limit             | 8589934592     | Memory limit for a single query. The default is 8GB, in bytes |
 | doris.write.fields               | --             | Specifies the fields (or the order of the fields) to write to the Doris table, separated by commas.<br />By default, all fields are written in the order of the Doris table fields. |
 | doris.sink.batch.size            | 100000         | Maximum number of rows in a single write to BE |
-| doris.sink.max-retries           | 0              | Number of retries after a failed write to BE. Since version 1.3.0, the default value is 0, meaning no retries are performed by default. When this parameter is set greater than 0, batch-level failure retries are performed, and data of the size configured by `doris.sink.batch.size` is cached in Spark Executor memory; the memory allocation may need to be increased accordingly. |
+| doris.sink.max-retries           | 0              | Number of retries after a failed write to BE. Since version 1.3.0, the default value is 0, meaning no retries are performed by default. When this parameter is set greater than 0, batch-level failure retries are performed, and data of the size configured by `doris.sink.batch.size` is cached in Spark Executor memory; the memory allocation may need to be increased accordingly. |
 | doris.sink.properties.format     | csv            | Data format for Stream Load.<br/>Three formats are supported: csv, json, arrow <br/> [More parameter details](https://doris.apache.org/zh-CN/docs/data-operate/import/stream-load-manual/) |
 | doris.sink.properties.*          | --             | Import parameters for Stream Load.<br/>For example:<br/>specify the column separator: `'doris.sink.properties.column_separator' = ','`, etc.<br/> [More parameter details](https://doris.apache.org/zh-CN/docs/data-operate/import/stream-load-manual/) |
 | doris.sink.task.partition.size   | --             | The number of partitions corresponding to a Doris write task. After filtering and other operations on a Spark RDD, the number of partitions actually written may be large while each partition holds relatively few records, which increases write frequency and wastes compute resources.<br/>The smaller this value, the lower the Doris write frequency and the lighter the Doris compaction pressure. Use this parameter together with doris.sink.task.use.repartition. |
@@ -437,7 +435,6 @@ insert into your_catalog_name.your_doris_db.your_doris_table select * from your_
 | doris.https.key-store-path       | -              | HTTPS key store path. |
 | doris.https.key-store-type       | JKS            | HTTPS key store type. |
 | doris.https.key-store-password   | -              | HTTPS key store password. |
-| doris.sink.mode                  | stream_load    | Doris sink mode; options are `stream_load` and `copy_into`. |
 | doris.read.mode                  | thrift         | Doris read mode; options are `thrift` and `arrow`. |
 | doris.read.arrow-flight-sql.port | -              | Arrow Flight SQL port of the Doris FE. When `doris.read.mode` is `arrow`, it is used to read data via Arrow Flight SQL. For server configuration, see [High-speed data transmission link based on Arrow Flight SQL](https://doris.apache.org/zh-CN/docs/dev/db-connect/arrow-flight-sql-connect) |
 | doris.sink.label.prefix          | spark-doris    | The import label prefix when writing in Stream Load mode. |
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/ecosystem/doris-kafka-connector.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/ecosystem/doris-kafka-connector.md
index 015d2ce7179..e4990dcd9db 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/ecosystem/doris-kafka-connector.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/ecosystem/doris-kafka-connector.md
@@ -216,7 +216,6 @@ errors.deadletterqueue.topic.replication.factor=1
 | enable.delete               | -                                    | false         | N | Whether to delete records synchronously; default false [...]
 | label.prefix                | -                                    | ${name}       | N | Stream Load label prefix when importing data. Defaults to the Connector application name. [...]
 | auto.redirect               | -                                    | true          | N | Whether to redirect Stream Load requests. When enabled, Stream Load requests are redirected through the FE to the BE where the data needs to be written, and BE information is no longer shown. [...]
-| load.model                  | `stream_load`,<br/> `copy_into`      | stream_load   | N | How to import data. Supports `stream_load` to import data directly into Doris, and `copy_into` to import data into object storage first and then load it into Doris. [...]
 | sink.properties.*           | -                                    | `'sink.properties.format':'json'`, <br/>`'sink.properties.read_json_by_line':'true'` | N | Import parameters for Stream Load. <br />For example: define the column separator `'sink.properties.column_separator':','`. <br />Detailed parameter reference [here](../data-operate/import/import-way/stream-load-manual). <br/><br/> **Enable Group Commit**, for example, enable group commit in sync_mode mode: `"sink.properties.group_commit":"sync_mode"`. Group Commit can be configured in three modes: `off_mode`, `sync_mode`, and `async_mode`; for specific usage [...]
 | delivery.guarantee          | `at_least_once`,<br/> `exactly_once` | at_least_once | N | How to guarantee data consistency when Kafka data is consumed and imported into Doris. Supports `at_least_once` and `exactly_once`; the default is `at_least_once`. Doris must be upgraded to 2.1.0 or above to guarantee `exactly_once`. [...]
 | converter.mode              | `normal`,<br/> `debezium_ingestion`  | normal        | N | Type conversion mode for upstream data when the Connector consumes Kafka data. <br/> `normal` means consuming Kafka data as-is, without any type conversion. <br/> `debezium_ingestion` means that when the upstream Kafka data is collected through CDC (Change Data Capture) tools such as Debezium, the upstream data requires special type conversion before it can be supported. [...]
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/ecosystem/flink-doris-connector.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/ecosystem/flink-doris-connector.md
index 0663eb00612..3c3dbc2cb0c 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/ecosystem/flink-doris-connector.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/ecosystem/flink-doris-connector.md
@@ -54,6 +54,7 @@ under the License.
 | 1.6.1             | 1.15,1.16,1.17,1.18,1.19      | 1.0+          | 8             | -             |
 | 24.0.1            | 1.15,1.16,1.17,1.18,1.19,1.20 | 1.0+          | 8             | -             |
 | 25.0.0            | 1.15,1.16,1.17,1.18,1.19,1.20 | 1.0+          | 8             | -             |
+| 25.1.0            | 1.15,1.16,1.17,1.18,1.19,1.20 | 1.0+          | 8             | -             |
 
 ## Usage
 
@@ -81,7 +82,7 @@ When using Maven, you can add the following dependency directly to the POM file
 <dependency>
   <groupId>org.apache.doris</groupId>
   <artifactId>flink-doris-connector-1.16</artifactId>
-  <version>25.0.0</version>
+  <version>25.1.0</version>
 </dependency> 
 ```
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/ecosystem/spark-doris-connector.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/ecosystem/spark-doris-connector.md
index 5aff14b621e..449c45d6287 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/ecosystem/spark-doris-connector.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/ecosystem/spark-doris-connector.md
@@ -420,12 +420,10 @@ insert into your_catalog_name.your_doris_db.your_doris_table select * from your_
 | doris.request.tablet.size        | 1              | The number of Doris Tablets corresponding to an RDD Partition.<br />The smaller this value is set, the more partitions are generated, which increases parallelism on the Spark side but also puts more pressure on Doris. |
 | doris.read.field                 | --             | List of column names to read from the Doris table, separated by commas |
 | doris.batch.size                 | 4064           | The maximum number of rows to read from BE at one time. Increasing this value reduces the number of connections established between Spark and Doris,<br />thereby reducing the extra time overhead caused by network latency. |
-| doris.exec.mem.limit             | 2147483648     | Memory limit for a single query. The default is 2GB, in bytes |
-| doris.deserialize.arrow.async    | false          | Whether to support asynchronous conversion of the Arrow format to the RowBatch required for spark-doris-connector iteration |
-| doris.deserialize.queue.size     | 64             | Internal processing queue for asynchronous conversion of the Arrow format; takes effect when doris.deserialize.arrow.async is true |
+| doris.exec.mem.limit             | 8589934592     | Memory limit for a single query. The default is 8GB, in bytes |
 | doris.write.fields               | --             | Specifies the fields (or the order of the fields) to write to the Doris table, separated by commas.<br />By default, all fields are written in the order of the Doris table fields. |
 | doris.sink.batch.size            | 100000         | Maximum number of rows in a single write to BE |
-| doris.sink.max-retries           | 0              | Number of retries after a failed write to BE. Since version 1.3.0, the default value is 0, meaning no retries are performed by default. When this parameter is set greater than 0, batch-level failure retries are performed, and data of the size configured by `doris.sink.batch.size` is cached in Spark Executor memory; the memory allocation may need to be increased accordingly. |
+| doris.sink.max-retries           | 0              | Number of retries after a failed write to BE. Since version 1.3.0, the default value is 0, meaning no retries are performed by default. When this parameter is set greater than 0, batch-level failure retries are performed, and data of the size configured by `doris.sink.batch.size` is cached in Spark Executor memory; the memory allocation may need to be increased accordingly. |
 | doris.sink.properties.format     | csv            | Data format for Stream Load.<br/>Three formats are supported: csv, json, arrow <br/> [More parameter details](https://doris.apache.org/zh-CN/docs/data-operate/import/stream-load-manual/) |
 | doris.sink.properties.*          | --             | Import parameters for Stream Load.<br/>For example:<br/>specify the column separator: `'doris.sink.properties.column_separator' = ','`, etc.<br/> [More parameter details](https://doris.apache.org/zh-CN/docs/data-operate/import/stream-load-manual/) |
 | doris.sink.task.partition.size   | --             | The number of partitions corresponding to a Doris write task. After filtering and other operations on a Spark RDD, the number of partitions actually written may be large while each partition holds relatively few records, which increases write frequency and wastes compute resources.<br/>The smaller this value, the lower the Doris write frequency and the lighter the Doris compaction pressure. Use this parameter together with doris.sink.task.use.repartition. |
@@ -437,7 +435,6 @@ insert into your_catalog_name.your_doris_db.your_doris_table select * from your_
 | doris.https.key-store-path       | -              | HTTPS key store path. |
 | doris.https.key-store-type       | JKS            | HTTPS key store type. |
 | doris.https.key-store-password   | -              | HTTPS key store password. |
-| doris.sink.mode                  | stream_load    | Doris sink mode; options are `stream_load` and `copy_into`. |
 | doris.read.mode                  | thrift         | Doris read mode; options are `thrift` and `arrow`. |
 | doris.read.arrow-flight-sql.port | -              | Arrow Flight SQL port of the Doris FE. When `doris.read.mode` is `arrow`, it is used to read data via Arrow Flight SQL. For server configuration, see [High-speed data transmission link based on Arrow Flight SQL](https://doris.apache.org/zh-CN/docs/dev/db-connect/arrow-flight-sql-connect) |
 | doris.sink.label.prefix          | spark-doris    | The import label prefix when writing in Stream Load mode. |
diff --git a/versioned_docs/version-2.1/ecosystem/doris-kafka-connector.md b/versioned_docs/version-2.1/ecosystem/doris-kafka-connector.md
index 414b598a3b4..494d2062ff5 100644
--- a/versioned_docs/version-2.1/ecosystem/doris-kafka-connector.md
+++ b/versioned_docs/version-2.1/ecosystem/doris-kafka-connector.md
@@ -215,7 +215,6 @@ errors.deadletterqueue.topic.replication.factor=1
 | enable.delete               | -                                    | false         | N | Whether to delete records synchronously; default false [...]
 | label.prefix                | -                                    | ${name}       | N | Stream Load label prefix when importing data. Defaults to the Connector application name. [...]
 | auto.redirect               | -                                    | true          | N | Whether to redirect Stream Load requests. When enabled, Stream Load requests are redirected through the FE to the BE where the data needs to be written, and BE information is no longer exposed. [...]
-| load.model                  | `stream_load`,<br/> `copy_into`      | stream_load   | N | How to import data. Supports `stream_load` to import data directly into Doris, and `copy_into` to import data into object storage first and then load it into Doris. [...]
 | sink.properties.*           | -                                    | `'sink.properties.format':'json'`, <br/>`'sink.properties.read_json_by_line':'true'` | N | Import parameters for Stream Load. <br />For example: define the column separator `'sink.properties.column_separator':','`. <br />Detailed parameter reference [here](https://doris.apache.org/docs/data-operate/import/stream-load-manual). <br/><br/> **Enable Group Commit**, for example, enable group commit in sync_mode mode: [...]
 | delivery.guarantee          | `at_least_once`,<br/> `exactly_once` | at_least_once | N | How to guarantee data consistency when Kafka data is consumed and imported into Doris. Supports `at_least_once` and `exactly_once`; the default is `at_least_once`. Doris must be upgraded to 2.1.0 or above to guarantee `exactly_once`. [...]
 | converter.mode              | `normal`,<br/> `debezium_ingestion`  | normal        | N | Type conversion mode for upstream data when the Connector consumes Kafka data. <br/> `normal` means consuming Kafka data as-is, without any type conversion. <br/> `debezium_ingestion` means that when the upstream Kafka data is collected through CDC (Change Data Capture) tools such as Debezium, the upstr [...]
diff --git a/versioned_docs/version-2.1/ecosystem/flink-doris-connector.md b/versioned_docs/version-2.1/ecosystem/flink-doris-connector.md
index c85a10defbe..9f7f7820724 100644
--- a/versioned_docs/version-2.1/ecosystem/flink-doris-connector.md
+++ b/versioned_docs/version-2.1/ecosystem/flink-doris-connector.md
@@ -51,6 +51,7 @@ Using the Flink Connector, you can perform the following operations:
 | 24.0.1            | 1.15,1.16,1.17,1.18,1.19,1.20 | 1.0+          | 8             | -             |
 | 24.1.0            | 1.15,1.16,1.17,1.18,1.19,1.20 | 1.0+          | 8             | -             |
 | 25.0.0            | 1.15,1.16,1.17,1.18,1.19,1.20 | 1.0+          | 8             | -             |
+| 25.1.0            | 1.15,1.16,1.17,1.18,1.19,1.20 | 1.0+          | 8             | -             |
 
 ## Usage
 
@@ -78,7 +79,7 @@ For example:
 <dependency>
   <groupId>org.apache.doris</groupId>
   <artifactId>flink-doris-connector-1.16</artifactId>
-  <version>25.0.0</version>
+  <version>25.1.0</version>
 </dependency> 
 ```
 
diff --git a/versioned_docs/version-2.1/ecosystem/spark-doris-connector.md 
b/versioned_docs/version-2.1/ecosystem/spark-doris-connector.md
index 1e21bffde8e..4e0660fac0d 100644
--- a/versioned_docs/version-2.1/ecosystem/spark-doris-connector.md
+++ b/versioned_docs/version-2.1/ecosystem/spark-doris-connector.md
@@ -419,9 +419,7 @@ insert into 
your_catalog_name.your_doris_db.your_doris_table select * from your_
 | doris.request.tablet.size        | 1             | The number of Doris 
Tablets corresponding to an RDD Partition. The smaller this value is set, the 
more partitions will be generated. This will increase the parallelism on the 
Spark side, but at the same time will cause greater pressure on Doris.          
                                                                                
                                                                                
                         [...]
 | doris.read.field                 | --            | List of column names in 
the Doris table, separated by commas                                            
                                                                                
                                                                                
                                                                                
                                                                                
                [...]
 | doris.batch.size                 | 4064          | The maximum number of rows to read from BE at one time. Increasing this value reduces the number of connections between Spark and Doris, thereby reducing the extra time overhead caused by network latency. [...]
-| doris.exec.mem.limit             | 2147483648    | Memory limit for a single query. The default is 2GB, in bytes. [...]
-| doris.deserialize.arrow.async    | false         | Whether to support asynchronous conversion of Arrow format to RowBatch required for spark-doris-connector iteration [...]
-| doris.deserialize.queue.size     | 64            | Asynchronous conversion of the internal processing queue in Arrow format takes effect when doris.deserialize.arrow.async is true [...]
+| doris.exec.mem.limit             | 8589934592    | Memory limit for a single query. The default is 8GB, in bytes. [...]
 | doris.write.fields               | --            | Specifies the fields (or the order of the fields) to write to the Doris table, fields separated by commas.<br/>By default, all fields are written in the order of the Doris table fields. [...]
 | doris.sink.batch.size            | 100000        | Maximum number of rows in a single write to BE [...]
 | doris.sink.max-retries           | 0             | Number of retries after a failed write to BE. Since version 1.3.0 the default value is 0, meaning no retries are performed by default. When this parameter is set greater than 0, batch-level failure retries are performed, and data of the configured size of `doris.sink.batch.size` is cached in Spark Executor memory; the memory allocation may need to be increased accordingly. [...]
@@ -436,7 +434,6 @@ insert into your_catalog_name.your_doris_db.your_doris_table select * from your_
 | doris.https.key-store-path       | -             | HTTPS key store path. [...]
 | doris.https.key-store-type       | JKS           | HTTPS key store type. [...]
 | doris.https.key-store-password   | -             | HTTPS key store password. [...]
-| doris.sink.mode                  | stream_load   | Doris sink mode, with optional `stream_load` and `copy_into`. [...]
 | doris.read.mode                  | thrift        | Doris read mode; available options are `thrift` and `arrow`. [...]
 | doris.read.arrow-flight-sql.port | -             | Arrow Flight SQL port of Doris FE. When `doris.read.mode` is `arrow`, it is used to read data via Arrow Flight SQL. For server configuration, see [High-speed data transmission link based on Arrow Flight SQL](https://doris.apache.org/zh-CN/docs/dev/db-connect/arrow-flight-sql-connect) [...]
 | doris.sink.label.prefix          | spark-doris   | The import label prefix when writing in Stream Load mode. [...]
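To make the parameters above concrete, a read through the Spark DataFrame API might be configured as in the minimal Scala sketch below. The FE address, table name, and credentials are placeholders, and passing these settings per job via `.option(...)` is assumed from the connector's general configuration conventions:

```scala
// Minimal sketch: read a Doris table from Spark, setting two of the
// parameters documented above. All connection values are placeholders;
// `spark` is an existing SparkSession (e.g. in spark-shell).
val df = spark.read
  .format("doris")
  .option("doris.fenodes", "127.0.0.1:8030")                  // Doris FE host:http_port
  .option("doris.table.identifier", "example_db.example_tbl") // target table
  .option("user", "root")
  .option("password", "")
  .option("doris.batch.size", "4064")             // rows fetched from BE per round trip
  .option("doris.exec.mem.limit", "8589934592")   // 8GB per-query memory limit (new default)
  .load()

df.show(10)
```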
diff --git a/versioned_docs/version-3.0/ecosystem/doris-kafka-connector.md b/versioned_docs/version-3.0/ecosystem/doris-kafka-connector.md
index 98b9c791f72..03b81468b26 100644
--- a/versioned_docs/version-3.0/ecosystem/doris-kafka-connector.md
+++ b/versioned_docs/version-3.0/ecosystem/doris-kafka-connector.md
@@ -215,7 +215,6 @@ errors.deadletterqueue.topic.replication.factor=1
 | enable.delete               | -                                    | false | N | Whether to delete records synchronously, default false [...]
 | label.prefix                | -                                    | ${name} | N | Stream load label prefix when importing data. Defaults to the Connector application name. [...]
 | auto.redirect               | -                                    | true | N | Whether to redirect Stream Load requests. When enabled, Stream Load requests are redirected through the FE to the BE where the data needs to be written, and BE information is no longer exposed. [...]
-| load.model                  | `stream_load`,<br/> `copy_into`      | stream_load | N | How to import data. Supports `stream_load` to directly import data into Doris; also supports `copy_into` to import data into object storage, and then load the data into Doris. [...]
 | sink.properties.*           | -                                    | `'sink.properties.format':'json'`, <br/>`'sink.properties.read_json_by_line':'true'` | N | Import parameters for Stream Load. <br />For example: define column separator `'sink.properties.column_separator':','` <br />Detailed parameter reference [here](https://doris.apache.org/docs/data-operate/import/stream-load-manual) <br/><br/> **Enable Group Commit**, for example, enable group commit in sync_mode mode: [...]
 | delivery.guarantee          | `at_least_once`,<br/> `exactly_once` | at_least_once | N | How to ensure data consistency when Kafka data is consumed and imported into Doris. Supports `at_least_once` and `exactly_once`; the default is `at_least_once`. Doris needs to be 2.1.0 or above to guarantee `exactly_once`. [...]
 | converter.mode              | `normal`,<br/> `debezium_ingestion`  | normal | N | Type conversion mode for upstream data when the Connector consumes Kafka data. <br/> `normal` means consuming Kafka data as-is, without any type conversion. <br/> `debezium_ingestion` means that when Kafka upstream data is collected through CDC (Change Data Capture) tools such as Debezium, the upstr [...]
diff --git a/versioned_docs/version-3.0/ecosystem/flink-doris-connector.md b/versioned_docs/version-3.0/ecosystem/flink-doris-connector.md
index c85a10defbe..9f7f7820724 100644
--- a/versioned_docs/version-3.0/ecosystem/flink-doris-connector.md
+++ b/versioned_docs/version-3.0/ecosystem/flink-doris-connector.md
@@ -51,6 +51,7 @@ Using the Flink Connector, you can perform the following operations:
 | 24.0.1            | 1.15,1.16,1.17,1.18,1.19,1.20 | 1.0+          | 8           | -             |
 | 24.1.0            | 1.15,1.16,1.17,1.18,1.19,1.20 | 1.0+          | 8           | -             |
 | 25.0.0            | 1.15,1.16,1.17,1.18,1.19,1.20 | 1.0+          | 8           | -             |
+| 25.1.0            | 1.15,1.16,1.17,1.18,1.19,1.20 | 1.0+          | 8           | -             |
 
 ## Usage
 
@@ -78,7 +79,7 @@ For example:
 <dependency>
   <groupId>org.apache.doris</groupId>
   <artifactId>flink-doris-connector-1.16</artifactId>
-  <version>25.0.0</version>
+  <version>25.1.0</version>
 </dependency> 
 ```
 
diff --git a/versioned_docs/version-3.0/ecosystem/spark-doris-connector.md b/versioned_docs/version-3.0/ecosystem/spark-doris-connector.md
index 1e21bffde8e..4e0660fac0d 100644
--- a/versioned_docs/version-3.0/ecosystem/spark-doris-connector.md
+++ b/versioned_docs/version-3.0/ecosystem/spark-doris-connector.md
@@ -419,9 +419,7 @@ insert into your_catalog_name.your_doris_db.your_doris_table select * from your_
 | doris.request.tablet.size        | 1             | The number of Doris Tablets corresponding to an RDD Partition. The smaller this value is set, the more partitions will be generated. This will increase the parallelism on the Spark side, but at the same time will cause greater pressure on Doris. [...]
 | doris.read.field                 | --            | List of column names in the Doris table, separated by commas [...]
 | doris.batch.size                 | 4064          | The maximum number of rows to read from BE at one time. Increasing this value reduces the number of connections between Spark and Doris, thereby reducing the extra time overhead caused by network latency. [...]
-| doris.exec.mem.limit             | 2147483648    | Memory limit for a single query. The default is 2GB, in bytes. [...]
-| doris.deserialize.arrow.async    | false         | Whether to support asynchronous conversion of Arrow format to RowBatch required for spark-doris-connector iteration [...]
-| doris.deserialize.queue.size     | 64            | Asynchronous conversion of the internal processing queue in Arrow format takes effect when doris.deserialize.arrow.async is true [...]
+| doris.exec.mem.limit             | 8589934592    | Memory limit for a single query. The default is 8GB, in bytes. [...]
 | doris.write.fields               | --            | Specifies the fields (or the order of the fields) to write to the Doris table, fields separated by commas.<br/>By default, all fields are written in the order of the Doris table fields. [...]
 | doris.sink.batch.size            | 100000        | Maximum number of rows in a single write to BE [...]
 | doris.sink.max-retries           | 0             | Number of retries after a failed write to BE. Since version 1.3.0 the default value is 0, meaning no retries are performed by default. When this parameter is set greater than 0, batch-level failure retries are performed, and data of the configured size of `doris.sink.batch.size` is cached in Spark Executor memory; the memory allocation may need to be increased accordingly. [...]
@@ -436,7 +434,6 @@ insert into your_catalog_name.your_doris_db.your_doris_table select * from your_
 | doris.https.key-store-path       | -             | HTTPS key store path. [...]
 | doris.https.key-store-type       | JKS           | HTTPS key store type. [...]
 | doris.https.key-store-password   | -             | HTTPS key store password. [...]
-| doris.sink.mode                  | stream_load   | Doris sink mode, with optional `stream_load` and `copy_into`. [...]
 | doris.read.mode                  | thrift        | Doris read mode; available options are `thrift` and `arrow`. [...]
 | doris.read.arrow-flight-sql.port | -             | Arrow Flight SQL port of Doris FE. When `doris.read.mode` is `arrow`, it is used to read data via Arrow Flight SQL. For server configuration, see [High-speed data transmission link based on Arrow Flight SQL](https://doris.apache.org/zh-CN/docs/dev/db-connect/arrow-flight-sql-connect) [...]
 | doris.sink.label.prefix          | spark-doris   | The import label prefix when writing in Stream Load mode. [...]
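Complementing the table above, a read over Arrow Flight SQL combines `doris.read.mode` and `doris.read.arrow-flight-sql.port` roughly as in the Scala sketch below. The port, FE address, table, and credentials are placeholders and must match the FE's Arrow Flight SQL server configuration:

```scala
// Minimal sketch: switch the connector's read path from thrift to Arrow
// Flight SQL. All addresses, ports, and names below are placeholders.
val df = spark.read
  .format("doris")
  .option("doris.fenodes", "127.0.0.1:8030")
  .option("doris.table.identifier", "example_db.example_tbl")
  .option("user", "root")
  .option("password", "")
  .option("doris.read.mode", "arrow")                 // read via Arrow instead of thrift
  .option("doris.read.arrow-flight-sql.port", "9090") // FE Arrow Flight SQL port (placeholder)
  .load()
```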


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org
For additional commands, e-mail: commits-h...@doris.apache.org
