This is an automated email from the ASF dual-hosted git repository.

liaoxin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git


The following commit(s) were added to refs/heads/master by this push:
     new 9f1e77f3a8d [doc](load) fix some error and refactor load doc (#1796)
9f1e77f3a8d is described below

commit 9f1e77f3a8d866ec4453978089a6795b2ca3f3a9
Author: hui lai <1353307...@qq.com>
AuthorDate: Wed Jan 15 19:22:22 2025 +0800

    [doc](load) fix some error and refactor load doc (#1796)
---
 .../import/import-way/routine-load-manual.md       | 294 ++++++---------------
 .../import/import-way/stream-load-manual.md        |  42 ---
 .../import/import-way/routine-load-manual.md       | 289 ++++++--------------
 .../import/import-way/stream-load-manual.md        |  49 +---
 .../current/ecosystem/doris-streamloader.md        |   6 -
 .../version-1.2/ecosystem/doris-streamloader.md    |   6 -
 .../data-operate/import/stream-load-manual.md      |  49 +---
 .../version-2.0/ecosystem/doris-streamloader.md    |   6 -
 .../import/import-way/routine-load-manual.md       | 289 ++++++--------------
 .../import/import-way/stream-load-manual.md        |   6 +-
 .../version-2.1/ecosystem/doris-streamloader.md    |   6 -
 .../import/import-way/routine-load-manual.md       | 289 ++++++--------------
 .../import/import-way/stream-load-manual.md        |   6 +-
 .../version-3.0/ecosystem/doris-streamloader.md    |   5 -
 .../data-operate/import/stream-load-manual.md      |  68 -----
 .../import/import-way/routine-load-manual.md       | 294 ++++++---------------
 .../import/import-way/routine-load-manual.md       | 294 ++++++---------------
 17 files changed, 536 insertions(+), 1462 deletions(-)

diff --git a/docs/data-operate/import/import-way/routine-load-manual.md 
b/docs/data-operate/import/import-way/routine-load-manual.md
index b05c1897f1d..b3ba21d2c76 100644
--- a/docs/data-operate/import/import-way/routine-load-manual.md
+++ b/docs/data-operate/import/import-way/routine-load-manual.md
@@ -311,65 +311,19 @@ The modules for creating a loading job are explained as 
follows:
 
 **01 FE Configuration Parameters**
 
-**max_routine_load_task_concurrent_num**
-
-- Default Value: 256
-
-- Dynamic Configuration: Yes
-
-- FE Master Exclusive: Yes
-
-- Parameter Description: Limits the maximum number of concurrent subtasks for 
Routine Load jobs. It is recommended to keep it at the default value. Setting 
it too high may result in excessive concurrent tasks and resource consumption.
-
-**max_routine_load_task_num_per_be**
-
-- Default Value: 1024
-
-- Dynamic Configuration: Yes
-
-- FE Master Exclusive: Yes
-
-- Parameter Description: Limits the maximum number of concurrent Routine Load 
tasks per backend (BE). `max_routine_load_task_num_per_be` should be smaller 
than the `routine_load_thread_pool_size` parameter.
-
-**max_routine_load_job_num**
-
-- Default Value: 100
-
-- Dynamic Configuration: Yes
-
-- FE Master Exclusive: Yes
-
-- Parameter Description: Limits the maximum number of Routine Load jobs, 
including those in NEED_SCHEDULED, RUNNING, and PAUSE states.
-
-**max_tolerable_backend_down_num**
-
-- Default Value: 0
-
-- Dynamic Configuration: Yes
-
-- FE Master Exclusive: Yes
-
-- Parameter Description: If any BE goes down, Routine Load cannot 
automatically recover. Under certain conditions, Doris can reschedule PAUSED 
tasks and transition them to the RUNNING state. Setting this parameter to 0 
means that re-scheduling is only allowed when all BE nodes are in the alive 
state.
-
-**period_of_auto_resume_min**
-
-- Default Value: 5 (minutes)
-
-- Dynamic Configuration: Yes
-
-- FE Master Exclusive: Yes
-
-- Parameter Description: The period for automatically resuming Routine Load.
+| Parameter Name                          | Default Value | Dynamic 
Configuration | FE Master Exclusive Configuration | Description                 
                                                                    |
+|-----------------------------------------|---------------|-----------------------|----------------------------------|-------------------------------------------------------------------------------------------------|
+| max_routine_load_task_concurrent_num   | 256           | Yes                 
  | Yes                              | Limits the maximum number of concurrent 
subtasks for Routine Load jobs. It is recommended to maintain the default 
value. If set too high, it may lead to excessive concurrent tasks, consuming 
cluster resources. |
+| max_routine_load_task_num_per_be       | 1024          | Yes                 
  | Yes                              | The maximum number of concurrent Routine 
Load tasks allowed per BE. `max_routine_load_task_num_per_be` should be less 
than `routine_load_thread_pool_size`. |
+| max_routine_load_job_num                | 100           | Yes                   | Yes                              | Limits the maximum number of Routine Load jobs, including jobs in the NEED_SCHEDULED, RUNNING, and PAUSED states. |
+| max_tolerable_backend_down_num          | 0             | Yes                
   | Yes                              | If any BE is down, Routine Load cannot 
automatically recover. Under certain conditions, Doris can reschedule PAUSED 
tasks to RUNNING state. A value of 0 means that rescheduling is only allowed 
when all BE nodes are alive. |
+| period_of_auto_resume_min               | 5 (minutes)   | Yes                
   | Yes                              | The period for automatically resuming 
Routine Load. |
 
 **02 BE Configuration Parameters**
 
-**max_consumer_num_per_group**
-
-- Default Value: 3
-
-- Dynamic Configuration: Yes
-
-- Description: Specifies the maximum number of consumers generated per 
subtask. For Kafka data sources, a consumer can consume one or multiple Kafka 
partitions. For example, if a task needs to consume 6 Kafka partitions, it will 
generate 3 consumers, with each consumer consuming 2 partitions. If there are 
only 2 partitions, it will generate 2 consumers, with each consumer consuming 1 
partition.
+| Parameter Name                     | Default Value | Dynamic Configuration | 
Description                                                                     
                                      |
+|------------------------------------|---------------|-----------------------|-----------------------------------------------------------------------------------------------------------------------|
+| max_consumer_num_per_group         | 3             | Yes                   | 
The maximum number of consumers that can be generated for a subtask to consume 
data. For Kafka data sources, a consumer may consume one or more Kafka 
partitions. If a task needs to consume 6 Kafka partitions, it will generate 3 
consumers, each consuming 2 partitions. If there are only 2 partitions, it will 
generate only 2 consumers, each consuming 1 partition. |
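
Both tables above mark these parameters as dynamic. As a minimal sketch (assuming a MySQL client connected to the FE Master), an FE parameter can be inspected and adjusted at runtime as follows; the value shown is simply the default, not a tuning recommendation:

```sql
-- List the Routine Load related FE settings (run against the FE Master).
ADMIN SHOW FRONTEND CONFIG LIKE '%routine_load%';

-- Adjust a dynamic FE parameter at runtime; the change is in-memory only
-- and is not persisted to fe.conf.
ADMIN SET FRONTEND CONFIG ("max_routine_load_task_concurrent_num" = "256");
```

BE-side parameters such as `max_consumer_num_per_group` are normally set in `be.conf` on each BE node.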
 
 ### Load Configuration Parameters
 
@@ -1637,171 +1591,97 @@ The columns in the result set provide the following 
information:
 
 ### Kafka Security Authentication
 
-**Loading Kafka data with SSL authentication**
+**Loading Kafka Data with SSL Authentication**
 
-1. Loading sample data:
+Example load command:
 
-    ```sql
-    { "id" : 1, "name" : "Benjamin", "age":18 }
-    { "id" : 2, "name" : "Emily", "age":20 }
-    { "id" : 3, "name" : "Alexander", "age":22 }
-    ```
-
-2. Create table:
-
-    ```sql
-    CREATE TABLE demo.routine_test20 (
-        id      INT            NOT NULL  COMMENT "id",
-        name    VARCHAR(30)    NOT NULL  COMMENT "name",
-        age     INT                      COMMENT "age"
-    )
-    DUPLICATE KEY(`id`)
-    DISTRIBUTED BY HASH(`id`) BUCKETS 1;
-    ```
-
-3. Load command:
-
-    ```sql
-    CREATE ROUTINE LOAD demo.kafka_job20 ON routine_test20
-            PROPERTIES
-            (
-                "format" = "json"
-            )
-            FROM KAFKA
-            (
-                "kafka_broker_list" = "192.168.100.129:9092",
-                "kafka_topic" = "routineLoad21",
-                "property.security.protocol" = "ssl",
-                "property.ssl.ca.location" = "FILE:ca.pem",
-                "property.ssl.certificate.location" = "FILE:client.pem",
-                "property.ssl.key.location" = "FILE:client.key",
-                "property.ssl.key.password" = "ssl_passwd"
-            );  
-    ```
-
-4. Load result:
-
-    ```sql
-    mysql> select * from routine_test20;
-    +------+----------------+------+
-    | id   | name           | age  |
-    +------+----------------+------+
-    |    1 | Benjamin       |   18 |
-    |    2 | Emily          |   20 |
-    |    3 | Alexander      |   22 |
-    +------+----------------+------+
-    3 rows in set (0.01 sec)
-    ```
-
-**Loading Kafka data with Kerberos authentication**
-
-1. Loading sample data:
-
-    ```sql
-    { "id" : 1, "name" : "Benjamin", "age":18 }
-    { "id" : 2, "name" : "Emily", "age":20 }
-    { "id" : 3, "name" : "Alexander", "age":22 }
-    ```
+```SQL
+CREATE ROUTINE LOAD demo.kafka_job20 ON routine_test20
+        PROPERTIES
+        (
+            "format" = "json"
+        )
+        FROM KAFKA
+        (
+            "kafka_broker_list" = "192.168.100.129:9092",
+            "kafka_topic" = "routineLoad21",
+            "property.security.protocol" = "ssl",
+            "property.ssl.ca.location" = "FILE:ca.pem",
+            "property.ssl.certificate.location" = "FILE:client.pem",
+            "property.ssl.key.location" = "FILE:client.key",
+            "property.ssl.key.password" = "ssl_passwd"
+        );  
+```
 
-2. Create table:
+Parameter descriptions:
 
-    ```sql
-    CREATE TABLE demo.routine_test21 (
-        id      INT            NOT NULL  COMMENT "id",
-        name    VARCHAR(30)    NOT NULL  COMMENT "name",
-        age     INT                      COMMENT "age"
-    )
-    DUPLICATE KEY(`id`)
-    DISTRIBUTED BY HASH(`id`) BUCKETS 1;
-    ```
+| Parameter                          | Description                             
                     |
+|------------------------------------|--------------------------------------------------------------|
+| property.security.protocol         | The security protocol used; SSL in this example              |
+| property.ssl.ca.location           | The location of the CA (Certificate 
Authority) certificate   |
+| property.ssl.certificate.location  | The location of the Client's public key 
(required if client authentication is enabled on the Kafka server) |
+| property.ssl.key.location          | The location of the Client's private 
key (required if client authentication is enabled on the Kafka server) |
+| property.ssl.key.password          | The password for the Client's private 
key (required if client authentication is enabled on the Kafka server) |
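
If the certificate files or key password change later, the job does not necessarily have to be recreated. Below is a sketch of one possible flow, assuming the `demo.kafka_job20` job from the example above and that the new files are readable by every BE (`new_ssl_passwd` is a placeholder value):

```sql
-- ALTER ROUTINE LOAD only works on a PAUSED job, so pause it first.
PAUSE ROUTINE LOAD FOR demo.kafka_job20;

-- Update the Kafka client properties carried by the job.
ALTER ROUTINE LOAD FOR demo.kafka_job20
FROM KAFKA
(
    "property.ssl.ca.location" = "FILE:ca.pem",
    "property.ssl.key.password" = "new_ssl_passwd"
);

RESUME ROUTINE LOAD FOR demo.kafka_job20;
```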
 
-3. Load command:
+**Loading Kafka Data with Kerberos Authentication**
 
-    ```sql
-    CREATE ROUTINE LOAD demo.kafka_job21 ON routine_test21
-    PROPERTIES
-    (
-        "format" = "json"
-    )
-    FROM KAFKA
-    (
-        "kafka_broker_list" = "192.168.100.129:9092",
-        "kafka_topic" = "routineLoad21",
-        "property.security.protocol" = "SASL_PLAINTEXT",
-        "property.sasl.kerberos.service.name" = "kafka",
-        "property.sasl.kerberos.keytab"="/path/to/kafka_client.keytab",
-        "property.sasl.kerberos.principal" = 
"clients/stream.dt.lo...@example.com"
-    );  
-    ```
-
-4. Load result:
-
-    ```sql
-    mysql> select * from routine_test21;
-    +------+----------------+------+
-    | id   | name           | age  |
-    +------+----------------+------+
-    |    1 | Benjamin       |   18 |
-    |    2 | Emily          |   20 |
-    |    3 | Alexander      |   22 |
-    +------+----------------+------+
-    3 rows in set (0.01 sec)
-    ```
+Example load command:
 
-**Loading Kafka data with PLAIN authentication in Kafka cluster**
-
-1. Loading sample data:
+```SQL
+CREATE ROUTINE LOAD demo.kafka_job21 ON routine_test21
+        PROPERTIES
+        (
+            "format" = "json"
+        )
+        FROM KAFKA
+        (
+            "kafka_broker_list" = "192.168.100.129:9092",
+            "kafka_topic" = "routineLoad21",
+            "property.security.protocol" = "SASL_PLAINTEXT",
+            "property.sasl.kerberos.service.name" = "kafka",
+            
"property.sasl.kerberos.keytab"="/opt/third/kafka/kerberos/kafka_client.keytab",
+            "property.sasl.kerberos.principal" = 
"clients/stream.dt.lo...@example.com"
+        );  
+```
 
-    ```sql
-    { "id" : 1, "name" : "Benjamin", "age":18 }
-    { "id" : 2, "name" : "Emily", "age":20 }
-    { "id" : 3, "name" : "Alexander", "age":22 }
-    ```
+Parameter descriptions:
 
-2. Create table:
+| Parameter                           | Description                            
                   |
+|-------------------------------------|-----------------------------------------------------------|
+| property.security.protocol          | The security protocol used; SASL_PLAINTEXT in this example |
+| property.sasl.kerberos.service.name | Specifies the broker service name; the default is kafka    |
+| property.sasl.kerberos.keytab       | The location of the keytab file                             |
+| property.sasl.kerberos.principal    | Specifies the Kerberos principal                            |
 
-    ```sql
-    CREATE TABLE demo.routine_test22 (
-        id      INT            NOT NULL  COMMENT "id",
-        name    VARCHAR(30)    NOT NULL  COMMENT "name",
-        age     INT                      COMMENT "age"
-    )
-    DUPLICATE KEY(`id`)
-    DISTRIBUTED BY HASH(`id`) BUCKETS 1;
-    ```
+**Loading Kafka Data with PLAIN Authentication**
 
-3. Load command:
+Example load command:
 
-    ```sql
-    CREATE ROUTINE LOAD demo.kafka_job22 ON routine_test22
-            PROPERTIES
-            (
-                "format" = "json"
-            )
-            FROM KAFKA
-            (
-                "kafka_broker_list" = "192.168.100.129:9092",
-                "kafka_topic" = "routineLoad22",
-                "property.security.protocol"="SASL_PLAINTEXT",
-                "property.sasl.mechanism"="PLAIN",
-                "property.sasl.username"="admin",
-                "property.sasl.password"="admin"
-            );  
-    ```
+```SQL
+CREATE ROUTINE LOAD demo.kafka_job22 ON routine_test22
+        PROPERTIES
+        (
+            "format" = "json"
+        )
+        FROM KAFKA
+        (
+            "kafka_broker_list" = "192.168.100.129:9092",
+            "kafka_topic" = "routineLoad22",
+            "property.security.protocol"="SASL_PLAINTEXT",
+            "property.sasl.mechanism"="PLAIN",
+            "property.sasl.username"="admin",
+            "property.sasl.password"="admin"
+        );  
+```
 
-4. Load result
+Parameter descriptions:
 
-    ```sql
-    mysql> select * from routine_test22;
-    +------+----------------+------+
-    | id   | name           | age  |
-    +------+----------------+------+
-    |    1 | Benjamin       |   18 |
-    |    2 | Emily          |   20 |
-    |    3 | Alexander      |   22 |
-    +------+----------------+------+
-    3 rows in set (0.02 sec)
-    ```
+| Parameter                          | Description                             
                  |
+|------------------------------------|-----------------------------------------------------------|
+| property.security.protocol         | The security protocol used; SASL_PLAINTEXT in this example |
+| property.sasl.mechanism           | Specifies the SASL authentication 
mechanism as PLAIN      |
+| property.sasl.username            | The username for SASL                    
                |
+| property.sasl.password            | The password for SASL                    
                |
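
For any of the three jobs above, a quick way to confirm that authentication succeeded is to check the job after creation; a minimal verification sketch, using the `demo.kafka_job22` job from the PLAIN example:

```sql
-- Shows the job state and progress; if the job was paused by an authentication
-- failure, ReasonOfStateChanged and ErrorLogUrls indicate why.
SHOW ROUTINE LOAD FOR demo.kafka_job22\G
```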
 
 ### Single-task Loading to Multiple Tables  
 
diff --git a/docs/data-operate/import/import-way/stream-load-manual.md 
b/docs/data-operate/import/import-way/stream-load-manual.md
index 3f16dc2c6ac..ce931c933df 100644
--- a/docs/data-operate/import/import-way/stream-load-manual.md
+++ b/docs/data-operate/import/import-way/stream-load-manual.md
@@ -378,48 +378,6 @@ The return result parameters are explained in the 
following table:
 
 Users can access the ErrorURL to review data that failed to import due to 
issues with data quality. By executing the command `curl "<ErrorURL>"`, users 
can directly retrieve information about the erroneous data.
 
-## Application of Table Value Function in Stream Load - http_stream Mode
-
-Leveraging the recently introduced functionality of Table Value Function (TVF) 
in Doris, Stream Load now allows the expression of import parameters through 
SQL statements. Specifically, a TVF named `http_stream` has been dedicated for 
Stream Load operations.
-
-:::tip
-
-When performing Stream Load using the TVF `http_stream`, the Rest API URL 
differs from the standard URL used for regular Stream Load imports.
-
-- Standard Stream Load URL:
-  `http://fe_host:http_port/api/{db}/{table}/_stream_load`
-- URL for Stream Load using TVF `http_stream`:
-  `http://fe_host:http_port/api/_http_stream`
-
-:::
-
-Using curl for Stream Load in http_stream Mode:
-
-```shell
-curl --location-trusted -u user:passwd [-H "sql: ${load_sql}"...] -T data.file 
-XPUT http://fe_host:http_port/api/_http_stream
-```
-
-Adding a SQL parameter in the header to replace the previous parameters such 
as `column_separator`, `line_delimiter`, `where`, `columns`, etc., makes it 
very convenient to use.
-
-Example of load SQL:
-
-```shell
-insert into db.table (col, ...) select stream_col, ... from 
http_stream("property1"="value1");
-```
-
-http_stream parameter:
-
-- "column_separator" = ","
-
-- "format" = "CSV"
-- ...
-
-For example:
-
-```Plain
-curl  --location-trusted -u root: -T test.csv  -H "sql:insert into 
demo.example_tbl_1(user_id, age, cost) select c1, c4, c7 * 2 from 
http_stream(\"format\" = \"CSV\", \"column_separator\" = \",\" ) where age >= 
30"  http://127.0.0.1:28030/api/_http_stream
-```
-
 ## Load example
 
 ### Setting load timeout and maximum size
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/routine-load-manual.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/routine-load-manual.md
index 368bd4fcfa1..013e6a72799 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/routine-load-manual.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/routine-load-manual.md
@@ -323,65 +323,20 @@ FROM KAFKA [data_source_properties]
 
 **01 FE 配置参数**
 
-**max_routine_load_task_concurrent_num**
-
-- 默认值:256
-
-- 动态配置:是
-
-- FE Master 独有配置:是
-
-- 参数描述:限制 Routine Load 的导入作业最大子并发数量。建议维持在默认值。如果设置过大,可能导致并发任务数过多,占用集群资源。
-
-**max_routine_load_task_num_per_be**
-
-- 默认值:1024
-
-- 动态配置:是
-
-- FE Master 独有配置:是
-
-- 参数描述:每个 BE 限制的最大并发 Routine Load 任务数。`max_routine_load_task_num_per_be` 应该小 
`routine_load_thread_pool_size` 于参数。
-
-**max_routine_load_job_num**
-
-- 默认值:100
-
-- 动态配置:是
-
-- FE Master 独有配置:是
-
-- 参数描述:限制最大 Routine Load 作业数,包括 NEED_SCHEDULED,RUNNING,PAUSE
-
-**max_tolerable_backend_down_num**
-
-- 默认值:0
-
-- 动态配置:是
-
-- FE Master 独有配置:是
-
-- 参数描述:只要有一个 BE 宕机,Routine Load 就无法自动恢复。在满足某些条件时,Doris 可以将 PAUSED 的任务重新调度,转换为 
RUNNING 状态。该参数为 0 表示只有所有 BE 节点都是 alive 状态踩允许重新调度。
-
-**period_of_auto_resume_min**
-
-- 默认值:5(分钟)
-
-- 动态配置:是
-
-- FE Master 独有配置:是
-
-- 参数描述:自动恢复 Routine Load 的周期
+| 参数名称                          | 默认值 | 动态配置 | FE Master 独有配置 | 参数描述           
                                                                          |
+|-----------------------------------|--------|----------|---------------------|----------------------------------------------------------------------------------------------|
+| max_routine_load_task_concurrent_num | 256    | 是       | 是                  
| 限制 Routine Load 的导入作业最大子并发数量。建议维持在默认值。如果设置过大,可能导致并发任务数过多,占用集群资源。 |
+| max_routine_load_task_num_per_be  | 1024   | 是       | 是                  | 
每个 BE 限制的最大并发 Routine Load 任务数。`max_routine_load_task_num_per_be` 应该小于 
`routine_load_thread_pool_size`。 |
+| max_routine_load_job_num           | 100    | 是       | 是                  | 
限制最大 Routine Load 作业数,包括 NEED_SCHEDULED,RUNNING,PAUSE。                        |
+| max_tolerable_backend_down_num     | 0      | 是       | 是                  | 
只要有一个 BE 宕机,Routine Load 就无法自动恢复。在满足某些条件时,Doris 可以将 PAUSED 的任务重新调度,转换为 RUNNING 
状态。该参数为 0 表示只有所有 BE 节点都是 alive 状态时允许重新调度。 |
+| period_of_auto_resume_min          | 5(分钟) | 是       | 是                  | 
自动恢复 Routine Load 的周期。                                                          
     |
 
 **02 BE 配置参数**
 
-**max_consumer_num_per_group**
 
-- 默认值:3
-
-- 动态配置:是
-
-- 描述:一个子任务重最多生成几个 consumer 消费数据。对于 Kafka 数据源,一个 consumer 可能消费一个或多个 Kafka 
Partition。假设一个任务需要消费 6 个 Kafka Partitio,则会生成 3 个 consumer,每个 consumer 消费 2 个 
partition。如果只有 2 个 partition,则只会生成 2 个 consumer,每个 consumer 消费 1 个 partition。
+| 参数名称                     | 默认值 | 动态配置 | 描述                                                                                                           |
+|------------------------------|--------|----------|----------------------------------------------------------------------------------------------------------------|
+| max_consumer_num_per_group   | 3      | 是       | 一个子任务中最多生成几个 consumer 消费数据。对于 Kafka 数据源,一个 consumer 可能消费一个或多个 Kafka Partition。假设一个任务需要消费 6 个 Kafka Partition,则会生成 3 个 consumer,每个 consumer 消费 2 个 partition;如果只有 2 个 partition,则只会生成 2 个 consumer,每个 consumer 消费 1 个 partition。 |
 
 **03 导入配置参数**
 
@@ -1648,169 +1603,95 @@ mysql> SELECT * FROM routine_test08;
 
 **导入 SSL 认证的 Kafka 数据**
 
-1. 导入数据样例
-
-    ```sql
-    { "id" : 1, "name" : "Benjamin", "age":18 }
-    { "id" : 2, "name" : "Emily", "age":20 }
-    { "id" : 3, "name" : "Alexander", "age":22 }
-    ```
-
-2. 建表结构
-
-    ```sql
-    CREATE TABLE demo.routine_test20 (
-        id      INT            NOT NULL  COMMENT "id",
-        name    VARCHAR(30)    NOT NULL  COMMENT "name",
-        age     INT                      COMMENT "age"
-    )
-    DUPLICATE KEY(`id`)
-    DISTRIBUTED BY HASH(`id`) BUCKETS 1;
-    ```
-
-3. 导入命令
+导入命令样例:
 
-    ```sql
-    CREATE ROUTINE LOAD demo.kafka_job20 ON routine_test20
-            PROPERTIES
-            (
-                "format" = "json"
-            )
-            FROM KAFKA
-            (
-                "kafka_broker_list" = "192.168.100.129:9092",
-                "kafka_topic" = "routineLoad21",
-                "property.security.protocol" = "ssl",
-                "property.ssl.ca.location" = "FILE:ca.pem",
-                "property.ssl.certificate.location" = "FILE:client.pem",
-                "property.ssl.key.location" = "FILE:client.key",
-                "property.ssl.key.password" = "ssl_passwd"
-            );  
-    ```
+```SQL
+CREATE ROUTINE LOAD demo.kafka_job20 ON routine_test20
+        PROPERTIES
+        (
+            "format" = "json"
+        )
+        FROM KAFKA
+        (
+            "kafka_broker_list" = "192.168.100.129:9092",
+            "kafka_topic" = "routineLoad21",
+            "property.security.protocol" = "ssl",
+            "property.ssl.ca.location" = "FILE:ca.pem",
+            "property.ssl.certificate.location" = "FILE:client.pem",
+            "property.ssl.key.location" = "FILE:client.key",
+            "property.ssl.key.password" = "ssl_passwd"
+        );  
+```
 
-4. 导入结果
+参数说明:
 
-    ```sql
-    mysql> select * from routine_test20;
-    +------+----------------+------+
-    | id   | name           | age  |
-    +------+----------------+------+
-    |    1 | Benjamin       |   18 |
-    |    2 | Emily          |   20 |
-    |    3 | Alexander      |   22 |
-    +------+----------------+------+
-    3 rows in set (0.01 sec)
-    ```
+| 参数                              | 介绍                                         
                |
+| --------------------------------- | 
------------------------------------------------------------ |
+| property.security.protocol        | 使用的安全协议,如上述的例子使用的是 SSL                   
  |
+| property.ssl.ca.location          | CA(Certificate Authority)证书的位置           
             |
+| property.ssl.certificate.location | (如果 Kafka server 端开启了 client 
认证才需要配置)Client 的 public key 的位置 |
+| property.ssl.key.location         | (如果 Kafka server 端开启了 client 
认证才需要配置)Client 的 private key 的位置 |
+| property.ssl.key.password         | (如果 Kafka server 端开启了 client 
认证才需要配置)Client 的 private key 的密码 |
 
 **导入 Kerberos 认证的 Kafka 数据**
 
-1. 导入数据样例
-
-    ```sql
-    { "id" : 1, "name" : "Benjamin", "age":18 }
-    { "id" : 2, "name" : "Emily", "age":20 }
-    { "id" : 3, "name" : "Alexander", "age":22 }
-    ```
-
-2. 建表结构
+导入命令样例:
 
-    ```sql
-    CREATE TABLE demo.routine_test21 (
-        id      INT            NOT NULL  COMMENT "id",
-        name    VARCHAR(30)    NOT NULL  COMMENT "name",
-        age     INT                      COMMENT "age"
-    )
-    DUPLICATE KEY(`id`)
-    DISTRIBUTED BY HASH(`id`) BUCKETS 1;
-    ```
-
-3. 导入命令
-
-    ```sql
-    CREATE ROUTINE LOAD demo.kafka_job21 ON routine_test21
-    PROPERTIES
-    (
-        "format" = "json"
-    )
-    FROM KAFKA
-    (
-        "kafka_broker_list" = "192.168.100.129:9092",
-        "kafka_topic" = "routineLoad21",
-        "property.security.protocol" = "SASL_PLAINTEXT",
-        "property.sasl.kerberos.service.name" = "kafka",
-        "property.sasl.kerberos.keytab"="/path/to/kafka_client.keytab",
-        "property.sasl.kerberos.principal" = 
"clients/stream.dt.lo...@example.com"
-    );  
-    ```
-
-4. 导入结果
-
-    ```sql
-    mysql> select * from routine_test21;
-    +------+----------------+------+
-    | id   | name           | age  |
-    +------+----------------+------+
-    |    1 | Benjamin       |   18 |
-    |    2 | Emily          |   20 |
-    |    3 | Alexander      |   22 |
-    +------+----------------+------+
-    3 rows in set (0.01 sec)
-    ```
-
-**导入 PLAIN 认证的 Kafka 集群**
-
-1. 导入数据样例
+```SQL
+CREATE ROUTINE LOAD demo.kafka_job21 ON routine_test21
+        PROPERTIES
+        (
+            "format" = "json"
+        )
+        FROM KAFKA
+        (
+            "kafka_broker_list" = "192.168.100.129:9092",
+            "kafka_topic" = "routineLoad21",
+            "property.security.protocol" = "SASL_PLAINTEXT",
+            "property.sasl.kerberos.service.name" = "kafka",
+            
"property.sasl.kerberos.keytab"="/opt/third/kafka/kerberos/kafka_client.keytab",
+            "property.sasl.kerberos.principal" = 
"clients/stream.dt.lo...@example.com"
+        );  
+```
 
-    ```sql
-    { "id" : 1, "name" : "Benjamin", "age":18 }
-    { "id" : 2, "name" : "Emily", "age":20 }
-    { "id" : 3, "name" : "Alexander", "age":22 }
-    ```
+参数说明:
 
-2. 建表结构
+| 参数                                | 介绍                                       
         |
+| ----------------------------------- | 
--------------------------------------------------- |
+| property.security.protocol          | 使用的安全协议,如上述的例子使用的是 SASL_PLAINTEXT |
+| property.sasl.kerberos.service.name | 指定 broker service name,默认是 Kafka       
       |
+| property.sasl.kerberos.keytab       | keytab 文件的位置                           
        |
+| property.sasl.kerberos.principal    | 指定 kerberos principal                  
           |
 
-    ```sql
-    CREATE TABLE demo.routine_test22 (
-        id      INT            NOT NULL  COMMENT "id",
-        name    VARCHAR(30)    NOT NULL  COMMENT "name",
-        age     INT                      COMMENT "age"
-    )
-    DUPLICATE KEY(`id`)
-    DISTRIBUTED BY HASH(`id`) BUCKETS 1;
-    ```
+**导入 PLAIN 认证的 Kafka 集群**
 
-3. 导入命令
+导入命令样例:
 
-    ```sql
-    CREATE ROUTINE LOAD demo.kafka_job22 ON routine_test22
-            PROPERTIES
-            (
-                "format" = "json"
-            )
-            FROM KAFKA
-            (
-                "kafka_broker_list" = "192.168.100.129:9092",
-                "kafka_topic" = "routineLoad22",
-                "property.security.protocol"="SASL_PLAINTEXT",
-                "property.sasl.mechanism"="PLAIN",
-                "property.sasl.username"="admin",
-                "property.sasl.password"="admin"
-            );  
-    ```
+```SQL
+CREATE ROUTINE LOAD demo.kafka_job22 ON routine_test22
+        PROPERTIES
+        (
+            "format" = "json"
+        )
+        FROM KAFKA
+        (
+            "kafka_broker_list" = "192.168.100.129:9092",
+            "kafka_topic" = "routineLoad22",
+            "property.security.protocol"="SASL_PLAINTEXT",
+            "property.sasl.mechanism"="PLAIN",
+            "property.sasl.username"="admin",
+            "property.sasl.password"="admin"
+        );  
+```
 
-4. 导入结果
+参数说明:
 
-    ```sql
-    mysql> select * from routine_test22;
-    +------+----------------+------+
-    | id   | name           | age  |
-    +------+----------------+------+
-    |    1 | Benjamin       |   18 |
-    |    2 | Emily          |   20 |
-    |    3 | Alexander      |   22 |
-    +------+----------------+------+
-    3 rows in set (0.02 sec)
-    ```
+| 参数                       | 介绍                                                
|
+| -------------------------- | 
--------------------------------------------------- |
+| property.security.protocol | 使用的安全协议,如上述的例子使用的是 SASL_PLAINTEXT |
+| property.sasl.mechanism    | 指定 SASL 认证机制为 PLAIN                          |
+| property.sasl.username     | SASL 的用户名                                       
|
+| property.sasl.password     | SASL 的密码                                        
 |
 
 ### 一流多表导入
 
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/stream-load-manual.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/stream-load-manual.md
index 7f04752770d..dbad10d0f36 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/stream-load-manual.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/stream-load-manual.md
@@ -359,7 +359,11 @@ Stream Load 是一种同步的导入方式,导入结果会通过创建导入
 | ---------------------- | 
------------------------------------------------------------ |
 | TxnId                  | 导入事务的 ID                                            
    |
 | Label                  | 导入作业的 label,通过 -H "label:<label_id>" 指定            |
-| Status                 | 导入的最终状态 Success:表示导入成功 Publish 
Timeout:该状态也表示导入已经完成,只是数据可能会延迟可见,无需重试 Label Already Exists:Label 重复,需要更换 
labelFail:导入失败 |
+| Status                 | 导入的最终状态                                             
 |
+|                        | - Success:表示导入成功                                    
 |
+|                        | - Publish Timeout:该状态也表示导入已经完成,但数据可能会延迟可见,无需重试 |
+|                        | - Label Already Exists:Label 重复,需要更换 label         |
+|                        | - Fail:导入失败                                         
   |
 | ExistingJobStatus      | 已存在的 Label 对应的导入作业的状态。这个字段只有在当 Status 为 "Label 
Already Exists" 时才会显示。用户可以通过这个状态,知晓已存在 Label 对应的导入作业的状态。"RUNNING" 
表示作业还在执行,"FINISHED" 表示作业成功。 |
 | Message                | 导入错误信息                                              
   |
 | NumberTotalRows        | 导入总处理的行数                                            
 |
@@ -377,49 +381,6 @@ Stream Load 是一种同步的导入方式,导入结果会通过创建导入
 
 通过 ErrorURL 可以查看因为数据质量不佳导致的导入失败数据。使用命令 `curl "<ErrorURL>"` 命令直接查看错误数据的信息。
 
-## TVF 在 Stream Load 中的应用 - http_stream 模式
-
-依托 Doris 最新引入的 Table Value Function(TVF)的功能,在 Stream Load 中,可以通过使用 SQL 
表达式来表达导入的参数。这个专门为 Stream Load 提供的 TVF 为 http_stream。
-
-:::caution
-注意
-
-使用 TVF http_stream 进行 Stream Load 导入时的 Rest API URL 不同于 Stream Load 普通导入的 URL。
-
-- 普通导入的 URL 为:
-  
-    http://fe_host:http_port/api/{db}/{table}/_stream_load
-
-- 使用 TVF http_stream 导入的 URL 为:
-
-    http://fe_host:http_port/api/_http_stream
-:::
-
-使用 `curl` 来使用 Stream Load 的 http stream 模式:
-```shell
-curl --location-trusted -u user:passwd [-H "sql: ${load_sql}"...] -T data.file 
-XPUT http://fe_host:http_port/api/_http_stream
-```
-
-在 Header 
中添加一个`sql`的参数,去替代之前参数中的`column_separator`、`line_delimiter`、`where`、`columns`等参数,使用起来非常方便。
-
-load_sql 举例:
-
-```shell
-insert into db.table (col, ...) select stream_col, ... from 
http_stream("property1"="value1");
-```
-
-http_stream 支持的参数:
-
-"column_separator" = ",", "format" = "CSV",
-
-...
-
-示例:
-
-```Plain
-curl  --location-trusted -u root: -T test.csv  -H "sql:insert into 
demo.example_tbl_1(user_id, age, cost) select c1, c4, c7 * 2 from 
http_stream(\"format\" = \"CSV\", \"column_separator\" = \",\" ) where age >= 
30"  http://127.0.0.1:28030/api/_http_stream
-```
-
 ## 导入举例
 
 ### 设置导入超时时间与最大导入
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/doris-streamloader.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/doris-streamloader.md
index 09912965841..9c8b5d21b90 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/doris-streamloader.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/doris-streamloader.md
@@ -93,12 +93,6 @@ doris-streamloader --source_file={FILE_LIST} 
--url={FE_OR_BE_SERVER_URL}:{PORT}
   doris-streamloader --source_file="dir1,dir2,dir3" 
--url="http://localhost:8330"; --header="column_separator:|?columns:col1,col2" 
--db="testdb" --table="testtbl" 
    ```
 
-:::tip 
-当需要多个文件导入时,使用 Doris Streamloader 也只会产生一个版本号 
-:::
-
-
-
 **2. `STREAMLOAD_HEADER` 支持 Stream Load 的所有参数,多个参数之间用  '?' 分隔。**
 
 用法举例:
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/ecosystem/doris-streamloader.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/ecosystem/doris-streamloader.md
index 95b7ab11e84..d40078b9018 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/ecosystem/doris-streamloader.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/ecosystem/doris-streamloader.md
@@ -93,12 +93,6 @@ doris-streamloader --source_file={FILE_LIST} 
--url={FE_OR_BE_SERVER_URL}:{PORT}
   doris-streamloader --source_file="dir1,dir2,dir3" 
--url="http://localhost:8330"; --header="column_separator:|?columns:col1,col2" 
--db="testdb" --table="testtbl" 
    ```
 
-:::tip 
-当需要多个文件导入时,使用 Doris Streamloader 也只会产生一个版本号 
-:::
-
-
-
 **2.** `STREAMLOAD_HEADER` **支持 Stream Load 的所有参数,多个参数之间用  '?' 分隔。**
 
 用法举例:
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/data-operate/import/stream-load-manual.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/data-operate/import/stream-load-manual.md
index 35d8925ee75..4eeb718b20c 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/data-operate/import/stream-load-manual.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/data-operate/import/stream-load-manual.md
@@ -373,7 +373,11 @@ Stream Load 是一种同步的导入方式,导入结果会通过创建导入
 | ---------------------- | 
------------------------------------------------------------ |
 | TxnId                  | 导入事务的 ID                                            
    |
 | Label                  | 导入作业的 label,通过 -H "label:<label_id>" 指定            |
-| Status                 | 导入的最终状态 Success:表示导入成功 Publish 
Timeout:该状态也表示导入已经完成,只是数据可能会延迟可见,无需重试 Label Already Exists:Label 重复,需要更换 
labelFail:导入失败 |
+| Status                 | 导入的最终状态                                             
 |
+|                        | - Success:表示导入成功                                    
 |
+|                        | - Publish Timeout:该状态也表示导入已经完成,但数据可能会延迟可见,无需重试 |
+|                        | - Label Already Exists:Label 重复,需要更换 label         |
+|                        | - Fail:导入失败                                         
   |
 | ExistingJobStatus      | 已存在的 Label 对应的导入作业的状态。这个字段只有在当 Status 为 "Label 
Already Exists" 时才会显示。用户可以通过这个状态,知晓已存在 Label 对应的导入作业的状态。"RUNNING" 
表示作业还在执行,"FINISHED" 表示作业成功。 |
 | Message                | 导入错误信息                                              
   |
 | NumberTotalRows        | 导入总处理的行数                                            
 |
@@ -391,49 +395,6 @@ Stream Load 是一种同步的导入方式,导入结果会通过创建导入
 
 通过 ErrorURL 可以查看因为数据质量不佳导致的导入失败数据。使用命令 `curl "<ErrorURL>"` 命令直接查看错误数据的信息。
 
-## TVF 在 Stream Load 中的应用 - http_stream 模式
-
-依托 Doris 最新引入的 Table Value Function(TVF)的功能,在 Stream Load 中,可以通过使用 SQL 
表达式来表达导入的参数。这个专门为 Stream Load 提供的 TVF 为 http_stream。
-
-:::caution
-注意
-
-使用 TVF http_stream 进行 Stream Load 导入时的 Rest API URL 不同于 Stream Load 普通导入的 URL。
-
-- 普通导入的 URL 为:
-  
-    http://fe_host:http_port/api/{db}/{table}/_stream_load
-
-- 使用 TVF http_stream 导入的 URL 为:
-
-    http://fe_host:http_port/api/_http_stream
-:::
-
-使用 `curl` 来使用 Stream Load 的 http stream 模式:
-```shell
-curl --location-trusted -u user:passwd [-H "sql: ${load_sql}"...] -T data.file 
-XPUT http://fe_host:http_port/api/_http_stream
-```
-
-在 Header 
中添加一个`sql`的参数,去替代之前参数中的`column_separator`、`line_delimiter`、`where`、`columns`等参数,使用起来非常方便。
-
-load_sql 举例:
-
-```shell
-insert into db.table (col, ...) select stream_col, ... from 
http_stream("property1"="value1");
-```
-
-http_stream 支持的参数:
-
-"column_separator" = ",", "format" = "CSV",
-
-...
-
-示例:
-
-```Plain
-curl  --location-trusted -u root: -T test.csv  -H "sql:insert into 
demo.example_tbl_1(user_id, age, cost) select c1, c4, c7 * 2 from 
http_stream(\"format\" = \"CSV\", \"column_separator\" = \",\" ) where age >= 
30"  http://127.0.0.1:28030/api/_http_stream
-```
-
 ## 导入举例
 
 ### 设置导入超时时间与最大导入
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/ecosystem/doris-streamloader.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/ecosystem/doris-streamloader.md
index e083e3686b5..b001ccfe5c3 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/ecosystem/doris-streamloader.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/ecosystem/doris-streamloader.md
@@ -93,12 +93,6 @@ doris-streamloader --source_file={FILE_LIST} 
--url={FE_OR_BE_SERVER_URL}:{PORT}
   doris-streamloader --source_file="dir1,dir2,dir3" 
--url="http://localhost:8330"; --header="column_separator:|?columns:col1,col2" 
--db="testdb" --table="testtbl" 
    ```
 
-:::tip 
-当需要多个文件导入时,使用 Doris Streamloader 也只会产生一个版本号 
-:::
-
-
-
 **2.** `STREAMLOAD_HEADER` **支持 Stream Load 的所有参数,多个参数之间用  '?' 分隔。**
 
 用法举例:
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/import/import-way/routine-load-manual.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/import/import-way/routine-load-manual.md
index e7049748191..b2c37831c7c 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/import/import-way/routine-load-manual.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/import/import-way/routine-load-manual.md
@@ -323,65 +323,20 @@ FROM KAFKA [data_source_properties]
 
 **01 FE 配置参数**
 
-**max_routine_load_task_concurrent_num**
-
-- 默认值:256
-
-- 动态配置:是
-
-- FE Master 独有配置:是
-
-- 参数描述:限制 Routine Load 的导入作业最大子并发数量。建议维持在默认值。如果设置过大,可能导致并发任务数过多,占用集群资源。
-
-**max_routine_load_task_num_per_be**
-
-- 默认值:1024
-
-- 动态配置:是
-
-- FE Master 独有配置:是
-
-- 参数描述:每个 BE 限制的最大并发 Routine Load 任务数。`max_routine_load_task_num_per_be` 应该小 
`routine_load_thread_pool_size` 于参数。
-
-**max_routine_load_job_num**
-
-- 默认值:100
-
-- 动态配置:是
-
-- FE Master 独有配置:是
-
-- 参数描述:限制最大 Routine Load 作业数,包括 NEED_SCHEDULED,RUNNING,PAUSE
-
-**max_tolerable_backend_down_num**
-
-- 默认值:0
-
-- 动态配置:是
-
-- FE Master 独有配置:是
-
-- 参数描述:只要有一个 BE 宕机,Routine Load 就无法自动恢复。在满足某些条件时,Doris 可以将 PAUSED 的任务重新调度,转换为 
RUNNING 状态。该参数为 0 表示只有所有 BE 节点都是 alive 状态踩允许重新调度。
-
-**period_of_auto_resume_min**
-
-- 默认值:5(分钟)
-
-- 动态配置:是
-
-- FE Master 独有配置:是
-
-- 参数描述:自动恢复 Routine Load 的周期
+| 参数名称                          | 默认值 | 动态配置 | FE Master 独有配置 | 参数描述           
                                                                          |
+|-----------------------------------|--------|----------|---------------------|----------------------------------------------------------------------------------------------|
+| max_routine_load_task_concurrent_num | 256    | 是       | 是                  
| 限制 Routine Load 的导入作业最大子并发数量。建议维持在默认值。如果设置过大,可能导致并发任务数过多,占用集群资源。 |
+| max_routine_load_task_num_per_be  | 1024   | 是       | 是                  | 
每个 BE 限制的最大并发 Routine Load 任务数。`max_routine_load_task_num_per_be` 应该小于 
`routine_load_thread_pool_size`。 |
+| max_routine_load_job_num           | 100    | 是       | 是                  | 
限制最大 Routine Load 作业数,包括 NEED_SCHEDULED,RUNNING,PAUSE。                        |
+| max_tolerable_backend_down_num     | 0      | 是       | 是                  | 
只要有一个 BE 宕机,Routine Load 就无法自动恢复。在满足某些条件时,Doris 可以将 PAUSED 的任务重新调度,转换为 RUNNING 
状态。该参数为 0 表示只有所有 BE 节点都是 alive 状态时允许重新调度。 |
+| period_of_auto_resume_min          | 5(分钟) | 是       | 是                  | 
自动恢复 Routine Load 的周期。                                                          
     |
 
 **02 BE 配置参数**
 
-**max_consumer_num_per_group**
 
-- 默认值:3
-
-- 动态配置:是
-
-- 描述:一个子任务重最多生成几个 consumer 消费数据。对于 Kafka 数据源,一个 consumer 可能消费一个或多个 Kafka 
Partition。假设一个任务需要消费 6 个 Kafka Partitio,则会生成 3 个 consumer,每个 consumer 消费 2 个 
partition。如果只有 2 个 partition,则只会生成 2 个 consumer,每个 consumer 消费 1 个 partition。
+| 参数名称                     | 默认值 | 动态配置 | 描述                                                                                                           |
+|------------------------------|--------|----------|----------------------------------------------------------------------------------------------------------------|
+| max_consumer_num_per_group   | 3      | 是       | 一个子任务中最多生成几个 consumer 消费数据。对于 Kafka 数据源,一个 consumer 可能消费一个或多个 Kafka Partition。假设一个任务需要消费 6 个 Kafka Partition,则会生成 3 个 consumer,每个 consumer 消费 2 个 partition;如果只有 2 个 partition,则只会生成 2 个 consumer,每个 consumer 消费 1 个 partition。 |
 
 **03 导入配置参数**
 
@@ -1648,169 +1603,95 @@ mysql> SELECT * FROM routine_test08;
 
 **导入 SSL 认证的 Kafka 数据**
 
-1. 导入数据样例
-
-    ```sql
-    { "id" : 1, "name" : "Benjamin", "age":18 }
-    { "id" : 2, "name" : "Emily", "age":20 }
-    { "id" : 3, "name" : "Alexander", "age":22 }
-    ```
-
-2. 建表结构
-
-    ```sql
-    CREATE TABLE demo.routine_test20 (
-        id      INT            NOT NULL  COMMENT "id",
-        name    VARCHAR(30)    NOT NULL  COMMENT "name",
-        age     INT                      COMMENT "age"
-    )
-    DUPLICATE KEY(`id`)
-    DISTRIBUTED BY HASH(`id`) BUCKETS 1;
-    ```
-
-3. 导入命令
+导入命令样例:
 
-    ```sql
-    CREATE ROUTINE LOAD demo.kafka_job20 ON routine_test20
-            PROPERTIES
-            (
-                "format" = "json"
-            )
-            FROM KAFKA
-            (
-                "kafka_broker_list" = "192.168.100.129:9092",
-                "kafka_topic" = "routineLoad21",
-                "property.security.protocol" = "ssl",
-                "property.ssl.ca.location" = "FILE:ca.pem",
-                "property.ssl.certificate.location" = "FILE:client.pem",
-                "property.ssl.key.location" = "FILE:client.key",
-                "property.ssl.key.password" = "ssl_passwd"
-            );  
-    ```
+```SQL
+CREATE ROUTINE LOAD demo.kafka_job20 ON routine_test20
+        PROPERTIES
+        (
+            "format" = "json"
+        )
+        FROM KAFKA
+        (
+            "kafka_broker_list" = "192.168.100.129:9092",
+            "kafka_topic" = "routineLoad21",
+            "property.security.protocol" = "ssl",
+            "property.ssl.ca.location" = "FILE:ca.pem",
+            "property.ssl.certificate.location" = "FILE:client.pem",
+            "property.ssl.key.location" = "FILE:client.key",
+            "property.ssl.key.password" = "ssl_passwd"
+        );  
+```
 
-4. 导入结果
+参数说明:
 
-    ```sql
-    mysql> select * from routine_test20;
-    +------+----------------+------+
-    | id   | name           | age  |
-    +------+----------------+------+
-    |    1 | Benjamin       |   18 |
-    |    2 | Emily          |   20 |
-    |    3 | Alexander      |   22 |
-    +------+----------------+------+
-    3 rows in set (0.01 sec)
-    ```
+| 参数                              | 介绍                                         
                |
+| --------------------------------- | 
------------------------------------------------------------ |
+| property.security.protocol        | 使用的安全协议,如上述的例子使用的是 SSL                   
  |
+| property.ssl.ca.location          | CA(Certificate Authority)证书的位置           
             |
+| property.ssl.certificate.location | (如果 Kafka server 端开启了 client 
认证才需要配置)Client 的 public key 的位置 |
+| property.ssl.key.location         | (如果 Kafka server 端开启了 client 
认证才需要配置)Client 的 private key 的位置 |
+| property.ssl.key.password         | (如果 Kafka server 端开启了 client 
认证才需要配置)Client 的 private key 的密码 |
 
 **导入 Kerberos 认证的 Kafka 数据**
 
-1. 导入数据样例
-
-    ```sql
-    { "id" : 1, "name" : "Benjamin", "age":18 }
-    { "id" : 2, "name" : "Emily", "age":20 }
-    { "id" : 3, "name" : "Alexander", "age":22 }
-    ```
-
-2. 建表结构
+导入命令样例:
 
-    ```sql
-    CREATE TABLE demo.routine_test21 (
-        id      INT            NOT NULL  COMMENT "id",
-        name    VARCHAR(30)    NOT NULL  COMMENT "name",
-        age     INT                      COMMENT "age"
-    )
-    DUPLICATE KEY(`id`)
-    DISTRIBUTED BY HASH(`id`) BUCKETS 1;
-    ```
-
-3. 导入命令
-
-    ```sql
-    CREATE ROUTINE LOAD demo.kafka_job21 ON routine_test21
-    PROPERTIES
-    (
-        "format" = "json"
-    )
-    FROM KAFKA
-    (
-        "kafka_broker_list" = "192.168.100.129:9092",
-        "kafka_topic" = "routineLoad21",
-        "property.security.protocol" = "SASL_PLAINTEXT",
-        "property.sasl.kerberos.service.name" = "kafka",
-        "property.sasl.kerberos.keytab"="/path/to/kafka_client.keytab",
-        "property.sasl.kerberos.principal" = 
"clients/stream.dt.lo...@example.com"
-    );  
-    ```
-
-4. 导入结果
-
-    ```sql
-    mysql> select * from routine_test21;
-    +------+----------------+------+
-    | id   | name           | age  |
-    +------+----------------+------+
-    |    1 | Benjamin       |   18 |
-    |    2 | Emily          |   20 |
-    |    3 | Alexander      |   22 |
-    +------+----------------+------+
-    3 rows in set (0.01 sec)
-    ```
-
-**导入 PLAIN 认证的 Kafka 集群**
-
-1. 导入数据样例
+```SQL
+CREATE ROUTINE LOAD demo.kafka_job21 ON routine_test21
+        PROPERTIES
+        (
+            "format" = "json"
+        )
+        FROM KAFKA
+        (
+            "kafka_broker_list" = "192.168.100.129:9092",
+            "kafka_topic" = "routineLoad21",
+            "property.security.protocol" = "SASL_PLAINTEXT",
+            "property.sasl.kerberos.service.name" = "kafka",
+            
"property.sasl.kerberos.keytab"="/opt/third/kafka/kerberos/kafka_client.keytab",
+            "property.sasl.kerberos.principal" = 
"clients/stream.dt.lo...@example.com"
+        );  
+```
 
-    ```sql
-    { "id" : 1, "name" : "Benjamin", "age":18 }
-    { "id" : 2, "name" : "Emily", "age":20 }
-    { "id" : 3, "name" : "Alexander", "age":22 }
-    ```
+参数说明:
 
-2. 建表结构
+| 参数                                | 介绍                                       
         |
+| ----------------------------------- | 
--------------------------------------------------- |
+| property.security.protocol          | 使用的安全协议,如上述的例子使用的是 SASL_PLAINTEXT |
+| property.sasl.kerberos.service.name | 指定 broker service name,默认是 Kafka       
       |
+| property.sasl.kerberos.keytab       | keytab 文件的位置                           
        |
+| property.sasl.kerberos.principal    | 指定 kerberos principal                  
           |
 
-    ```sql
-    CREATE TABLE demo.routine_test22 (
-        id      INT            NOT NULL  COMMENT "id",
-        name    VARCHAR(30)    NOT NULL  COMMENT "name",
-        age     INT                      COMMENT "age"
-    )
-    DUPLICATE KEY(`id`)
-    DISTRIBUTED BY HASH(`id`) BUCKETS 1;
-    ```
+**导入 PLAIN 认证的 Kafka 集群**
 
-3. 导入命令
+导入命令样例:
 
-    ```sql
-    CREATE ROUTINE LOAD demo.kafka_job22 ON routine_test22
-            PROPERTIES
-            (
-                "format" = "json"
-            )
-            FROM KAFKA
-            (
-                "kafka_broker_list" = "192.168.100.129:9092",
-                "kafka_topic" = "routineLoad22",
-                "property.security.protocol"="SASL_PLAINTEXT",
-                "property.sasl.mechanism"="PLAIN",
-                "property.sasl.username"="admin",
-                "property.sasl.password"="admin"
-            );  
-    ```
+```SQL
+CREATE ROUTINE LOAD demo.kafka_job22 ON routine_test22
+        PROPERTIES
+        (
+            "format" = "json"
+        )
+        FROM KAFKA
+        (
+            "kafka_broker_list" = "192.168.100.129:9092",
+            "kafka_topic" = "routineLoad22",
+            "property.security.protocol"="SASL_PLAINTEXT",
+            "property.sasl.mechanism"="PLAIN",
+            "property.sasl.username"="admin",
+            "property.sasl.password"="admin"
+        );  
+```
 
-4. 导入结果
+参数说明:
 
-    ```sql
-    mysql> select * from routine_test22;
-    +------+----------------+------+
-    | id   | name           | age  |
-    +------+----------------+------+
-    |    1 | Benjamin       |   18 |
-    |    2 | Emily          |   20 |
-    |    3 | Alexander      |   22 |
-    +------+----------------+------+
-    3 rows in set (0.02 sec)
-    ```
+| 参数                       | 介绍                                                
|
+| -------------------------- | 
--------------------------------------------------- |
+| property.security.protocol | 使用的安全协议,如上述的例子使用的是 SASL_PLAINTEXT |
+| property.sasl.mechanism    | 指定 SASL 认证机制为 PLAIN                          |
+| property.sasl.username     | SASL 的用户名                                       
|
+| property.sasl.password     | SASL 的密码                                        
 |
 
 ### 一流多表导入
 
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/import/import-way/stream-load-manual.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/import/import-way/stream-load-manual.md
index 07f868cf98c..2c377d00df0 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/import/import-way/stream-load-manual.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/import/import-way/stream-load-manual.md
@@ -358,7 +358,11 @@ Stream Load 是一种同步的导入方式,导入结果会通过创建导入
 | ---------------------- | 
------------------------------------------------------------ |
 | TxnId                  | 导入事务的 ID                                            
    |
 | Label                  | 导入作业的 label,通过 -H "label:<label_id>" 指定            |
-| Status                 | 导入的最终状态 Success:表示导入成功 Publish 
Timeout:该状态也表示导入已经完成,只是数据可能会延迟可见,无需重试 Label Already Exists:Label 重复,需要更换 
labelFail:导入失败 |
+| Status                 | 导入的最终状态                                             
 |
+|                        | - Success:表示导入成功                                    
 |
+|                        | - Publish Timeout:该状态也表示导入已经完成,但数据可能会延迟可见,无需重试 |
+|                        | - Label Already Exists:Label 重复,需要更换 label         |
+|                        | - Fail:导入失败                                         
   |
 | ExistingJobStatus      | 已存在的 Label 对应的导入作业的状态。这个字段只有在当 Status 为 "Label 
Already Exists" 时才会显示。用户可以通过这个状态,知晓已存在 Label 对应的导入作业的状态。"RUNNING" 
表示作业还在执行,"FINISHED" 表示作业成功。 |
 | Message                | 导入错误信息                                              
   |
 | NumberTotalRows        | 导入总处理的行数                                            
 |
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/ecosystem/doris-streamloader.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/ecosystem/doris-streamloader.md
index e083e3686b5..b001ccfe5c3 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/ecosystem/doris-streamloader.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/ecosystem/doris-streamloader.md
@@ -93,12 +93,6 @@ doris-streamloader --source_file={FILE_LIST} 
--url={FE_OR_BE_SERVER_URL}:{PORT}
   doris-streamloader --source_file="dir1,dir2,dir3" 
--url="http://localhost:8330"; --header="column_separator:|?columns:col1,col2" 
--db="testdb" --table="testtbl" 
    ```
 
-:::tip 
-当需要多个文件导入时,使用 Doris Streamloader 也只会产生一个版本号 
-:::
-
-
-
 **2.** `STREAMLOAD_HEADER` **支持 Stream Load 的所有参数,多个参数之间用  '?' 分隔。**
 
 用法举例:
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/import-way/routine-load-manual.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/import-way/routine-load-manual.md
index 670b77edcce..c1f3f868943 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/import-way/routine-load-manual.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/import-way/routine-load-manual.md
@@ -323,65 +323,20 @@ FROM KAFKA [data_source_properties]
 
 **01 FE 配置参数**
 
-**max_routine_load_task_concurrent_num**
-
-- 默认值:256
-
-- 动态配置:是
-
-- FE Master 独有配置:是
-
-- 参数描述:限制 Routine Load 的导入作业最大子并发数量。建议维持在默认值。如果设置过大,可能导致并发任务数过多,占用集群资源。
-
-**max_routine_load_task_num_per_be**
-
-- 默认值:1024
-
-- 动态配置:是
-
-- FE Master 独有配置:是
-
-- 参数描述:每个 BE 限制的最大并发 Routine Load 任务数。`max_routine_load_task_num_per_be` 应该小 
`routine_load_thread_pool_size` 于参数。
-
-**max_routine_load_job_num**
-
-- 默认值:100
-
-- 动态配置:是
-
-- FE Master 独有配置:是
-
-- 参数描述:限制最大 Routine Load 作业数,包括 NEED_SCHEDULED,RUNNING,PAUSE
-
-**max_tolerable_backend_down_num**
-
-- 默认值:0
-
-- 动态配置:是
-
-- FE Master 独有配置:是
-
-- 参数描述:只要有一个 BE 宕机,Routine Load 就无法自动恢复。在满足某些条件时,Doris 可以将 PAUSED 的任务重新调度,转换为 
RUNNING 状态。该参数为 0 表示只有所有 BE 节点都是 alive 状态踩允许重新调度。
-
-**period_of_auto_resume_min**
-
-- 默认值:5(分钟)
-
-- 动态配置:是
-
-- FE Master 独有配置:是
-
-- 参数描述:自动恢复 Routine Load 的周期
+| 参数名称                          | 默认值 | 动态配置 | FE Master 独有配置 | 参数描述           
                                                                          |
+|-----------------------------------|--------|----------|---------------------|----------------------------------------------------------------------------------------------|
+| max_routine_load_task_concurrent_num | 256    | 是       | 是                  
| 限制 Routine Load 的导入作业最大子并发数量。建议维持在默认值。如果设置过大,可能导致并发任务数过多,占用集群资源。 |
+| max_routine_load_task_num_per_be  | 1024   | 是       | 是                  | 
每个 BE 限制的最大并发 Routine Load 任务数。`max_routine_load_task_num_per_be` 应该小于 
`routine_load_thread_pool_size`。 |
+| max_routine_load_job_num           | 100    | 是       | 是                  | 
限制最大 Routine Load 作业数,包括 NEED_SCHEDULED,RUNNING,PAUSE。                        |
+| max_tolerable_backend_down_num     | 0      | 是       | 是                  | 
只要有一个 BE 宕机,Routine Load 就无法自动恢复。在满足某些条件时,Doris 可以将 PAUSED 的任务重新调度,转换为 RUNNING 
状态。该参数为 0 表示只有所有 BE 节点都是 alive 状态时允许重新调度。 |
+| period_of_auto_resume_min          | 5(分钟) | 是       | 是                  | 
自动恢复 Routine Load 的周期。                                                          
     |
 
 **02 BE 配置参数**
 
-**max_consumer_num_per_group**
 
-- 默认值:3
-
-- 动态配置:是
-
-- 描述:一个子任务重最多生成几个 consumer 消费数据。对于 Kafka 数据源,一个 consumer 可能消费一个或多个 Kafka 
Partition。假设一个任务需要消费 6 个 Kafka Partitio,则会生成 3 个 consumer,每个 consumer 消费 2 个 
partition。如果只有 2 个 partition,则只会生成 2 个 consumer,每个 consumer 消费 1 个 partition。
+| 参数名称                     | 默认值 | 动态配置 | 描述                                                                                                           |
+|------------------------------|--------|----------|----------------------------------------------------------------------------------------------------------------|
+| max_consumer_num_per_group   | 3      | 是       | 一个子任务中最多生成几个 consumer 消费数据。对于 Kafka 数据源,一个 consumer 可能消费一个或多个 Kafka Partition。假设一个任务需要消费 6 个 Kafka Partition,则会生成 3 个 consumer,每个 consumer 消费 2 个 partition;如果只有 2 个 partition,则只会生成 2 个 consumer,每个 consumer 消费 1 个 partition。 |
 
 **03 导入配置参数**
 
@@ -1648,169 +1603,95 @@ mysql> SELECT * FROM routine_test08;
 
 **导入 SSL 认证的 Kafka 数据**
 
-1. 导入数据样例
-
-    ```sql
-    { "id" : 1, "name" : "Benjamin", "age":18 }
-    { "id" : 2, "name" : "Emily", "age":20 }
-    { "id" : 3, "name" : "Alexander", "age":22 }
-    ```
-
-2. 建表结构
-
-    ```sql
-    CREATE TABLE demo.routine_test20 (
-        id      INT            NOT NULL  COMMENT "id",
-        name    VARCHAR(30)    NOT NULL  COMMENT "name",
-        age     INT                      COMMENT "age"
-    )
-    DUPLICATE KEY(`id`)
-    DISTRIBUTED BY HASH(`id`) BUCKETS 1;
-    ```
-
-3. 导入命令
+导入命令样例:
 
-    ```sql
-    CREATE ROUTINE LOAD demo.kafka_job20 ON routine_test20
-            PROPERTIES
-            (
-                "format" = "json"
-            )
-            FROM KAFKA
-            (
-                "kafka_broker_list" = "192.168.100.129:9092",
-                "kafka_topic" = "routineLoad21",
-                "property.security.protocol" = "ssl",
-                "property.ssl.ca.location" = "FILE:ca.pem",
-                "property.ssl.certificate.location" = "FILE:client.pem",
-                "property.ssl.key.location" = "FILE:client.key",
-                "property.ssl.key.password" = "ssl_passwd"
-            );  
-    ```
+```SQL
+CREATE ROUTINE LOAD demo.kafka_job20 ON routine_test20
+        PROPERTIES
+        (
+            "format" = "json"
+        )
+        FROM KAFKA
+        (
+            "kafka_broker_list" = "192.168.100.129:9092",
+            "kafka_topic" = "routineLoad21",
+            "property.security.protocol" = "ssl",
+            "property.ssl.ca.location" = "FILE:ca.pem",
+            "property.ssl.certificate.location" = "FILE:client.pem",
+            "property.ssl.key.location" = "FILE:client.key",
+            "property.ssl.key.password" = "ssl_passwd"
+        );  
+```
 
-4. Load result
+Parameter descriptions:
 
-    ```sql
-    mysql> select * from routine_test20;
-    +------+----------------+------+
-    | id   | name           | age  |
-    +------+----------------+------+
-    |    1 | Benjamin       |   18 |
-    |    2 | Emily          |   20 |
-    |    3 | Alexander      |   22 |
-    +------+----------------+------+
-    3 rows in set (0.01 sec)
-    ```
+| Parameter                          | Description                                                  |
+| ---------------------------------- | ------------------------------------------------------------ |
+| property.security.protocol         | The security protocol used; the example above uses SSL       |
+| property.ssl.ca.location           | Location of the CA (Certificate Authority) certificate       |
+| property.ssl.certificate.location  | Location of the client's public key (required only if client authentication is enabled on the Kafka server) |
+| property.ssl.key.location          | Location of the client's private key (required only if client authentication is enabled on the Kafka server) |
+| property.ssl.key.password          | Password of the client's private key (required only if client authentication is enabled on the Kafka server) |
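
After the job is created, its state and most recent error can be checked with the standard Routine Load management statements. A minimal sketch, assuming the job name `kafka_job20` from the example above:

```sql
-- Check the job state, Kafka offset progress, and the latest error message.
SHOW ROUTINE LOAD FOR demo.kafka_job20\G

-- Pause and resume the job if needed (for example after fixing a certificate path).
PAUSE ROUTINE LOAD FOR demo.kafka_job20;
RESUME ROUTINE LOAD FOR demo.kafka_job20;
```
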
 
 **Loading Kafka Data with Kerberos Authentication**
 
-1. Sample data to load
-
-    ```sql
-    { "id" : 1, "name" : "Benjamin", "age":18 }
-    { "id" : 2, "name" : "Emily", "age":20 }
-    { "id" : 3, "name" : "Alexander", "age":22 }
-    ```
-
-2. Create table
+Example load command:
 
-    ```sql
-    CREATE TABLE demo.routine_test21 (
-        id      INT            NOT NULL  COMMENT "id",
-        name    VARCHAR(30)    NOT NULL  COMMENT "name",
-        age     INT                      COMMENT "age"
-    )
-    DUPLICATE KEY(`id`)
-    DISTRIBUTED BY HASH(`id`) BUCKETS 1;
-    ```
-
-3. Load command
-
-    ```sql
-    CREATE ROUTINE LOAD demo.kafka_job21 ON routine_test21
-    PROPERTIES
-    (
-        "format" = "json"
-    )
-    FROM KAFKA
-    (
-        "kafka_broker_list" = "192.168.100.129:9092",
-        "kafka_topic" = "routineLoad21",
-        "property.security.protocol" = "SASL_PLAINTEXT",
-        "property.sasl.kerberos.service.name" = "kafka",
-        "property.sasl.kerberos.keytab"="/path/to/kafka_client.keytab",
-        "property.sasl.kerberos.principal" = 
"clients/stream.dt.lo...@example.com"
-    );  
-    ```
-
-4. Load result
-
-    ```sql
-    mysql> select * from routine_test21;
-    +------+----------------+------+
-    | id   | name           | age  |
-    +------+----------------+------+
-    |    1 | Benjamin       |   18 |
-    |    2 | Emily          |   20 |
-    |    3 | Alexander      |   22 |
-    +------+----------------+------+
-    3 rows in set (0.01 sec)
-    ```
-
-**Loading a Kafka Cluster with PLAIN Authentication**
-
-1. Sample data to load
+```SQL
+CREATE ROUTINE LOAD demo.kafka_job21 ON routine_test21
+        PROPERTIES
+        (
+            "format" = "json"
+        )
+        FROM KAFKA
+        (
+            "kafka_broker_list" = "192.168.100.129:9092",
+            "kafka_topic" = "routineLoad21",
+            "property.security.protocol" = "SASL_PLAINTEXT",
+            "property.sasl.kerberos.service.name" = "kafka",
+            
"property.sasl.kerberos.keytab"="/opt/third/kafka/kerberos/kafka_client.keytab",
+            "property.sasl.kerberos.principal" = 
"clients/stream.dt.lo...@example.com"
+        );  
+```
 
-    ```sql
-    { "id" : 1, "name" : "Benjamin", "age":18 }
-    { "id" : 2, "name" : "Emily", "age":20 }
-    { "id" : 3, "name" : "Alexander", "age":22 }
-    ```
+Parameter descriptions:
 
-2. Create table
+| Parameter                           | Description                                                  |
+| ----------------------------------- | ------------------------------------------------------------ |
+| property.security.protocol          | The security protocol used; the example above uses SASL_PLAINTEXT |
+| property.sasl.kerberos.service.name | Specifies the broker service name; the default is Kafka      |
+| property.sasl.kerberos.keytab       | Location of the keytab file                                   |
+| property.sasl.kerberos.principal    | Specifies the Kerberos principal                              |
 
-    ```sql
-    CREATE TABLE demo.routine_test22 (
-        id      INT            NOT NULL  COMMENT "id",
-        name    VARCHAR(30)    NOT NULL  COMMENT "name",
-        age     INT                      COMMENT "age"
-    )
-    DUPLICATE KEY(`id`)
-    DISTRIBUTED BY HASH(`id`) BUCKETS 1;
-    ```
+**Loading a Kafka Cluster with PLAIN Authentication**
 
-3. Load command
+Example load command:
 
-    ```sql
-    CREATE ROUTINE LOAD demo.kafka_job22 ON routine_test22
-            PROPERTIES
-            (
-                "format" = "json"
-            )
-            FROM KAFKA
-            (
-                "kafka_broker_list" = "192.168.100.129:9092",
-                "kafka_topic" = "routineLoad22",
-                "property.security.protocol"="SASL_PLAINTEXT",
-                "property.sasl.mechanism"="PLAIN",
-                "property.sasl.username"="admin",
-                "property.sasl.password"="admin"
-            );  
-    ```
+```SQL
+CREATE ROUTINE LOAD demo.kafka_job22 ON routine_test22
+        PROPERTIES
+        (
+            "format" = "json"
+        )
+        FROM KAFKA
+        (
+            "kafka_broker_list" = "192.168.100.129:9092",
+            "kafka_topic" = "routineLoad22",
+            "property.security.protocol"="SASL_PLAINTEXT",
+            "property.sasl.mechanism"="PLAIN",
+            "property.sasl.username"="admin",
+            "property.sasl.password"="admin"
+        );  
+```
 
-4. Load result
+Parameter descriptions:
 
-    ```sql
-    mysql> select * from routine_test22;
-    +------+----------------+------+
-    | id   | name           | age  |
-    +------+----------------+------+
-    |    1 | Benjamin       |   18 |
-    |    2 | Emily          |   20 |
-    |    3 | Alexander      |   22 |
-    +------+----------------+------+
-    3 rows in set (0.02 sec)
-    ```
+| Parameter                  | Description                                                  |
+| -------------------------- | ------------------------------------------------------------ |
+| property.security.protocol | The security protocol used; the example above uses SASL_PLAINTEXT |
+| property.sasl.mechanism    | Specifies the SASL authentication mechanism as PLAIN         |
+| property.sasl.username     | The SASL username                                             |
+| property.sasl.password     | The SASL password                                             |
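
When the SASL credentials change, the job does not necessarily have to be dropped and recreated. The following is a hedged sketch of rotating them in place with `ALTER ROUTINE LOAD`: the job must be paused first, `"new_password"` is a placeholder, and you should check the `ALTER ROUTINE LOAD` reference for which data source properties your Doris version allows modifying.

```sql
PAUSE ROUTINE LOAD FOR demo.kafka_job22;

-- Update the data source properties of the existing job; "new_password" is a placeholder value.
ALTER ROUTINE LOAD FOR demo.kafka_job22
FROM KAFKA
(
    "property.sasl.username" = "admin",
    "property.sasl.password" = "new_password"
);

RESUME ROUTINE LOAD FOR demo.kafka_job22;
```
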
 
 ### Single-task Loading to Multiple Tables
 
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/import-way/stream-load-manual.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/import-way/stream-load-manual.md
index 35385151824..70f6ae67ed0 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/import-way/stream-load-manual.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/import-way/stream-load-manual.md
@@ -358,7 +358,11 @@ Stream Load 是一种同步的导入方式,导入结果会通过创建导入
 | ---------------------- | ------------------------------------------------------------ |
 | TxnId                  | The ID of the load transaction                               |
 | Label                  | The label of the load job, specified via -H "label:<label_id>" |
-| Status                 | The final status of the load. Success: the load succeeded. Publish Timeout: the load has also completed, but the data may become visible with a delay; no retry is needed. Label Already Exists: the label is duplicated and a different label is required. Fail: the load failed. |
+| Status                 | The final status of the load                                 |
+|                        | - Success: the load succeeded                                |
+|                        | - Publish Timeout: the load has completed, but the data may become visible with a delay; no retry is needed |
+|                        | - Label Already Exists: the label is duplicated and a different label is required |
+|                        | - Fail: the load failed                                      |
 | ExistingJobStatus      | The status of the load job associated with the existing label. This field is shown only when Status is "Label Already Exists", so users can learn the state of the job that already uses this label: "RUNNING" means the job is still executing, "FINISHED" means it succeeded. |
 | Message                | The load error message                                       |
 | NumberTotalRows        | The total number of rows processed by the load               |
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/ecosystem/doris-streamloader.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/ecosystem/doris-streamloader.md
index e083e3686b5..619d4d51ecb 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/ecosystem/doris-streamloader.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/ecosystem/doris-streamloader.md
@@ -93,11 +93,6 @@ doris-streamloader --source_file={FILE_LIST} 
--url={FE_OR_BE_SERVER_URL}:{PORT}
   doris-streamloader --source_file="dir1,dir2,dir3" --url="http://localhost:8330" --header="column_separator:|?columns:col1,col2" --db="testdb" --table="testtbl" 
    ```
 
-:::tip 
-Even when multiple files need to be loaded, Doris Streamloader produces only one version.
-:::
-
-
 
 **2.** `STREAMLOAD_HEADER` **supports all Stream Load parameters; separate multiple parameters with '?'.**
 
diff --git 
a/versioned_docs/version-2.0/data-operate/import/stream-load-manual.md 
b/versioned_docs/version-2.0/data-operate/import/stream-load-manual.md
index aa4498c0f3d..1bb334ae4cb 100644
--- a/versioned_docs/version-2.0/data-operate/import/stream-load-manual.md
+++ b/versioned_docs/version-2.0/data-operate/import/stream-load-manual.md
@@ -386,48 +386,6 @@ The return result parameters are explained in the 
following table:
 
 Users can access the ErrorURL to review data that failed to import due to 
issues with data quality. By executing the command `curl "<ErrorURL>"`, users 
can directly retrieve information about the erroneous data.
 
-## Application of Table Value Function in Stream Load - http_stream Mode
-
-Leveraging the recently introduced functionality of Table Value Function (TVF) 
in Doris, Stream Load now allows the expression of import parameters through 
SQL statements. Specifically, a TVF named `http_stream` has been dedicated for 
Stream Load operations.
-
-:::tip
-
-When performing Stream Load using the TVF `http_stream`, the Rest API URL 
differs from the standard URL used for regular Stream Load imports.
-
-- Standard Stream Load URL:
-  `http://fe_host:http_port/api/{db}/{table}/_stream_load`
-- URL for Stream Load using TVF `http_stream`:
-  `http://fe_host:http_port/api/_http_stream`
-
-:::
-
-Using curl for Stream Load in http_stream Mode:
-
-```shell
-curl --location-trusted -u user:passwd [-H "sql: ${load_sql}"...] -T data.file 
-XPUT http://fe_host:http_port/api/_http_stream
-```
-
-Adding a SQL parameter in the header to replace the previous parameters such 
as `column_separator`, `line_delimiter`, `where`, `columns`, etc., makes it 
very convenient to use.
-
-Example of load SQL:
-
-```shell
-insert into db.table (col, ...) select stream_col, ... from 
http_stream("property1"="value1");
-```
-
-http_stream parameter:
-
-- "column_separator" = ","
-
-- "format" = "CSV"
-- ...
-
-For example:
-
-```Plain
-curl  --location-trusted -u root: -T test.csv  -H "sql:insert into 
demo.example_tbl_1(user_id, age, cost) select c1, c4, c7 * 2 from 
http_stream(\"format\" = \"CSV\", \"column_separator\" = \",\" ) where age >= 
30"  http://127.0.0.1:28030/api/_http_stream
-```
-
 ## Load example
 
 ### Setting load timeout and maximum size
@@ -1279,32 +1237,6 @@ Stream load uses HTTP protocol, so all parameters 
related to import tasks are se
 
    eg: `curl  --location-trusted -u root: -H "partial_columns:true" -H 
"column_separator:," -H "columns:id,balance,last_access_time" -T /tmp/test.csv 
http://127.0.0.1:48037/api/db1/user_profile/_stream_load`
 
-
-### Use stream load with SQL
-
-You can add a `sql` parameter to the `Header` to replace the 
`column_separator`, `line_delimiter`, `where`, `columns` in the previous 
parameter, which is convenient to use.
-
-```
-curl --location-trusted -u user:passwd [-H "sql: ${load_sql}"...] -T data.file 
-XPUT http://fe_host:http_port/api/_http_stream
-
-
-# -- load_sql
-# insert into db.table (col, ...) select stream_col, ... from 
http_stream("property1"="value1");
-
-# http_stream
-# (
-#     "column_separator" = ",",
-#     "format" = "CSV",
-#     ...
-# )
-```
-
-Examples:
-
-```
-curl  --location-trusted -u root: -T test.csv  -H "sql:insert into 
demo.example_tbl_1(user_id, age, cost) select c1, c4, c7 * 2 from 
http_stream("format" = "CSV", "column_separator" = "," ) where age >= 30"  
http://127.0.0.1:28030/api/_http_stream
-```
-
 ### Return results
 
 Since Stream load is a synchronous import method, the result of the import is 
directly returned to the user by creating the return value of the import.
diff --git 
a/versioned_docs/version-2.1/data-operate/import/import-way/routine-load-manual.md
 
b/versioned_docs/version-2.1/data-operate/import/import-way/routine-load-manual.md
index 37aeee75f55..2d7d67d9772 100644
--- 
a/versioned_docs/version-2.1/data-operate/import/import-way/routine-load-manual.md
+++ 
b/versioned_docs/version-2.1/data-operate/import/import-way/routine-load-manual.md
@@ -311,65 +311,19 @@ The modules for creating a loading job are explained as 
follows:
 
 **01 FE Configuration Parameters**
 
-**max_routine_load_task_concurrent_num**
-
-- Default Value: 256
-
-- Dynamic Configuration: Yes
-
-- FE Master Exclusive: Yes
-
-- Parameter Description: Limits the maximum number of concurrent subtasks for 
Routine Load jobs. It is recommended to keep it at the default value. Setting 
it too high may result in excessive concurrent tasks and resource consumption.
-
-**max_routine_load_task_num_per_be**
-
-- Default Value: 1024
-
-- Dynamic Configuration: Yes
-
-- FE Master Exclusive: Yes
-
-- Parameter Description: Limits the maximum number of concurrent Routine Load 
tasks per backend (BE). `max_routine_load_task_num_per_be` should be smaller 
than the `routine_load_thread_pool_size` parameter.
-
-**max_routine_load_job_num**
-
-- Default Value: 100
-
-- Dynamic Configuration: Yes
-
-- FE Master Exclusive: Yes
-
-- Parameter Description: Limits the maximum number of Routine Load jobs, 
including those in NEED_SCHEDULED, RUNNING, and PAUSE states.
-
-**max_tolerable_backend_down_num**
-
-- Default Value: 0
-
-- Dynamic Configuration: Yes
-
-- FE Master Exclusive: Yes
-
-- Parameter Description: If any BE goes down, Routine Load cannot 
automatically recover. Under certain conditions, Doris can reschedule PAUSED 
tasks and transition them to the RUNNING state. Setting this parameter to 0 
means that re-scheduling is only allowed when all BE nodes are in the alive 
state.
-
-**period_of_auto_resume_min**
-
-- Default Value: 5 (minutes)
-
-- Dynamic Configuration: Yes
-
-- FE Master Exclusive: Yes
-
-- Parameter Description: The period for automatically resuming Routine Load.
+| Parameter Name                        | Default Value | Dynamic Configuration | FE Master Exclusive Configuration | Description |
+|---------------------------------------|---------------|-----------------------|-----------------------------------|-------------|
+| max_routine_load_task_concurrent_num  | 256           | Yes                   | Yes                               | Limits the maximum number of concurrent subtasks for Routine Load jobs. It is recommended to maintain the default value. If set too high, it may lead to excessive concurrent tasks, consuming cluster resources. |
+| max_routine_load_task_num_per_be      | 1024          | Yes                   | Yes                               | The maximum number of concurrent Routine Load tasks allowed per BE. `max_routine_load_task_num_per_be` should be less than `routine_load_thread_pool_size`. |
+| max_routine_load_job_num              | 100           | Yes                   | Yes                               | Limits the maximum number of Routine Load jobs, including those in NEED_SCHEDULED, RUNNING, and PAUSE states. |
+| max_tolerable_backend_down_num        | 0             | Yes                   | Yes                               | If any BE is down, Routine Load cannot automatically recover. Under certain conditions, Doris can reschedule PAUSED tasks to the RUNNING state. A value of 0 means that rescheduling is only allowed when all BE nodes are alive. |
+| period_of_auto_resume_min             | 5 (minutes)   | Yes                   | Yes                               | The period for automatically resuming Routine Load. |
 
 **02 BE Configuration Parameters**
 
-**max_consumer_num_per_group**
-
-- Default Value: 3
-
-- Dynamic Configuration: Yes
-
-- Description: Specifies the maximum number of consumers generated per 
subtask. For Kafka data sources, a consumer can consume one or multiple Kafka 
partitions. For example, if a task needs to consume 6 Kafka partitions, it will 
generate 3 consumers, with each consumer consuming 2 partitions. If there are 
only 2 partitions, it will generate 2 consumers, with each consumer consuming 1 
partition.
+| Parameter Name              | Default Value | Dynamic Configuration | Description |
+|-----------------------------|---------------|-----------------------|-------------|
+| max_consumer_num_per_group  | 3             | Yes                   | The maximum number of consumers that can be generated for a subtask to consume data. For Kafka data sources, a consumer may consume one or more Kafka partitions. If a task needs to consume 6 Kafka partitions, it will generate 3 consumers, each consuming 2 partitions. If there are only 2 partitions, it will generate only 2 consumers, each consuming 1 partition. |
 
 ### Load Configuration Parameters
 
@@ -1637,171 +1591,97 @@ The columns in the result set provide the following 
information:
 
 ### Kafka Security Authentication
 
-**Loading Kafka data with SSL authentication**
+**Loading Kafka Data with SSL Authentication**
 
-1. Loading sample data:
+Example load command:
 
-    ```sql
-    { "id" : 1, "name" : "Benjamin", "age":18 }
-    { "id" : 2, "name" : "Emily", "age":20 }
-    { "id" : 3, "name" : "Alexander", "age":22 }
-    ```
-
-2. Create table:
-
-    ```sql
-    CREATE TABLE demo.routine_test20 (
-        id      INT            NOT NULL  COMMENT "id",
-        name    VARCHAR(30)    NOT NULL  COMMENT "name",
-        age     INT                      COMMENT "age"
-    )
-    DUPLICATE KEY(`id`)
-    DISTRIBUTED BY HASH(`id`) BUCKETS 1;
-    ```
-
-3. Load command:
-
-    ```sql
-    CREATE ROUTINE LOAD demo.kafka_job20 ON routine_test20
-            PROPERTIES
-            (
-                "format" = "json"
-            )
-            FROM KAFKA
-            (
-                "kafka_broker_list" = "192.168.100.129:9092",
-                "kafka_topic" = "routineLoad21",
-                "property.security.protocol" = "ssl",
-                "property.ssl.ca.location" = "FILE:ca.pem",
-                "property.ssl.certificate.location" = "FILE:client.pem",
-                "property.ssl.key.location" = "FILE:client.key",
-                "property.ssl.key.password" = "ssl_passwd"
-            );  
-    ```
-
-4. Load result:
-
-    ```sql
-    mysql> select * from routine_test20;
-    +------+----------------+------+
-    | id   | name           | age  |
-    +------+----------------+------+
-    |    1 | Benjamin       |   18 |
-    |    2 | Emily          |   20 |
-    |    3 | Alexander      |   22 |
-    +------+----------------+------+
-    3 rows in set (0.01 sec)
-    ```
-
-**Loading Kafka data with Kerberos authentication**
-
-1. Loading sample data:
-
-    ```sql
-    { "id" : 1, "name" : "Benjamin", "age":18 }
-    { "id" : 2, "name" : "Emily", "age":20 }
-    { "id" : 3, "name" : "Alexander", "age":22 }
-    ```
+```SQL
+CREATE ROUTINE LOAD demo.kafka_job20 ON routine_test20
+        PROPERTIES
+        (
+            "format" = "json"
+        )
+        FROM KAFKA
+        (
+            "kafka_broker_list" = "192.168.100.129:9092",
+            "kafka_topic" = "routineLoad21",
+            "property.security.protocol" = "ssl",
+            "property.ssl.ca.location" = "FILE:ca.pem",
+            "property.ssl.certificate.location" = "FILE:client.pem",
+            "property.ssl.key.location" = "FILE:client.key",
+            "property.ssl.key.password" = "ssl_passwd"
+        );  
+```
 
-2. Create table:
+Parameter descriptions:
 
-    ```sql
-    CREATE TABLE demo.routine_test21 (
-        id      INT            NOT NULL  COMMENT "id",
-        name    VARCHAR(30)    NOT NULL  COMMENT "name",
-        age     INT                      COMMENT "age"
-    )
-    DUPLICATE KEY(`id`)
-    DISTRIBUTED BY HASH(`id`) BUCKETS 1;
-    ```
+| Parameter                          | Description                                                  |
+|------------------------------------|--------------------------------------------------------------|
+| property.security.protocol         | The security protocol used, in this example it is SSL        |
+| property.ssl.ca.location           | The location of the CA (Certificate Authority) certificate   |
+| property.ssl.certificate.location  | The location of the Client's public key (required if client authentication is enabled on the Kafka server) |
+| property.ssl.key.location          | The location of the Client's private key (required if client authentication is enabled on the Kafka server) |
+| property.ssl.key.password          | The password for the Client's private key (required if client authentication is enabled on the Kafka server) |
 
-3. Load command:
+**Loading Kafka Data with Kerberos Authentication**
 
-    ```sql
-    CREATE ROUTINE LOAD demo.kafka_job21 ON routine_test21
-    PROPERTIES
-    (
-        "format" = "json"
-    )
-    FROM KAFKA
-    (
-        "kafka_broker_list" = "192.168.100.129:9092",
-        "kafka_topic" = "routineLoad21",
-        "property.security.protocol" = "SASL_PLAINTEXT",
-        "property.sasl.kerberos.service.name" = "kafka",
-        "property.sasl.kerberos.keytab"="/path/to/kafka_client.keytab",
-        "property.sasl.kerberos.principal" = 
"clients/stream.dt.lo...@example.com"
-    );  
-    ```
-
-4. Load result:
-
-    ```sql
-    mysql> select * from routine_test21;
-    +------+----------------+------+
-    | id   | name           | age  |
-    +------+----------------+------+
-    |    1 | Benjamin       |   18 |
-    |    2 | Emily          |   20 |
-    |    3 | Alexander      |   22 |
-    +------+----------------+------+
-    3 rows in set (0.01 sec)
-    ```
+Example load command:
 
-**Loading Kafka data with PLAIN authentication in Kafka cluster**
-
-1. Loading sample data:
+```SQL
+CREATE ROUTINE LOAD demo.kafka_job21 ON routine_test21
+        PROPERTIES
+        (
+            "format" = "json"
+        )
+        FROM KAFKA
+        (
+            "kafka_broker_list" = "192.168.100.129:9092",
+            "kafka_topic" = "routineLoad21",
+            "property.security.protocol" = "SASL_PLAINTEXT",
+            "property.sasl.kerberos.service.name" = "kafka",
+            
"property.sasl.kerberos.keytab"="/opt/third/kafka/kerberos/kafka_client.keytab",
+            "property.sasl.kerberos.principal" = 
"clients/stream.dt.lo...@example.com"
+        );  
+```
 
-    ```sql
-    { "id" : 1, "name" : "Benjamin", "age":18 }
-    { "id" : 2, "name" : "Emily", "age":20 }
-    { "id" : 3, "name" : "Alexander", "age":22 }
-    ```
+Parameter descriptions:
 
-2. Create table:
+| Parameter                           | Description                                                 |
+|-------------------------------------|-------------------------------------------------------------|
+| property.security.protocol          | The security protocol used, in this example it is SASL_PLAINTEXT |
+| property.sasl.kerberos.service.name | Specifies the broker service name, default is Kafka         |
+| property.sasl.kerberos.keytab       | The location of the keytab file                              |
+| property.sasl.kerberos.principal    | Specifies the Kerberos principal                             |
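
To verify how the job's subtasks are scheduled across BEs and how consumption is progressing, the task-level and job-level views can be queried. A minimal sketch, assuming the job name `kafka_job21` from the example above:

```sql
-- List the running subtasks of the job, including the BE each task is assigned to.
SHOW ROUTINE LOAD TASK WHERE JobName = "kafka_job21";

-- The job-level view also reports Kafka offset progress and the latest error message.
SHOW ROUTINE LOAD FOR demo.kafka_job21\G
```
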
 
-    ```sql
-    CREATE TABLE demo.routine_test22 (
-        id      INT            NOT NULL  COMMENT "id",
-        name    VARCHAR(30)    NOT NULL  COMMENT "name",
-        age     INT                      COMMENT "age"
-    )
-    DUPLICATE KEY(`id`)
-    DISTRIBUTED BY HASH(`id`) BUCKETS 1;
-    ```
+**Loading Kafka Cluster with PLAIN Authentication**
 
-3. Load command:
+Example load command:
 
-    ```sql
-    CREATE ROUTINE LOAD demo.kafka_job22 ON routine_test22
-            PROPERTIES
-            (
-                "format" = "json"
-            )
-            FROM KAFKA
-            (
-                "kafka_broker_list" = "192.168.100.129:9092",
-                "kafka_topic" = "routineLoad22",
-                "property.security.protocol"="SASL_PLAINTEXT",
-                "property.sasl.mechanism"="PLAIN",
-                "property.sasl.username"="admin",
-                "property.sasl.password"="admin"
-            );  
-    ```
+```SQL
+CREATE ROUTINE LOAD demo.kafka_job22 ON routine_test22
+        PROPERTIES
+        (
+            "format" = "json"
+        )
+        FROM KAFKA
+        (
+            "kafka_broker_list" = "192.168.100.129:9092",
+            "kafka_topic" = "routineLoad22",
+            "property.security.protocol"="SASL_PLAINTEXT",
+            "property.sasl.mechanism"="PLAIN",
+            "property.sasl.username"="admin",
+            "property.sasl.password"="admin"
+        );  
+```
 
-4. Load result
+Parameter descriptions:
 
-    ```sql
-    mysql> select * from routine_test22;
-    +------+----------------+------+
-    | id   | name           | age  |
-    +------+----------------+------+
-    |    1 | Benjamin       |   18 |
-    |    2 | Emily          |   20 |
-    |    3 | Alexander      |   22 |
-    +------+----------------+------+
-    3 rows in set (0.02 sec)
-    ```
+| Parameter                          | Description                                                 |
+|------------------------------------|-------------------------------------------------------------|
+| property.security.protocol         | The security protocol used, in this example it is SASL_PLAINTEXT |
+| property.sasl.mechanism            | Specifies the SASL authentication mechanism as PLAIN        |
+| property.sasl.username             | The username for SASL                                       |
+| property.sasl.password             | The password for SASL                                       |
 
 ### Single-task Loading to Multiple Tables  
 
diff --git 
a/versioned_docs/version-3.0/data-operate/import/import-way/routine-load-manual.md
 
b/versioned_docs/version-3.0/data-operate/import/import-way/routine-load-manual.md
index 37aeee75f55..2d7d67d9772 100644
--- 
a/versioned_docs/version-3.0/data-operate/import/import-way/routine-load-manual.md
+++ 
b/versioned_docs/version-3.0/data-operate/import/import-way/routine-load-manual.md
@@ -311,65 +311,19 @@ The modules for creating a loading job are explained as 
follows:
 
 **01 FE Configuration Parameters**
 
-**max_routine_load_task_concurrent_num**
-
-- Default Value: 256
-
-- Dynamic Configuration: Yes
-
-- FE Master Exclusive: Yes
-
-- Parameter Description: Limits the maximum number of concurrent subtasks for 
Routine Load jobs. It is recommended to keep it at the default value. Setting 
it too high may result in excessive concurrent tasks and resource consumption.
-
-**max_routine_load_task_num_per_be**
-
-- Default Value: 1024
-
-- Dynamic Configuration: Yes
-
-- FE Master Exclusive: Yes
-
-- Parameter Description: Limits the maximum number of concurrent Routine Load 
tasks per backend (BE). `max_routine_load_task_num_per_be` should be smaller 
than the `routine_load_thread_pool_size` parameter.
-
-**max_routine_load_job_num**
-
-- Default Value: 100
-
-- Dynamic Configuration: Yes
-
-- FE Master Exclusive: Yes
-
-- Parameter Description: Limits the maximum number of Routine Load jobs, 
including those in NEED_SCHEDULED, RUNNING, and PAUSE states.
-
-**max_tolerable_backend_down_num**
-
-- Default Value: 0
-
-- Dynamic Configuration: Yes
-
-- FE Master Exclusive: Yes
-
-- Parameter Description: If any BE goes down, Routine Load cannot 
automatically recover. Under certain conditions, Doris can reschedule PAUSED 
tasks and transition them to the RUNNING state. Setting this parameter to 0 
means that re-scheduling is only allowed when all BE nodes are in the alive 
state.
-
-**period_of_auto_resume_min**
-
-- Default Value: 5 (minutes)
-
-- Dynamic Configuration: Yes
-
-- FE Master Exclusive: Yes
-
-- Parameter Description: The period for automatically resuming Routine Load.
+| Parameter Name                        | Default Value | Dynamic Configuration | FE Master Exclusive Configuration | Description |
+|---------------------------------------|---------------|-----------------------|-----------------------------------|-------------|
+| max_routine_load_task_concurrent_num  | 256           | Yes                   | Yes                               | Limits the maximum number of concurrent subtasks for Routine Load jobs. It is recommended to maintain the default value. If set too high, it may lead to excessive concurrent tasks, consuming cluster resources. |
+| max_routine_load_task_num_per_be      | 1024          | Yes                   | Yes                               | The maximum number of concurrent Routine Load tasks allowed per BE. `max_routine_load_task_num_per_be` should be less than `routine_load_thread_pool_size`. |
+| max_routine_load_job_num              | 100           | Yes                   | Yes                               | Limits the maximum number of Routine Load jobs, including those in NEED_SCHEDULED, RUNNING, and PAUSE states. |
+| max_tolerable_backend_down_num        | 0             | Yes                   | Yes                               | If any BE is down, Routine Load cannot automatically recover. Under certain conditions, Doris can reschedule PAUSED tasks to the RUNNING state. A value of 0 means that rescheduling is only allowed when all BE nodes are alive. |
+| period_of_auto_resume_min             | 5 (minutes)   | Yes                   | Yes                               | The period for automatically resuming Routine Load. |
 
 **02 BE Configuration Parameters**
 
-**max_consumer_num_per_group**
-
-- Default Value: 3
-
-- Dynamic Configuration: Yes
-
-- Description: Specifies the maximum number of consumers generated per 
subtask. For Kafka data sources, a consumer can consume one or multiple Kafka 
partitions. For example, if a task needs to consume 6 Kafka partitions, it will 
generate 3 consumers, with each consumer consuming 2 partitions. If there are 
only 2 partitions, it will generate 2 consumers, with each consumer consuming 1 
partition.
+| Parameter Name              | Default Value | Dynamic Configuration | Description |
+|-----------------------------|---------------|-----------------------|-------------|
+| max_consumer_num_per_group  | 3             | Yes                   | The maximum number of consumers that can be generated for a subtask to consume data. For Kafka data sources, a consumer may consume one or more Kafka partitions. If a task needs to consume 6 Kafka partitions, it will generate 3 consumers, each consuming 2 partitions. If there are only 2 partitions, it will generate only 2 consumers, each consuming 1 partition. |
 
 ### Load Configuration Parameters
 
@@ -1637,171 +1591,97 @@ The columns in the result set provide the following 
information:
 
 ### Kafka Security Authentication
 
-**Loading Kafka data with SSL authentication**
+**Loading Kafka Data with SSL Authentication**
 
-1. Loading sample data:
+Example load command:
 
-    ```sql
-    { "id" : 1, "name" : "Benjamin", "age":18 }
-    { "id" : 2, "name" : "Emily", "age":20 }
-    { "id" : 3, "name" : "Alexander", "age":22 }
-    ```
-
-2. Create table:
-
-    ```sql
-    CREATE TABLE demo.routine_test20 (
-        id      INT            NOT NULL  COMMENT "id",
-        name    VARCHAR(30)    NOT NULL  COMMENT "name",
-        age     INT                      COMMENT "age"
-    )
-    DUPLICATE KEY(`id`)
-    DISTRIBUTED BY HASH(`id`) BUCKETS 1;
-    ```
-
-3. Load command:
-
-    ```sql
-    CREATE ROUTINE LOAD demo.kafka_job20 ON routine_test20
-            PROPERTIES
-            (
-                "format" = "json"
-            )
-            FROM KAFKA
-            (
-                "kafka_broker_list" = "192.168.100.129:9092",
-                "kafka_topic" = "routineLoad21",
-                "property.security.protocol" = "ssl",
-                "property.ssl.ca.location" = "FILE:ca.pem",
-                "property.ssl.certificate.location" = "FILE:client.pem",
-                "property.ssl.key.location" = "FILE:client.key",
-                "property.ssl.key.password" = "ssl_passwd"
-            );  
-    ```
-
-4. Load result:
-
-    ```sql
-    mysql> select * from routine_test20;
-    +------+----------------+------+
-    | id   | name           | age  |
-    +------+----------------+------+
-    |    1 | Benjamin       |   18 |
-    |    2 | Emily          |   20 |
-    |    3 | Alexander      |   22 |
-    +------+----------------+------+
-    3 rows in set (0.01 sec)
-    ```
-
-**Loading Kafka data with Kerberos authentication**
-
-1. Loading sample data:
-
-    ```sql
-    { "id" : 1, "name" : "Benjamin", "age":18 }
-    { "id" : 2, "name" : "Emily", "age":20 }
-    { "id" : 3, "name" : "Alexander", "age":22 }
-    ```
+```SQL
+CREATE ROUTINE LOAD demo.kafka_job20 ON routine_test20
+        PROPERTIES
+        (
+            "format" = "json"
+        )
+        FROM KAFKA
+        (
+            "kafka_broker_list" = "192.168.100.129:9092",
+            "kafka_topic" = "routineLoad21",
+            "property.security.protocol" = "ssl",
+            "property.ssl.ca.location" = "FILE:ca.pem",
+            "property.ssl.certificate.location" = "FILE:client.pem",
+            "property.ssl.key.location" = "FILE:client.key",
+            "property.ssl.key.password" = "ssl_passwd"
+        );  
+```
 
-2. Create table:
+Parameter descriptions:
 
-    ```sql
-    CREATE TABLE demo.routine_test21 (
-        id      INT            NOT NULL  COMMENT "id",
-        name    VARCHAR(30)    NOT NULL  COMMENT "name",
-        age     INT                      COMMENT "age"
-    )
-    DUPLICATE KEY(`id`)
-    DISTRIBUTED BY HASH(`id`) BUCKETS 1;
-    ```
+| Parameter                          | Description                                                  |
+|------------------------------------|--------------------------------------------------------------|
+| property.security.protocol         | The security protocol used, in this example it is SSL        |
+| property.ssl.ca.location           | The location of the CA (Certificate Authority) certificate   |
+| property.ssl.certificate.location  | The location of the Client's public key (required if client authentication is enabled on the Kafka server) |
+| property.ssl.key.location          | The location of the Client's private key (required if client authentication is enabled on the Kafka server) |
+| property.ssl.key.password          | The password for the Client's private key (required if client authentication is enabled on the Kafka server) |
 
-3. Load command:
+**Loading Kafka Data with Kerberos Authentication**
 
-    ```sql
-    CREATE ROUTINE LOAD demo.kafka_job21 ON routine_test21
-    PROPERTIES
-    (
-        "format" = "json"
-    )
-    FROM KAFKA
-    (
-        "kafka_broker_list" = "192.168.100.129:9092",
-        "kafka_topic" = "routineLoad21",
-        "property.security.protocol" = "SASL_PLAINTEXT",
-        "property.sasl.kerberos.service.name" = "kafka",
-        "property.sasl.kerberos.keytab"="/path/to/kafka_client.keytab",
-        "property.sasl.kerberos.principal" = 
"clients/stream.dt.lo...@example.com"
-    );  
-    ```
-
-4. Load result:
-
-    ```sql
-    mysql> select * from routine_test21;
-    +------+----------------+------+
-    | id   | name           | age  |
-    +------+----------------+------+
-    |    1 | Benjamin       |   18 |
-    |    2 | Emily          |   20 |
-    |    3 | Alexander      |   22 |
-    +------+----------------+------+
-    3 rows in set (0.01 sec)
-    ```
+Example load command:
 
-**Loading Kafka data with PLAIN authentication in Kafka cluster**
-
-1. Loading sample data:
+```SQL
+CREATE ROUTINE LOAD demo.kafka_job21 ON routine_test21
+        PROPERTIES
+        (
+            "format" = "json"
+        )
+        FROM KAFKA
+        (
+            "kafka_broker_list" = "192.168.100.129:9092",
+            "kafka_topic" = "routineLoad21",
+            "property.security.protocol" = "SASL_PLAINTEXT",
+            "property.sasl.kerberos.service.name" = "kafka",
+            
"property.sasl.kerberos.keytab"="/opt/third/kafka/kerberos/kafka_client.keytab",
+            "property.sasl.kerberos.principal" = 
"clients/stream.dt.lo...@example.com"
+        );  
+```
 
-    ```sql
-    { "id" : 1, "name" : "Benjamin", "age":18 }
-    { "id" : 2, "name" : "Emily", "age":20 }
-    { "id" : 3, "name" : "Alexander", "age":22 }
-    ```
+Parameter descriptions:
 
-2. Create table:
+| Parameter                           | Description                                                 |
+|-------------------------------------|-------------------------------------------------------------|
+| property.security.protocol          | The security protocol used, in this example it is SASL_PLAINTEXT |
+| property.sasl.kerberos.service.name | Specifies the broker service name, default is Kafka         |
+| property.sasl.kerberos.keytab       | The location of the keytab file                              |
+| property.sasl.kerberos.principal    | Specifies the Kerberos principal                             |
 
-    ```sql
-    CREATE TABLE demo.routine_test22 (
-        id      INT            NOT NULL  COMMENT "id",
-        name    VARCHAR(30)    NOT NULL  COMMENT "name",
-        age     INT                      COMMENT "age"
-    )
-    DUPLICATE KEY(`id`)
-    DISTRIBUTED BY HASH(`id`) BUCKETS 1;
-    ```
+**Loading Kafka Cluster with PLAIN Authentication**
 
-3. Load command:
+Example load command:
 
-    ```sql
-    CREATE ROUTINE LOAD demo.kafka_job22 ON routine_test22
-            PROPERTIES
-            (
-                "format" = "json"
-            )
-            FROM KAFKA
-            (
-                "kafka_broker_list" = "192.168.100.129:9092",
-                "kafka_topic" = "routineLoad22",
-                "property.security.protocol"="SASL_PLAINTEXT",
-                "property.sasl.mechanism"="PLAIN",
-                "property.sasl.username"="admin",
-                "property.sasl.password"="admin"
-            );  
-    ```
+```SQL
+CREATE ROUTINE LOAD demo.kafka_job22 ON routine_test22
+        PROPERTIES
+        (
+            "format" = "json"
+        )
+        FROM KAFKA
+        (
+            "kafka_broker_list" = "192.168.100.129:9092",
+            "kafka_topic" = "routineLoad22",
+            "property.security.protocol"="SASL_PLAINTEXT",
+            "property.sasl.mechanism"="PLAIN",
+            "property.sasl.username"="admin",
+            "property.sasl.password"="admin"
+        );  
+```
 
-4. Load result
+Parameter descriptions:
 
-    ```sql
-    mysql> select * from routine_test22;
-    +------+----------------+------+
-    | id   | name           | age  |
-    +------+----------------+------+
-    |    1 | Benjamin       |   18 |
-    |    2 | Emily          |   20 |
-    |    3 | Alexander      |   22 |
-    +------+----------------+------+
-    3 rows in set (0.02 sec)
-    ```
+| Parameter                          | Description                                                 |
+|------------------------------------|-------------------------------------------------------------|
+| property.security.protocol         | The security protocol used, in this example it is SASL_PLAINTEXT |
+| property.sasl.mechanism            | Specifies the SASL authentication mechanism as PLAIN        |
+| property.sasl.username             | The username for SASL                                       |
+| property.sasl.password             | The password for SASL                                       |
 
 ### Single-task Loading to Multiple Tables  
 


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org
For additional commands, e-mail: commits-h...@doris.apache.org
