This is an automated email from the ASF dual-hosted git repository.

liaoxin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git


The following commit(s) were added to refs/heads/master by this push:
     new 54370d6d6bf [docs](load) split insert into values and insert into 
select (#1379)
54370d6d6bf is described below

commit 54370d6d6bfdb1804401dad0bf17b9e70a7cae24
Author: Kaijie Chen <c...@apache.org>
AuthorDate: Wed Dec 18 14:17:46 2024 +0800

    [docs](load) split insert into values and insert into select (#1379)
---
 .../import/import-way/insert-into-manual.md        |  66 ++---
 .../import/import-way/insert-into-values-manual.md | 305 ++++++++++++++++++++
 .../import/import-way/insert-into-manual.md        |  82 ++----
 .../import/import-way/insert-into-values-manual.md | 309 +++++++++++++++++++++
 .../import/import-way/insert-into-manual.md        |  82 ++----
 .../import/import-way/insert-into-values-manual.md | 309 +++++++++++++++++++++
 .../import/import-way/insert-into-manual.md        |  82 ++----
 .../import/import-way/insert-into-values-manual.md | 309 +++++++++++++++++++++
 sidebars.json                                      |   1 +
 .../import/import-way/insert-into-manual.md        |  66 ++---
 .../import/import-way/insert-into-values-manual.md | 305 ++++++++++++++++++++
 .../import/import-way/insert-into-manual.md        |  66 ++---
 .../import/import-way/insert-into-values-manual.md | 305 ++++++++++++++++++++
 versioned_sidebars/version-2.1-sidebars.json       |   1 +
 versioned_sidebars/version-3.0-sidebars.json       |   1 +
 15 files changed, 1950 insertions(+), 339 deletions(-)

diff --git a/docs/data-operate/import/import-way/insert-into-manual.md 
b/docs/data-operate/import/import-way/insert-into-manual.md
index af01a9224c5..8a387e76f40 100644
--- a/docs/data-operate/import/import-way/insert-into-manual.md
+++ b/docs/data-operate/import/import-way/insert-into-manual.md
@@ -26,14 +26,10 @@ under the License.
 
 The INSERT INTO statement supports importing the results of a Doris query into 
another table. INSERT INTO is a synchronous import method, where the import 
result is returned after the import is executed. Whether the import is 
successful can be determined based on the returned result. INSERT INTO ensures 
the atomicity of the import task, meaning that either all the data is imported 
successfully or none of it is imported.
 
-There are primarily two main forms of the INSERT INTO command:
-
 - INSERT INTO tbl SELECT...
-- INSERT INTO tbl (col1, col2, ...) VALUES (1, 2, ...), (1,3, ...)
 
 ## Applicable scenarios
 
-1. If a user wants to import only a few test data records to verify the 
functionality of the Doris system, the INSERT INTO VALUES syntax is applicable. 
It is similar to the MySQL syntax. However, it is not recommended to use INSERT 
INTO VALUES in a production environment.
 2. If a user wants to perform ETL on existing data in a Doris table and then 
import it into a new Doris table, the INSERT INTO SELECT syntax is applicable.
 3. In conjunction with the Multi-Catalog external table mechanism, tables from 
MySQL or Hive systems can be mapped via Multi-Catalog. Then, data from external 
tables can be imported into Doris tables using the INSERT INTO SELECT syntax.
 4. Utilizing the Table Value Functions (TVFs), users can directly query data 
stored in object storage or files on HDFS as tables, with automatic column type 
inference. Then, data from external tables can be imported into Doris tables 
using the INSERT INTO SELECT syntax.
@@ -54,8 +50,6 @@ INSERT INTO requires INSERT permissions on the target table. 
You can grant permi
 
 ### Create an INSERT INTO job
 
-**INSERT INTO VALUES**
-
 1. Create a source table
 
 ```SQL
@@ -68,7 +62,7 @@ DUPLICATE KEY(user_id)
 DISTRIBUTED BY HASH(user_id) BUCKETS 10;
 ```
 
-2. Import data into the source table using `INSERT INTO VALUES` (not 
recommended for production environments).
+2. Import data into the source table using any load method (here we use `INSERT INTO VALUES` as an example).
 
 ```SQL
 INSERT INTO testdb.test_table (user_id, name, age)
@@ -79,34 +73,13 @@ VALUES (1, "Emily", 25),
        (5, "Ava", 17);
 ```
 
-INSERT INTO is a synchronous import method, where the import result is 
directly returned to the user. You can enable [group 
commit](../import-way/group-commit-manual.md) to achieve high performance. 
-
-```JSON
-Query OK, 5 rows affected (0.308 sec)
-{'label':'label_3e52da787aab4222_9126d2fce8f6d1e5', 'status':'VISIBLE', 
'txnId':'9081'}
-```
-
-3. View imported data.
-
-```SQL
-MySQL> SELECT COUNT(*) FROM testdb.test_table;
-+----------+
-| count(*) |
-+----------+
-|        5 |
-+----------+
-1 row in set (0.179 sec)
-```
-
-**INSERT INTO SELECT**
-
-1. Building upon the above operations, create a new table as the target table 
(with the same schema as the source table).
+3. Building upon the above operations, create a new table as the target table 
(with the same schema as the source table).
 
 ```SQL
 CREATE TABLE testdb.test_table2 LIKE testdb.test_table;
 ```
 
-2. Ingest data into the new table using `INSERT INTO SELECT`.
+4. Ingest data into the new table using `INSERT INTO SELECT`.
 
 ```SQL
 INSERT INTO testdb.test_table2
@@ -115,21 +88,23 @@ Query OK, 3 rows affected (0.544 sec)
 {'label':'label_9c2bae970023407d_b2c5b78b368e78a7', 'status':'VISIBLE', 
'txnId':'9084'}
 ```
 
-3. View imported data.
+5. View imported data.
 
 ```SQL
-MySQL> SELECT COUNT(*) FROM testdb.test_table2;
-+----------+
-| count(*) |
-+----------+
-|        3 |
-+----------+
-1 row in set (0.071 sec)
+MySQL> SELECT * FROM testdb.test_table2 ORDER BY age;
++---------+--------+------+
+| user_id | name   | age  |
++---------+--------+------+
+|       5 | Ava    |   17 |
+|       1 | Emily  |   25 |
+|       3 | Olivia |   28 |
++---------+--------+------+
+3 rows in set (0.02 sec)
 ```
 
-4. You can use [JOB](../../scheduler/job-scheduler.md) make the INSERT 
operation execute asynchronously.
+6. You can use [JOB](../../scheduler/job-scheduler.md) to execute the INSERT operation asynchronously.
 
-5. Sources can be [tvf](../../../lakehouse/file.md) or tables in a 
[catalog](../../../lakehouse/database).
+7. Sources can be [tvf](../../../lakehouse/file.md) or tables in a 
[catalog](../../../lakehouse/database).
 
 ### View INSERT INTO jobs
 
@@ -165,15 +140,6 @@ INSERT INTO target_table SELECT ... FROM source_table;
 
 The SELECT statement above is similar to a regular SELECT query, allowing 
operations such as WHERE and JOIN.
 
-2. INSERT INTO VALUES
-
-INSERT INTO VALUES is typically used for testing purposes. It is not 
recommended for production environments.
-
-```SQL
-INSERT INTO target_table (col1, col2, ...)
-VALUES (val1, val2, ...), (val3, val4, ...), ...;
-```
-
 ### Parameter configuration
 
 **FE** **configuration**
@@ -194,7 +160,7 @@ enable_insert_strict
 
 - Default value: true
 - Description: If this is set to true, INSERT INTO will fail when the task 
involves invalid data. If set to false, INSERT INTO will ignore invalid rows, 
and the import will be considered successful as long as at least one row is 
imported successfully.
-- Explanation: INSERT INTO cannot control the error rate, so this parameter is 
used to either strictly check data quality or completely ignore invalid data. 
Common reasons for data invalidity include: source data column length exceeding 
destination column length, column type mismatch, partition mismatch, and column 
order mismatch.
+- Explanation: In version 2.1.4 and earlier, INSERT INTO cannot control the error rate, so this parameter is used to either strictly check data quality or completely ignore invalid data. Common reasons for data invalidity include: source data column length exceeding destination column length, column type mismatch, partition mismatch, and column order mismatch.
 
 insert_max_filter_ratio
 
diff --git a/docs/data-operate/import/import-way/insert-into-values-manual.md 
b/docs/data-operate/import/import-way/insert-into-values-manual.md
new file mode 100644
index 00000000000..1b51205512e
--- /dev/null
+++ b/docs/data-operate/import/import-way/insert-into-values-manual.md
@@ -0,0 +1,305 @@
+---
+{
+    "title": "Insert Into Values",
+    "language": "en"
+}
+---
+
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+The INSERT INTO VALUES statement supports writing data specified directly in the statement into a Doris table. INSERT INTO VALUES is a synchronous import method, where the import result is returned after the import is executed. Whether the import is successful can be determined based on the returned result. INSERT INTO VALUES ensures the atomicity of the import task, meaning that either all the data is imported successfully or none of it is imported.
+
+- INSERT INTO tbl (col1, col2, ...) VALUES (1, 2, ...), (1,3, ...)
+
+## Applicable scenarios
+
+1. If a user wants to import only a few test data records to verify the 
functionality of the Doris system, the INSERT INTO VALUES syntax is applicable. 
It is similar to the MySQL syntax. However, it is not recommended to use INSERT 
INTO VALUES in a production environment.
+2. The performance of concurrent INSERT INTO VALUES jobs is bottlenecked by the commit stage. When loading large quantities of data, you can enable [group commit](../import-way/group-commit-manual.md) to achieve higher performance.
+
+## Implementation
+
+When using INSERT INTO VALUES, the import job needs to be initiated and submitted to the FE node using the MySQL protocol. The FE generates an execution plan, which includes query-related operators, with the last operator being the OlapTableSink. The OlapTableSink operator is responsible for writing the query result to the target table. The execution plan is then sent to the BE nodes for execution. Doris designates one BE node as the Coordinator, which receives the data and distributes it to the other nodes.
+
+## Get started
+
+An INSERT INTO VALUES job is submitted and transmitted using the MySQL 
protocol. The following example demonstrates submitting an import job using 
INSERT INTO VALUES through the MySQL command-line interface.
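Because the job travels over the plain MySQL protocol, any MySQL-compatible driver can submit it as well. As a minimal sketch (the `build_insert_values` helper is our own illustration, not a Doris API, and the connection details in the trailing comment are placeholders), a multi-row VALUES statement can be assembled programmatically before being sent:

```python
# Illustrative sketch: assemble a multi-row INSERT INTO VALUES statement
# for submission over the MySQL protocol. The helper below is our own;
# connection details in the trailing comment are placeholders.

def build_insert_values(table, columns, rows):
    """Render an INSERT INTO ... VALUES statement from Python data."""
    def render(value):
        if value is None:
            return "NULL"
        if isinstance(value, str):
            return '"' + value.replace('"', '\\"') + '"'
        return str(value)

    cols = ", ".join(columns)
    vals = ",\n       ".join(
        "(" + ", ".join(render(v) for v in row) + ")" for row in rows
    )
    return f"INSERT INTO {table} ({cols})\nVALUES {vals};"

sql = build_insert_values(
    "testdb.test_table",
    ["user_id", "name", "age"],
    [(1, "Emily", 25), (2, "Benjamin", 35)],
)
print(sql)
# A MySQL driver could then submit it, e.g. (placeholders):
#   conn = pymysql.connect(host="fe-host", port=9030, user="root")
#   with conn.cursor() as cur:
#       cur.execute(sql)
```

Batching rows into a single multi-row VALUES list keeps the number of commits low, which matters because each INSERT INTO VALUES statement is one import transaction.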
+
+### Preparation
+
+INSERT INTO VALUES requires INSERT permissions on the target table. You can 
grant permissions to user accounts using the GRANT command.
+
+### Create an INSERT INTO VALUES job
+
+**INSERT INTO VALUES**
+
+1. Create a source table
+
+```SQL
+CREATE TABLE testdb.test_table(
+    user_id            BIGINT       NOT NULL COMMENT "User ID",
+    name               VARCHAR(20)           COMMENT "User name",
+    age                INT                   COMMENT "User age"
+)
+DUPLICATE KEY(user_id)
+DISTRIBUTED BY HASH(user_id) BUCKETS 10;
+```
+
+2. Import data into the source table using `INSERT INTO VALUES` (not 
recommended for production environments).
+
+```SQL
+INSERT INTO testdb.test_table (user_id, name, age)
+VALUES (1, "Emily", 25),
+       (2, "Benjamin", 35),
+       (3, "Olivia", 28),
+       (4, "Alexander", 60),
+       (5, "Ava", 17);
+```
+
+INSERT INTO VALUES is a synchronous import method, where the import result is 
directly returned to the user.
+
+```JSON
+Query OK, 5 rows affected (0.308 sec)
+{'label':'label_26eebc33411f441c_b2b286730d495e2c', 'status':'VISIBLE', 
'txnId':'61071'}
+```
+
+3. View imported data.
+
+```SQL
+MySQL> SELECT COUNT(*) FROM testdb.test_table;
++----------+
+| count(*) |
++----------+
+|        5 |
++----------+
+1 row in set (0.179 sec)
+```
+
+### View INSERT INTO VALUES jobs
+
+You can use the `SHOW LOAD` command to view the completed INSERT INTO VALUES 
tasks.
+
+```SQL
+mysql> SHOW LOAD FROM testdb\G
+*************************** 1. row ***************************
+         JobId: 77172
+         Label: label_26eebc33411f441c_b2b286730d495e2c
+         State: FINISHED
+      Progress: Unknown id: 77172
+          Type: INSERT
+       EtlInfo: NULL
+      TaskInfo: cluster:N/A; timeout(s):14400; max_filter_ratio:0.0
+      ErrorMsg: NULL
+    CreateTime: 2024-11-20 16:44:08
+  EtlStartTime: 2024-11-20 16:44:08
+ EtlFinishTime: 2024-11-20 16:44:08
+ LoadStartTime: 2024-11-20 16:44:08
+LoadFinishTime: 2024-11-20 16:44:08
+           URL: 
+    JobDetails: {"Unfinished 
backends":{},"ScannedRows":0,"TaskNumber":0,"LoadBytes":0,"All 
backends":{},"FileNumber":0,"FileSize":0}
+ TransactionId: 61071
+  ErrorTablets: {}
+          User: root
+       Comment: 
+1 row in set (0.00 sec)
+```
+
+### Cancel INSERT INTO VALUES jobs
+
+You can cancel the currently executing INSERT INTO VALUES job via Ctrl-C.
+
+## Manual
+
+### Syntax
+
+INSERT INTO VALUES is typically used for testing purposes. It is not 
recommended for production environments.
+
+```SQL
+INSERT INTO target_table (col1, col2, ...)
+VALUES (val1, val2, ...), (val3, val4, ...), ...;
+```
+
+### Parameter configuration
+
+**FE** **configuration**
+
+insert_load_default_timeout_second
+
+- Default value: 14400s (4 hours)
+- Description: Timeout for import tasks, measured in seconds. If the import 
task does not complete within this timeout period, it will be canceled by the 
system and marked as CANCELLED.
+
+**Environment parameters**
+
+insert_timeout
+
+- Default value: 14400s (4 hours)
+- Description: Timeout for the INSERT INTO VALUES statement, in seconds.
+
+enable_insert_strict
+
+- Default value: true
+- Description: If this is set to true, INSERT INTO VALUES will fail when the 
task involves invalid data. If set to false, INSERT INTO VALUES will ignore 
invalid rows, and the import will be considered successful as long as at least 
one row is imported successfully.
+- Explanation: Until version 2.1.4, INSERT INTO VALUES cannot control the 
error rate, so this parameter is used to either strictly check data quality or 
completely ignore invalid data. Common reasons for data invalidity include: 
source data column length exceeding destination column length, column type 
mismatch, partition mismatch, and column order mismatch.
+
+insert_max_filter_ratio
+
+- Default value: 1.0
+
+- Description: Since version 2.1.5. Only effective when `enable_insert_strict` is false. Used to control the error tolerance when using `INSERT INTO VALUES`. The default value is 1.0, meaning all errors are tolerated. It can be set to a decimal between 0 and 1: when the ratio of error rows exceeds this value, the INSERT task will fail.
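Read together, `enable_insert_strict` and `insert_max_filter_ratio` determine whether a given INSERT succeeds for a given number of filtered rows. A minimal sketch of that decision logic (an illustration of the documented behavior, not Doris source code):

```python
# Illustrative sketch of the documented interaction between
# enable_insert_strict and insert_max_filter_ratio (not Doris source code).

def insert_succeeds(total_rows, filtered_rows,
                    enable_insert_strict=True,
                    insert_max_filter_ratio=1.0):
    if enable_insert_strict:
        # Strict mode: any invalid row fails the whole INSERT.
        return filtered_rows == 0
    if total_rows == 0:
        return True
    # Tolerant mode: fail only when the error ratio exceeds the threshold.
    return filtered_rows / total_rows <= insert_max_filter_ratio

print(insert_succeeds(5, 1))                              # strict: False
print(insert_succeeds(5, 1, enable_insert_strict=False))  # ratio 1.0: True
print(insert_succeeds(5, 1, enable_insert_strict=False,
                      insert_max_filter_ratio=0.1))       # 0.2 > 0.1: False
```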
+
+### Return values
+
+INSERT INTO VALUES is a SQL statement, and it returns a JSON string in its 
results.
+
+Parameters in the JSON string:
+
+| Parameter | Description                                                  |
+| --------- | ------------------------------------------------------------ |
+| Label     | Label of the import job: can be specified using "INSERT INTO tbl 
WITH LABEL label..." |
+| Status    | Visibility of the imported data. `visible`: the import succeeded and the data is visible. `committed`: the import is complete, but the data may be delayed in becoming visible; there is no need to retry. `Label Already Exists`: the label is duplicated and a new label is needed. `Fail`: the import failed. |
+| Err       | Error message                                                |
+| TxnId     | ID of the import transaction                                 |
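The returned string uses single quotes, so it is a Python-style dict literal rather than strict JSON. A hedged sketch of extracting the fields on the client side (the parsing approach is our own; any quote-tolerant parser works):

```python
# Sketch: parse the {'label': ..., 'status': ..., 'txnId': ...} line that
# follows an INSERT result. It uses single quotes, so ast.literal_eval
# handles it where a strict JSON parser would reject it.
import ast

result_line = ("{'label':'label_26eebc33411f441c_b2b286730d495e2c', "
               "'status':'VISIBLE', 'txnId':'61071'}")

info = ast.literal_eval(result_line)
print(info["status"], info["txnId"])  # VISIBLE 61071

# VISIBLE means the data can be read now; COMMITTED means the import is
# done but visibility may lag, and no retry is needed in either case.
import_done = info["status"] in ("VISIBLE", "COMMITTED")
```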
+
+**Successful INSERT**
+
+```sql
+mysql> INSERT INTO test_table (user_id, name, age) VALUES (1, "Emily", 25), 
(2, "Benjamin", 35), (3, "Olivia", 28), (NULL, "Alexander", 60), (5, "Ava", 17);
+Query OK, 5 rows affected (0.05 sec)
+{'label':'label_26eebc33411f441c_b2b286730d495e2c', 'status':'VISIBLE', 
'txnId':'61071'}
+```
+
+`Query OK` indicates successful execution. `5 rows affected` indicates that a 
total of 5 rows of data have been imported. 
+
+**Successful INSERT with warnings**
+
+```sql
+mysql> INSERT INTO test_table (user_id, name, age) VALUES (1, "Emily", 25), 
(2, "Benjamin", 35), (3, "Olivia", 28), (NULL, "Alexander", 60), (5, "Ava", 17);
+Query OK, 4 rows affected, 1 warning (0.04 sec)
+{'label':'label_a8d99ae931194d2b_93357aac59981a18', 'status':'VISIBLE', 
'txnId':'61068'}
+```
+
+`Query OK` indicates successful execution. `4 rows affected` indicates that a total of 4 rows of data have been imported. `1 warning` indicates that 1 row was filtered out.
+
+You can use the [SHOW 
LOAD](../../../sql-statements/data-modification/load-and-export/SHOW-LOAD.md) 
statement to view the filtered rows.
+
+The result of this statement will include a URL that can be used to query the 
error data. For more details, refer to the "View error rows" section below.
+
+```sql
+mysql> SHOW LOAD WHERE label="label_a8d99ae931194d2b_93357aac59981a18"\G
+*************************** 1. row ***************************
+         JobId: 77158
+         Label: label_a8d99ae931194d2b_93357aac59981a18
+         State: FINISHED
+      Progress: Unknown id: 77158
+          Type: INSERT
+       EtlInfo: NULL
+      TaskInfo: cluster:N/A; timeout(s):14400; max_filter_ratio:0.0
+      ErrorMsg: NULL
+    CreateTime: 2024-11-20 16:35:40
+  EtlStartTime: 2024-11-20 16:35:40
+ EtlFinishTime: 2024-11-20 16:35:40
+ LoadStartTime: 2024-11-20 16:35:40
+LoadFinishTime: 2024-11-20 16:35:40
+           URL: 
http://10.16.10.7:8743/api/_load_error_log?file=__shard_18/error_log_insert_stmt_a8d99ae931194d2b-93357aac59981a19_a8d99ae931194d2b_93357aac59981a19
+    JobDetails: {"Unfinished 
backends":{},"ScannedRows":0,"TaskNumber":0,"LoadBytes":0,"All 
backends":{},"FileNumber":0,"FileSize":0}
+ TransactionId: 61068
+  ErrorTablets: {}
+          User: root
+       Comment: 
+1 row in set (0.00 sec)
+```
+
+**Successful INSERT with committed status**
+
+```sql
+mysql> INSERT INTO test_table (user_id, name, age) VALUES (1, "Emily", 25), 
(2, "Benjamin", 35), (3, "Olivia", 28), (4, "Alexander", 60), (5, "Ava", 17);
+Query OK, 5 rows affected (0.04 sec)
+{'label':'label_78bf5396d9594d4d_a8d9a914af40f73d', 'status':'COMMITTED', 
'txnId':'61074'}
+```
+
+The invisible state of data is temporary, and the data will eventually become 
visible. 
+
+You can check the visibility status of a batch of data using the [SHOW 
TRANSACTION](../../../sql-manual/sql-statements/transaction/SHOW-TRANSACTION.md)
 statement.
+
+If the `TransactionStatus` column in the result is `visible`, it indicates 
that the data is visible.
+
+```sql
+mysql> SHOW TRANSACTION WHERE id=61074\G
+*************************** 1. row ***************************
+     TransactionId: 61074
+             Label: label_78bf5396d9594d4d_a8d9a914af40f73d
+       Coordinator: FE: 10.16.10.7
+ TransactionStatus: VISIBLE
+ LoadJobSourceType: INSERT_STREAMING
+       PrepareTime: 2024-11-20 16:51:54
+     PreCommitTime: NULL
+        CommitTime: 2024-11-20 16:51:54
+       PublishTime: 2024-11-20 16:51:54
+        FinishTime: 2024-11-20 16:51:54
+            Reason: 
+ErrorReplicasCount: 0
+        ListenerId: -1
+         TimeoutMs: 14400000
+            ErrMsg: 
+1 row in set (0.00 sec)
+```
+
+**Failed INSERT**
+
+Failed execution means that no data was successfully imported. An error 
message will be returned:
+
+```sql
+mysql> INSERT INTO test_table (user_id, name, age) VALUES (1, "Emily", 25), 
(2, "Benjamin", 35), (3, "Olivia", 28), (NULL, "Alexander", 60), (5, "Ava", 17);
+ERROR 1105 (HY000): errCode = 2, detailMessage = Insert has too many filtered 
data 1/5 insert_max_filter_ratio is 0.100000. url: 
http://10.16.10.7:8747/api/_load_error_log?file=__shard_22/error_log_insert_stmt_5fafe6663e1a45e0-a666c1722ffc8c55_5fafe6663e1a45e0_a666c1722ffc8c55
+```
+
+`ERROR 1105 (HY000): errCode = 2, detailMessage = Insert has too many filtered 
data 1/5 insert_max_filter_ratio is 0.100000.` indicates the root cause for the 
failure. The URL provided in the error message can be used to locate the error 
data. For more details, refer to the "View error rows" section below.
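The URL embedded in such an error message is what `SHOW LOAD WARNINGS ON` expects. A small client-side sketch of extracting it (the regular expression is our own illustration):

```python
# Sketch: extract the error-log URL from a failed INSERT error message so
# it can be passed to SHOW LOAD WARNINGS ON. The regex is our own.
import re

err = ("errCode = 2, detailMessage = Insert has too many filtered data 1/5 "
       "insert_max_filter_ratio is 0.100000. url: "
       "http://10.16.10.7:8747/api/_load_error_log?file=__shard_22/"
       "error_log_insert_stmt_5fafe6663e1a45e0-a666c1722ffc8c55")

match = re.search(r"url:\s*(\S+)", err)
url = match.group(1) if match else None
print(url)

# Feed the URL back to Doris to inspect the filtered rows:
show_warnings = f'SHOW LOAD WARNINGS ON "{url}";'
```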
+
+## Best practice
+
+### Data size
+
+INSERT INTO VALUES is usually used for testing and demos; it is not recommended for loading large quantities of data.
+
+### View error rows
+
+When the INSERT INTO result includes a URL field, you can use the following 
command to view the error rows:
+
+```SQL
+SHOW LOAD WARNINGS ON "url";
+```
+
+Example:
+
+```sql
+mysql> SHOW LOAD WARNINGS ON 
"http://10.16.10.7:8743/api/_load_error_log?file=__shard_18/error_log_insert_stmt_a8d99ae931194d2b-93357aac59981a19_a8d99ae931194d2b_93357aac59981a19"\G
+*************************** 1. row ***************************
+         JobId: -1
+         Label: NULL
+ErrorMsgDetail: Reason: column_name[user_id], null value for not null column, 
type=BIGINT. src line []; 
+1 row in set (0.00 sec)
+```
+
+Common reasons for errors include: source data column length exceeding 
destination column length, column type mismatch, partition mismatch, and column 
order mismatch.
+
+You can control whether INSERT INTO ignores error rows by configuring the 
environment variable `enable_insert_strict`.
+
+## More help
+
+For more detailed syntax on INSERT INTO, refer to the [INSERT 
INTO](../../../sql-statements/data-modification/DML/INSERT.md) command manual. 
You can also type `HELP INSERT` at the MySQL client command line for further 
information.
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/insert-into-manual.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/insert-into-manual.md
index 33943c869b6..c306a77f585 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/insert-into-manual.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/insert-into-manual.md
@@ -27,21 +27,15 @@ under the License.
 
 INSERT INTO supports importing the results of a Doris query into another table. INSERT INTO is a synchronous import method: the import result is returned after execution, and you can determine from it whether the import succeeded. INSERT INTO guarantees the atomicity of the import task: either all of the data is imported successfully, or none of it is.
 
-The main Insert Into commands come in the following two forms:
-
 - INSERT INTO tbl SELECT ...
 
-- INSERT INTO tbl (col1, col2, ...) VALUES (1, 2, ...), (1,3, ...)
-
 ## Applicable scenarios
 
-1. If a user wants to import only a few test records to verify the functionality of the Doris system, the INSERT INTO VALUES syntax is applicable; it is the same as the MySQL syntax. INSERT INTO VALUES is not recommended in production environments.
-
-2. If a user wants to perform ETL on data already in a Doris table and import it into a new Doris table, the INSERT INTO SELECT syntax is applicable.
+1. If a user wants to perform ETL on data already in a Doris table and import it into a new Doris table, the INSERT INTO SELECT syntax is applicable.
 
-3. In conjunction with the Multi-Catalog external table mechanism, tables in MySQL or Hive systems can be mapped via Multi-Catalog, and data in the external tables can then be imported into Doris tables using the INSERT INTO SELECT syntax.
+2. In conjunction with the Multi-Catalog external table mechanism, tables in MySQL or Hive systems can be mapped via Multi-Catalog, and data in the external tables can then be imported into Doris tables using the INSERT INTO SELECT syntax.
 
-4. With the Table Value Function (TVF) feature, files on object storage or HDFS can be queried directly as tables, with automatic column type inference. Data in the external tables can then be imported into Doris tables using the INSERT INTO SELECT syntax.
+3. With the Table Value Function (TVF) feature, files on object storage or HDFS can be queried directly as tables, with automatic column type inference. Data in the external tables can then be imported into Doris tables using the INSERT INTO SELECT syntax.
 
 ## Implementation
 
@@ -59,8 +53,6 @@ INSERT INTO requires INSERT permission on the target table. If you do not have INSERT permission,
 
 ### Create an import job
 
-**INSERT INTO VALUES**
-
 1. Create a source table
 
 ```sql
@@ -73,7 +65,7 @@ DUPLICATE KEY(user_id)
 DISTRIBUTED BY HASH(user_id) BUCKETS 10;
 ```
 
-2. Import data into the source table using INSERT INTO VALUES (not recommended in production environments)
+2. Import data into the source table using any load method (here we use INSERT INTO VALUES as an example)
 
 ```sql
 INSERT INTO testdb.test_table (user_id, name, age)
@@ -84,34 +76,13 @@ VALUES (1, "Emily", 25),
        (5, "Ava", 17);
 ```
 
-INSERT INTO is a synchronous import method; the import result is returned directly to the user. You can enable [group commit](../import-way/group-commit-manual.md) for higher performance.
-
-```JSON
-Query OK, 5 rows affected (0.308 sec)
-{'label':'label_3e52da787aab4222_9126d2fce8f6d1e5', 'status':'VISIBLE', 'txnId':'9081'}
-```
-
-3. View the imported data
-
-```sql
-MySQL> SELECT COUNT(*) FROM testdb.test_table;
-+----------+
-| count(*) |
-+----------+
-|        5 |
-+----------+
-1 row in set (0.179 sec)
-```
-
-**INSERT INTO SELECT**
-
-1. Building on the above operations, create a new table as the target table (with the same schema as the source table)
+3. Building on the above operations, create a new table as the target table (with the same schema as the source table)
 
 ```sql
 CREATE TABLE testdb.test_table2 LIKE testdb.test_table;
 ```
 
-2. Ingest data into the new table using INSERT INTO SELECT
+4. Ingest data into the new table using INSERT INTO SELECT
 
 ```sql
 INSERT INTO testdb.test_table2
@@ -120,21 +91,23 @@ Query OK, 3 rows affected (0.544 sec)
 {'label':'label_9c2bae970023407d_b2c5b78b368e78a7', 'status':'VISIBLE', 
'txnId':'9084'}
 ```
 
-3. View the imported data
+5. View the imported data
 
 ```sql
-MySQL> SELECT COUNT(*) FROM testdb.test_table2;
-+----------+
-| count(*) |
-+----------+
-|        3 |
-+----------+
-1 row in set (0.071 sec)
+MySQL> SELECT * FROM testdb.test_table2 ORDER BY age;
++---------+--------+------+
+| user_id | name   | age  |
++---------+--------+------+
+|       5 | Ava    |   17 |
+|       1 | Emily  |   25 |
+|       3 | Olivia |   28 |
++---------+--------+------+
+3 rows in set (0.02 sec)
 ```
 
-4. You can use [JOB](../../scheduler/job-scheduler.md) to execute INSERT asynchronously.
+6. You can use [JOB](../../scheduler/job-scheduler.md) to execute INSERT asynchronously.
 
-5. Sources can be [tvf](../../../lakehouse/file.md) or tables in a [catalog](../../../lakehouse/database).
+7. Sources can be [tvf](../../../lakehouse/file.md) or tables in a [catalog](../../../lakehouse/database).
 
 ### View import jobs
 
@@ -158,10 +131,6 @@ MySQL> SHOW LOAD FROM testdb;
 
 ### Import command
 
-The INSERT INTO import syntax is as follows:
-
-1. INSERT INTO SELECT
-
 INSERT INTO SELECT is generally used to save query results into the target table.
 
 ```sql
@@ -170,15 +139,6 @@ INSERT INTO target_table SELECT ... FROM source_table;
 
 The SELECT statement here is the same as an ordinary SELECT query and can include operations such as WHERE and JOIN.
 
-2. INSERT INTO VALUES
-
-INSERT INTO VALUES is generally used only for demos and is not recommended in production environments.
-
-```sql
-INSERT INTO target_table (col1, col2, ...)
-VALUES (val1, val2, ...), (val3, val4, ...), ...;
-```
-
 ### Import configuration parameters
 
 **01 FE configuration**
@@ -203,7 +163,7 @@ VALUES (val1, val2, ...), (val3, val4, ...), ...;
 
 - Parameter description: If set to true, the import fails when INSERT INTO encounters invalid data. If set to false, INSERT INTO ignores invalid rows, and the import succeeds as long as at least one row is imported correctly.
 
-- Explanation: INSERT INTO cannot control the error rate; this parameter can only be set to strictly check data quality or to completely ignore invalid data. Common reasons for invalid data include: source column length exceeding destination column length, column type mismatch, partition mismatch, and column order mismatch.
+- Explanation: In version 2.1.4 and earlier, INSERT INTO cannot control the error rate; this parameter can only be set to strictly check data quality or to completely ignore invalid data. Common reasons for invalid data include: source column length exceeding destination column length, column type mismatch, partition mismatch, and column order mismatch.
 
 **insert_max_filter_ratio**
 
@@ -211,10 +171,6 @@ VALUES (val1, val2, ...), (val3, val4, ...), ...;
 
 - Parameter description: Since version 2.1.5. Effective only when `enable_insert_strict` is false. Controls the error tolerance when using `INSERT INTO FROM S3/HDFS/LOCAL()`. The default of 1.0 means all errors are tolerated. It can be a decimal between 0 and 1, meaning the INSERT task fails when the proportion of error rows exceeds this value.
 
-**enable_nereids_dml_with_pipeline**
-
-  When set to `true`, `insert into` statements will attempt to execute through the Pipeline engine. See the [Load](../load-manual) documentation for details.
-
 ### Import return values
 
 INSERT INTO is a SQL statement, and its return result falls into the following cases depending on the query result:
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/insert-into-values-manual.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/insert-into-values-manual.md
new file mode 100644
index 00000000000..1c24bbf2a0f
--- /dev/null
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/insert-into-values-manual.md
@@ -0,0 +1,309 @@
+---
+{
+    "title": "Insert Into Values",
+    "language": "zh-CN"
+}
+---
+
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+INSERT INTO VALUES supports writing data specified directly in the statement into a Doris table. INSERT INTO VALUES is a synchronous import method: the import result is returned after execution, and you can determine from it whether the import succeeded. INSERT INTO VALUES guarantees the atomicity of the import task: either all of the data is imported successfully, or none of it is.
+
+- INSERT INTO tbl (col1, col2, ...) VALUES (1, 2, ...), (1,3, ...)
+
+## Applicable scenarios
+
+1. If a user wants to import only a few test records to verify the functionality of the Doris system, the INSERT INTO VALUES syntax is applicable; it is the same as the MySQL syntax.
+2. The performance of concurrent INSERT INTO VALUES jobs is bottlenecked by the commit stage. When loading large amounts of data, you can enable [group commit](../import-way/group-commit-manual.md) for higher performance.
+
+## Implementation
+
+When using INSERT INTO VALUES, the import job is initiated over the MySQL protocol and submitted to the FE node. The FE generates an execution plan in which the leading operators are query-related and the last operator is OlapTableSink, which writes the query result to the target table. The execution plan is sent to the BE nodes for execution, and Doris designates one node as the Coordinator, which receives the data and distributes it to the other nodes.
+
+## Get started
+
+INSERT INTO VALUES jobs are submitted and transmitted over the MySQL protocol. The following example demonstrates submitting an import job with INSERT INTO VALUES through the MySQL command line.
+
+For detailed syntax, see [INSERT INTO](../../../sql-statements/data-modification/DML/INSERT.md).
+
+### Preparation
+
+INSERT INTO VALUES requires INSERT permission on the target table. If you do not have INSERT permission, you can grant it to the user account with the [GRANT](../../../sql-manual/sql-statements/Account-Management-Statements/GRANT) command.
+
+### Create an import job
+
+**INSERT INTO VALUES**
+
+1. Create a source table
+
+```sql
+CREATE TABLE testdb.test_table(
+    user_id            BIGINT       NOT NULL COMMENT "user id",
+    name               VARCHAR(20)           COMMENT "name",
+    age                INT                   COMMENT "age"
+)
+DUPLICATE KEY(user_id)
+DISTRIBUTED BY HASH(user_id) BUCKETS 10;
+```
+
+2. Import data into the source table using INSERT INTO VALUES (not recommended in production environments)
+
+```sql
+INSERT INTO testdb.test_table (user_id, name, age)
+VALUES (1, "Emily", 25),
+       (2, "Benjamin", 35),
+       (3, "Olivia", 28),
+       (4, "Alexander", 60),
+       (5, "Ava", 17);
+```
+
+INSERT INTO VALUES is a synchronous import method; the import result is returned directly to the user.
+
+```JSON
+Query OK, 5 rows affected (0.308 sec)
+{'label':'label_26eebc33411f441c_b2b286730d495e2c', 'status':'VISIBLE', 
'txnId':'61071'}
+```
+
+3. View the imported data
+
+```sql
+MySQL> SELECT COUNT(*) FROM testdb.test_table;
++----------+
+| count(*) |
++----------+
+|        5 |
++----------+
+1 row in set (0.179 sec)
+```
+
+### View import jobs
+
+You can use the `SHOW LOAD` command to view completed INSERT INTO VALUES tasks.
+
+```sql
+mysql> SHOW LOAD FROM testdb\G
+*************************** 1. row ***************************
+         JobId: 77172
+         Label: label_26eebc33411f441c_b2b286730d495e2c
+         State: FINISHED
+      Progress: Unknown id: 77172
+          Type: INSERT
+       EtlInfo: NULL
+      TaskInfo: cluster:N/A; timeout(s):14400; max_filter_ratio:0.0
+      ErrorMsg: NULL
+    CreateTime: 2024-11-20 16:44:08
+  EtlStartTime: 2024-11-20 16:44:08
+ EtlFinishTime: 2024-11-20 16:44:08
+ LoadStartTime: 2024-11-20 16:44:08
+LoadFinishTime: 2024-11-20 16:44:08
+           URL: 
+    JobDetails: {"Unfinished 
backends":{},"ScannedRows":0,"TaskNumber":0,"LoadBytes":0,"All 
backends":{},"FileNumber":0,"FileSize":0}
+ TransactionId: 61071
+  ErrorTablets: {}
+          User: root
+       Comment: 
+1 row in set (0.00 sec)
+```
+
+### Cancel import jobs
+
+You can cancel the currently executing INSERT INTO VALUES job via Ctrl-C.
+
+## Manual
+
+### Import command
+
+INSERT INTO VALUES is generally used only for demos and is not recommended in production environments.
+
+```sql
+INSERT INTO target_table (col1, col2, ...)
+VALUES (val1, val2, ...), (val3, val4, ...), ...;
+```
+
+### 导入配置参数
+
+**01 FE 配置**
+
+**insert_load_default_timeout_second**
+
+- 默认值:14400(4 小时)
+
+- 参数描述:导入任务的超时时间,单位:秒。导入任务在该超时时间内未完成则会被系统取消,变成 CANCELLED。
+
+**02 环境变量**
+
+**insert_timeout**
+
+- 默认值:14400(4 小时)
+
+- 参数描述:INSERT INTO VALUES 作为 SQL 语句的的超时时间,单位:秒。
+
+**enable_insert_strict**
+
+- Default value: true
+
+- Description: If set to true, the load fails when INSERT INTO VALUES encounters unqualified data. If set to false, INSERT INTO VALUES ignores unqualified rows, and the load succeeds as long as at least one row is loaded correctly.
+
+- Note: In version 2.1.4 and earlier, INSERT INTO VALUES could not control the error rate; this parameter could only make it either strictly check data quality or ignore erroneous data entirely. Common causes of unqualified data include: source column length exceeding the destination column length, column type mismatches, partition mismatches, and column order mismatches.
+
+**insert_max_filter_ratio**
+
+- Default value: 1.0
+
+- Description: Available since version 2.1.5. Takes effect only when `enable_insert_strict` is false. Controls the error tolerance ratio of `INSERT INTO VALUES`. The default 1.0 tolerates all errors. It can be a decimal between 0 and 1; the INSERT task fails when the proportion of error rows exceeds this ratio.
+
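+The two variables above work together. A minimal sketch, reusing the `testdb.test_table` from the quick start (the ratio chosen here is illustrative, not a recommendation):
+
+```sql
+-- Relax strict mode so unqualified rows are filtered
+-- instead of failing the whole INSERT (session-level setting).
+SET enable_insert_strict = false;
+-- Since 2.1.5: fail the INSERT only if more than half of the rows
+-- are filtered out.
+SET insert_max_filter_ratio = 0.5;
+-- The NULL user_id violates the NOT NULL constraint and is filtered;
+-- the other row is still loaded because 1/2 does not exceed 0.5.
+INSERT INTO testdb.test_table (user_id, name, age)
+VALUES (6, "Mia", 22),
+       (NULL, "Noah", 30);
+```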
+### Load return values
+
+INSERT INTO VALUES is a SQL statement, and its return result contains a JSON string.
+
+The fields in it are described in the following table:
+
+| Field  | Description                                                  |
+| ------ | ------------------------------------------------------------ |
+| Label  | The label of the load job, specified via INSERT INTO tbl WITH LABEL label ... |
+| Status | Whether the loaded data is visible. If visible, `visible` is shown; if not, `committed` is shown.<p>- `visible`: the load succeeded and the data is visible</p> <p>- `committed`: the load has finished, but the data may become visible with a delay; there is no need to retry</p> <p>- Label Already Exists: the label is duplicated and a different label is needed</p> <p>- Fail: the load failed</p> |
+| Err    | The error message of the load                                |
+| TxnId  | The ID of the load transaction                               |
+
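+The Label row above can be set explicitly, which makes the job easy to locate afterwards. A small sketch (the label name is made up for illustration):
+
+```sql
+-- Assign a custom label to the load job.
+INSERT INTO testdb.test_table WITH LABEL my_insert_values_demo (user_id, name, age)
+VALUES (7, "Lucas", 41);
+
+-- Look the job up by that label later.
+SHOW LOAD WHERE label = "my_insert_values_demo"\G
+```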
+**Successful INSERT**
+
+```sql
+mysql> INSERT INTO test_table (user_id, name, age) VALUES (1, "Emily", 25), 
(2, "Benjamin", 35), (3, "Olivia", 28), (NULL, "Alexander", 60), (5, "Ava", 17);
+Query OK, 5 rows affected (0.05 sec)
+{'label':'label_26eebc33411f441c_b2b286730d495e2c', 'status':'VISIBLE', 
'txnId':'61071'}
+```
+
+`Query OK` indicates successful execution. `5 rows affected` indicates that 5 rows in total were loaded.
+
+**Successful INSERT with warnings**
+
+```sql
+mysql> INSERT INTO test_table (user_id, name, age) VALUES (1, "Emily", 25), 
(2, "Benjamin", 35), (3, "Olivia", 28), (NULL, "Alexander", 60), (5, "Ava", 17);
+Query OK, 4 rows affected, 1 warning (0.04 sec)
+{'label':'label_a8d99ae931194d2b_93357aac59981a18', 'status':'VISIBLE', 
'txnId':'61068'}
+```
+
+`Query OK` indicates successful execution. `4 rows affected` indicates that 4 rows in total were loaded. `1 warning` indicates that 1 row was filtered out.
+
+To inspect the filtered rows, use the [SHOW LOAD](../../../sql-manual/sql-statements/data-modification/load-and-export/SHOW-LOAD.md) statement. The URL in its result can be used to query the erroneous data; see the Viewing Error Rows section below.
+
+```sql
+mysql> SHOW LOAD WHERE label="label_a8d99ae931194d2b_93357aac59981a18"\G
+*************************** 1. row ***************************
+         JobId: 77158
+         Label: label_a8d99ae931194d2b_93357aac59981a18
+         State: FINISHED
+      Progress: Unknown id: 77158
+          Type: INSERT
+       EtlInfo: NULL
+      TaskInfo: cluster:N/A; timeout(s):14400; max_filter_ratio:0.0
+      ErrorMsg: NULL
+    CreateTime: 2024-11-20 16:35:40
+  EtlStartTime: 2024-11-20 16:35:40
+ EtlFinishTime: 2024-11-20 16:35:40
+ LoadStartTime: 2024-11-20 16:35:40
+LoadFinishTime: 2024-11-20 16:35:40
+           URL: 
http://10.16.10.7:8743/api/_load_error_log?file=__shard_18/error_log_insert_stmt_a8d99ae931194d2b-93357aac59981a19_a8d99ae931194d2b_93357aac59981a19
+    JobDetails: {"Unfinished 
backends":{},"ScannedRows":0,"TaskNumber":0,"LoadBytes":0,"All 
backends":{},"FileNumber":0,"FileSize":0}
+ TransactionId: 61068
+  ErrorTablets: {}
+          User: root
+       Comment: 
+1 row in set (0.00 sec)
+```
+
+**Successful INSERT but status is committed**
+
+```sql
+mysql> INSERT INTO test_table (user_id, name, age) VALUES (1, "Emily", 25), 
(2, "Benjamin", 35), (3, "Olivia", 28), (4, "Alexander", 60), (5, "Ava", 17);
+Query OK, 5 rows affected (0.04 sec)
+{'label':'label_78bf5396d9594d4d_a8d9a914af40f73d', 'status':'COMMITTED', 
'txnId':'61074'}
+```
+
+Invisible data is a temporary state; this batch of data will eventually become visible.
+
+The visibility status of the batch can be checked with the [SHOW TRANSACTION](../../../sql-manual/sql-statements/transaction/SHOW-TRANSACTION.md) statement.
+When the `TransactionStatus` column in the result becomes `VISIBLE`, the data is visible.
+
+```sql
+mysql> SHOW TRANSACTION WHERE id=61074\G
+*************************** 1. row ***************************
+     TransactionId: 61074
+             Label: label_78bf5396d9594d4d_a8d9a914af40f73d
+       Coordinator: FE: 10.16.10.7
+ TransactionStatus: VISIBLE
+ LoadJobSourceType: INSERT_STREAMING
+       PrepareTime: 2024-11-20 16:51:54
+     PreCommitTime: NULL
+        CommitTime: 2024-11-20 16:51:54
+       PublishTime: 2024-11-20 16:51:54
+        FinishTime: 2024-11-20 16:51:54
+            Reason: 
+ErrorReplicasCount: 0
+        ListenerId: -1
+         TimeoutMs: 14400000
+            ErrMsg: 
+1 row in set (0.00 sec)
+```
+
+**Failed INSERT**
+
+A failed execution means that no data was loaded at all, and a result like the following is returned:
+
+```sql
+mysql> INSERT INTO test_table (user_id, name, age) VALUES (1, "Emily", 25), 
(2, "Benjamin", 35), (3, "Olivia", 28), (NULL, "Alexander", 60), (5, "Ava", 17);
+ERROR 1105 (HY000): errCode = 2, detailMessage = Insert has too many filtered 
data 1/5 insert_max_filter_ratio is 0.100000. url: 
http://10.16.10.7:8747/api/_load_error_log?file=__shard_22/error_log_insert_stmt_5fafe6663e1a45e0-a666c1722ffc8c55_5fafe6663e1a45e0_a666c1722ffc8c55
+```
+
+`ERROR 1105 (HY000): errCode = 2, detailMessage = Insert has too many filtered data 1/5 insert_max_filter_ratio is 0.100000.` shows the reason for the failure. The url at the end can be used to query the erroneous data; see the Viewing Error Rows section below.
+
+## Best practices
+
+### Data volume
+
+INSERT INTO VALUES is usually used for testing and demonstration; it is not recommended for loading large amounts of data.
+
+### Viewing error rows
+
+When the INSERT INTO result contains a url field, the error rows can be viewed with the following statement:
+
+```sql
+SHOW LOAD WARNINGS ON "url";
+```
+
+Example:
+
+```sql
+mysql> SHOW LOAD WARNINGS ON 
"http://10.16.10.7:8743/api/_load_error_log?file=__shard_18/error_log_insert_stmt_a8d99ae931194d2b-93357aac59981a19_a8d99ae931194d2b_93357aac59981a19"\G
+*************************** 1. row ***************************
+         JobId: -1
+         Label: NULL
+ErrorMsgDetail: Reason: column_name[user_id], null value for not null column, 
type=BIGINT. src line []; 
+1 row in set (0.00 sec)
+```
+
+Common causes of errors include: source column length exceeding the destination column length, column type mismatches, partition mismatches, and column order mismatches.
+
+The session variable `enable_insert_strict` controls whether INSERT INTO ignores error rows.
+
+## More help
+
+For more details on the Insert Into syntax, see the [INSERT INTO](../../../sql-manual/sql-statements/data-modification/DML/INSERT.md) manual, or run `HELP INSERT` in the MySQL client command line.
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/import/import-way/insert-into-manual.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/import/import-way/insert-into-manual.md
index c172ba6088f..0df15976a70 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/import/import-way/insert-into-manual.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/import/import-way/insert-into-manual.md
@@ -27,21 +27,15 @@ under the License.
 
 INSERT INTO supports loading the results of a Doris query into another table. INSERT INTO is a synchronous loading method that returns the load result upon completion. Whether the load succeeded can be determined from the returned result. INSERT INTO guarantees the atomicity of the load task: either all of the data is loaded successfully, or none of it is.
 
-The main Insert Into commands come in the following two forms:
-
 - INSERT INTO tbl SELECT ...
 
-- INSERT INTO tbl (col1, col2, ...) VALUES (1, 2, ...), (1,3, ...)
-
 ## Use cases
 
-1. Users want to load just a few rows of test data to verify the functionality of the Doris system. The INSERT INTO VALUES syntax, identical to MySQL's, is suitable here. Using INSERT INTO VALUES in production is not recommended.
-
-2. Users want to ETL data that is already in a Doris table into a new Doris table; the INSERT INTO SELECT syntax is suitable here.
+1. Users want to ETL data that is already in a Doris table into a new Doris table; the INSERT INTO SELECT syntax is suitable here.
 
-3. Working with the Multi-Catalog external table mechanism, e.g. mapping tables in MySQL or Hive via Multi-Catalog and then loading the data from the external tables into Doris tables with INSERT INTO SELECT.
+2. Working with the Multi-Catalog external table mechanism, e.g. mapping tables in MySQL or Hive via Multi-Catalog and then loading the data from the external tables into Doris tables with INSERT INTO SELECT.
 
-4. With the Table Value Function (TVF) feature, files on object storage or HDFS can be queried directly as tables, with automatic column type inference, and the data can then be loaded into Doris tables with INSERT INTO SELECT.
+3. With the Table Value Function (TVF) feature, files on object storage or HDFS can be queried directly as tables, with automatic column type inference, and the data can then be loaded into Doris tables with INSERT INTO SELECT.
 
 ## How it works
 
@@ -59,8 +53,6 @@ INSERT INTO requires the INSERT privilege on the target table. If there is no INSERT privilege,
 
 ### Creating a load job
 
-**INSERT INTO VALUES**
-
 1. Create a source table
 
 ```sql
@@ -73,7 +65,7 @@ DUPLICATE KEY(user_id)
 DISTRIBUTED BY HASH(user_id) BUCKETS 10;
 ```
 
-2. Load data into the source table with INSERT INTO VALUES (not recommended in production)
+2. Load data into the source table by any method (INSERT INTO VALUES is used here as an example)
 
 ```sql
 INSERT INTO testdb.test_table (user_id, name, age)
@@ -84,34 +76,13 @@ VALUES (1, "Emily", 25),
        (5, "Ava", 17);
 ```
 
-INSERT INTO is a synchronous loading method; the load result is returned to the user directly. [group commit](../import-way/group-commit-manual.md) can be enabled for higher performance.
-
-```JSON
-Query OK, 5 rows affected (0.308 sec)
-{'label':'label_3e52da787aab4222_9126d2fce8f6d1e5', 'status':'VISIBLE', 
'txnId':'9081'}
-```
-
-3. Check the loaded data
-
-```sql
-MySQL> SELECT COUNT(*) FROM testdb.test_table;
-+----------+
-| count(*) |
-+----------+
-|        5 |
-+----------+
-1 row in set (0.179 sec)
-```
-
-**INSERT INTO SELECT**
-
-1. Building on the steps above, create a new table as the target table (with the same schema as the source table)
+3. Building on the steps above, create a new table as the target table (with the same schema as the source table)
 
 ```sql
 CREATE TABLE testdb.test_table2 LIKE testdb.test_table;
 ```
 
-2. Load into the new table with INSERT INTO SELECT
+4. Load into the new table with INSERT INTO SELECT
 
 ```sql
 INSERT INTO testdb.test_table2
@@ -120,21 +91,23 @@ Query OK, 3 rows affected (0.544 sec)
 {'label':'label_9c2bae970023407d_b2c5b78b368e78a7', 'status':'VISIBLE', 
'txnId':'9084'}
 ```
 
-3. Check the loaded data
+5. Check the loaded data
 
 ```sql
-MySQL> SELECT COUNT(*) FROM testdb.test_table2;
-+----------+
-| count(*) |
-+----------+
-|        3 |
-+----------+
-1 row in set (0.071 sec)
+MySQL> SELECT * FROM testdb.test_table2 ORDER BY age;
++---------+--------+------+
+| user_id | name   | age  |
++---------+--------+------+
+|       5 | Ava    |   17 |
+|       1 | Emily  |   25 |
+|       3 | Olivia |   28 |
++---------+--------+------+
+3 rows in set (0.02 sec)
 ```
 
-4. INSERT can be executed asynchronously using [JOB](../../scheduler/job-scheduler.md).
+6. INSERT can be executed asynchronously using [JOB](../../scheduler/job-scheduler.md).
 
-5. The data source can be a [tvf](../../../lakehouse/file.md) or a table in a [catalog](../../../lakehouse/database).
+7. The data source can be a [tvf](../../../lakehouse/file.md) or a table in a [catalog](../../../lakehouse/database).
 
 ### Checking load jobs
 
@@ -158,10 +131,6 @@ MySQL> SHOW LOAD FROM testdb;
 
 ### Load command
 
-The INSERT INTO load syntax is as follows:
-
-1. INSERT INTO SELECT
-
 INSERT INTO SELECT is generally used to save query results into the target table.
 
 ```sql
@@ -170,15 +139,6 @@ INSERT INTO target_table SELECT ... FROM source_table;
 
 The SELECT statement here is the same as an ordinary SELECT query and may include operations such as WHERE and JOIN.
 
-2. INSERT INTO VALUES
-
-INSERT INTO VALUES is generally intended for demos only and is not recommended in production.
-
-```sql
-INSERT INTO target_table (col1, col2, ...)
-VALUES (val1, val2, ...), (val3, val4, ...), ...;
-```
-
 ### Load configuration parameters
 
 **01 FE configuration**
@@ -203,7 +163,7 @@ VALUES (val1, val2, ...), (val3, val4, ...), ...;
 
 - Description: If set to true, the load fails when INSERT INTO encounters unqualified data. If set to false, INSERT INTO ignores unqualified rows, and the load succeeds as long as at least one row is loaded correctly.
 
-- Note: INSERT INTO cannot control the error rate; this parameter can only make it either strictly check data quality or ignore erroneous data entirely. Common causes of unqualified data include: source column length exceeding the destination column length, column type mismatches, partition mismatches, and column order mismatches.
+- Note: In versions 2.1.4 and earlier, INSERT INTO could not control the error rate; this parameter could only make it either strictly check data quality or ignore erroneous data entirely. Common causes of unqualified data include: source column length exceeding the destination column length, column type mismatches, partition mismatches, and column order mismatches.
 
 **insert_max_filter_ratio**
 
@@ -211,10 +171,6 @@ VALUES (val1, val2, ...), (val3, val4, ...), ...;
 
 - Description: Available since version 2.1.5. Takes effect only when `enable_insert_strict` is false. Sets the error tolerance ratio when using `INSERT INTO FROM S3/HDFS/LOCAL()`. The default 1.0 tolerates all errors. It can be a decimal between 0 and 1; the INSERT task fails when the proportion of error rows exceeds this ratio.
 
-**enable_nereids_dml_with_pipeline**
-
-  When set to `true`, `insert into` statements will attempt to execute via the Pipeline engine. See the [Load](../load-manual) documentation for details.
-
 ### Load return values
 
 INSERT INTO is a SQL statement, and its return result falls into the following cases depending on the query result:
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/import/import-way/insert-into-values-manual.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/import/import-way/insert-into-values-manual.md
new file mode 100644
index 00000000000..045cd166241
--- /dev/null
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/import/import-way/insert-into-values-manual.md
@@ -0,0 +1,309 @@
+---
+{
+    "title": "Insert Into Values",
+    "language": "zh-CN"
+}
+---
+
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+INSERT INTO VALUES supports writing a specified set of values into a Doris table. INSERT INTO VALUES is a synchronous loading method that returns the load result upon completion. Whether the load succeeded can be determined from the returned result. INSERT INTO VALUES guarantees the atomicity of the load task: either all of the data is loaded successfully, or none of it is.
+
+- INSERT INTO tbl (col1, col2, ...) VALUES (1, 2, ...), (1,3, ...)
+
+## Use cases
+
+1. Users want to load just a few rows of test data to verify the functionality of the Doris system. The INSERT INTO VALUES syntax, identical to MySQL's, is suitable here.
+2. The performance of concurrent INSERT INTO VALUES is limited by a bottleneck in the commit phase. When loading a larger amount of data, [group commit](../import-way/group-commit-manual.md) can be enabled for higher performance.
+
+## How it works
+
+When using INSERT INTO VALUES, the load job is submitted to the FE node via the MySQL protocol. The FE generates an execution plan in which the leading operators are query-related and the last operator is OlapTableSink, which writes the result into the target table. The plan is sent to the BE nodes for execution; Doris selects one node as the Coordinator, which receives the data and distributes it to the other nodes.
+
+## Quick start
+
+INSERT INTO VALUES is submitted and transferred via the MySQL protocol. The example below uses the MySQL command line to demonstrate submitting a load job with INSERT INTO VALUES.
+
+For the detailed syntax, see [INSERT INTO](../../../sql-manual/sql-statements/data-modification/DML/INSERT.md).
+
+### Prerequisite checks
+
+INSERT INTO VALUES requires the INSERT privilege on the target table. If the user lacks the INSERT privilege, it can be granted with the [GRANT](../../../sql-manual/sql-statements/account-management/GRANT-TO.md) command.
+
+### Creating a load job
+
+**INSERT INTO VALUES**
+
+1. Create a source table
+
+```sql
+CREATE TABLE testdb.test_table(
+    user_id            BIGINT       NOT NULL COMMENT "user id",
+    name               VARCHAR(20)           COMMENT "name",
+    age                INT                   COMMENT "age"
+)
+DUPLICATE KEY(user_id)
+DISTRIBUTED BY HASH(user_id) BUCKETS 10;
+```
+
+2. Load data into the source table with INSERT INTO VALUES (not recommended in production)
+
+```sql
+INSERT INTO testdb.test_table (user_id, name, age)
+VALUES (1, "Emily", 25),
+       (2, "Benjamin", 35),
+       (3, "Olivia", 28),
+       (4, "Alexander", 60),
+       (5, "Ava", 17);
+```
+
+INSERT INTO VALUES is a synchronous loading method; the load result is returned to the user directly.
+
+```JSON
+Query OK, 5 rows affected (0.308 sec)
+{'label':'label_26eebc33411f441c_b2b286730d495e2c', 'status':'VISIBLE', 
'txnId':'61071'}
+```
+
+3. Check the loaded data
+
+```sql
+MySQL> SELECT COUNT(*) FROM testdb.test_table;
++----------+
+| count(*) |
++----------+
+|        5 |
++----------+
+1 row in set (0.179 sec)
+```
+
+### Checking the load job
+
+Completed INSERT INTO VALUES tasks can be viewed with the `SHOW LOAD` statement.
+
+```sql
+mysql> SHOW LOAD FROM testdb\G
+*************************** 1. row ***************************
+         JobId: 77172
+         Label: label_26eebc33411f441c_b2b286730d495e2c
+         State: FINISHED
+      Progress: Unknown id: 77172
+          Type: INSERT
+       EtlInfo: NULL
+      TaskInfo: cluster:N/A; timeout(s):14400; max_filter_ratio:0.0
+      ErrorMsg: NULL
+    CreateTime: 2024-11-20 16:44:08
+  EtlStartTime: 2024-11-20 16:44:08
+ EtlFinishTime: 2024-11-20 16:44:08
+ LoadStartTime: 2024-11-20 16:44:08
+LoadFinishTime: 2024-11-20 16:44:08
+           URL: 
+    JobDetails: {"Unfinished 
backends":{},"ScannedRows":0,"TaskNumber":0,"LoadBytes":0,"All 
backends":{},"FileNumber":0,"FileSize":0}
+ TransactionId: 61071
+  ErrorTablets: {}
+          User: root
+       Comment: 
+1 row in set (0.00 sec)
+```
+
+### Canceling a load job
+
+Users can cancel an INSERT INTO VALUES job that is currently executing with Ctrl-C.
+
+## Reference manual
+
+### Load command
+
+INSERT INTO VALUES is generally intended for demos only and is not recommended in production.
+
+```sql
+INSERT INTO target_table (col1, col2, ...)
+VALUES (val1, val2, ...), (val3, val4, ...), ...;
+```
+
+### Load configuration parameters
+
+**01 FE configuration**
+
+**insert_load_default_timeout_second**
+
+- Default value: 14400 (4 hours)
+
+- Description: The timeout of the load task, in seconds. A load task that does not finish within this timeout is canceled by the system and becomes CANCELLED.
+
+**02 Session variables**
+
+**insert_timeout**
+
+- Default value: 14400 (4 hours)
+
+- Description: The timeout of INSERT INTO VALUES as a SQL statement, in seconds.
+
+**enable_insert_strict**
+
+- Default value: true
+
+- Description: If set to true, the load fails when INSERT INTO VALUES encounters unqualified data. If set to false, INSERT INTO VALUES ignores unqualified rows, and the load succeeds as long as at least one row is loaded correctly.
+
+- Note: In version 2.1.4 and earlier, INSERT INTO VALUES could not control the error rate; this parameter could only make it either strictly check data quality or ignore erroneous data entirely. Common causes of unqualified data include: source column length exceeding the destination column length, column type mismatches, partition mismatches, and column order mismatches.
+
+**insert_max_filter_ratio**
+
+- Default value: 1.0
+
+- Description: Available since version 2.1.5. Takes effect only when `enable_insert_strict` is false. Controls the error tolerance ratio of `INSERT INTO VALUES`. The default 1.0 tolerates all errors. It can be a decimal between 0 and 1; the INSERT task fails when the proportion of error rows exceeds this ratio.
+
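+The two variables above work together. A minimal sketch, reusing the `testdb.test_table` from the quick start (the ratio chosen here is illustrative, not a recommendation):
+
+```sql
+-- Relax strict mode so unqualified rows are filtered
+-- instead of failing the whole INSERT (session-level setting).
+SET enable_insert_strict = false;
+-- Since 2.1.5: fail the INSERT only if more than half of the rows
+-- are filtered out.
+SET insert_max_filter_ratio = 0.5;
+-- The NULL user_id violates the NOT NULL constraint and is filtered;
+-- the other row is still loaded because 1/2 does not exceed 0.5.
+INSERT INTO testdb.test_table (user_id, name, age)
+VALUES (6, "Mia", 22),
+       (NULL, "Noah", 30);
+```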
+### Load return values
+
+INSERT INTO VALUES is a SQL statement, and its return result contains a JSON string.
+
+The fields in it are described in the following table:
+
+| Field  | Description                                                  |
+| ------ | ------------------------------------------------------------ |
+| Label  | The label of the load job, specified via INSERT INTO tbl WITH LABEL label ... |
+| Status | Whether the loaded data is visible. If visible, `visible` is shown; if not, `committed` is shown.<p>- `visible`: the load succeeded and the data is visible</p> <p>- `committed`: the load has finished, but the data may become visible with a delay; there is no need to retry</p> <p>- Label Already Exists: the label is duplicated and a different label is needed</p> <p>- Fail: the load failed</p> |
+| Err    | The error message of the load                                |
+| TxnId  | The ID of the load transaction                               |
+
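+The Label row above can be set explicitly, which makes the job easy to locate afterwards. A small sketch (the label name is made up for illustration):
+
+```sql
+-- Assign a custom label to the load job.
+INSERT INTO testdb.test_table WITH LABEL my_insert_values_demo (user_id, name, age)
+VALUES (7, "Lucas", 41);
+
+-- Look the job up by that label later.
+SHOW LOAD WHERE label = "my_insert_values_demo"\G
+```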
+**Successful INSERT**
+
+```sql
+mysql> INSERT INTO test_table (user_id, name, age) VALUES (1, "Emily", 25), 
(2, "Benjamin", 35), (3, "Olivia", 28), (NULL, "Alexander", 60), (5, "Ava", 17);
+Query OK, 5 rows affected (0.05 sec)
+{'label':'label_26eebc33411f441c_b2b286730d495e2c', 'status':'VISIBLE', 
'txnId':'61071'}
+```
+
+`Query OK` indicates successful execution. `5 rows affected` indicates that 5 rows in total were loaded.
+
+**Successful INSERT with warnings**
+
+```sql
+mysql> INSERT INTO test_table (user_id, name, age) VALUES (1, "Emily", 25), 
(2, "Benjamin", 35), (3, "Olivia", 28), (NULL, "Alexander", 60), (5, "Ava", 17);
+Query OK, 4 rows affected, 1 warning (0.04 sec)
+{'label':'label_a8d99ae931194d2b_93357aac59981a18', 'status':'VISIBLE', 
'txnId':'61068'}
+```
+
+`Query OK` indicates successful execution. `4 rows affected` indicates that 4 rows in total were loaded. `1 warning` indicates that 1 row was filtered out.
+
+To inspect the filtered rows, use the [SHOW LOAD](../../../sql-manual/sql-statements/data-modification/load-and-export/SHOW-LOAD.md) statement. The URL in its result can be used to query the erroneous data; see the Viewing Error Rows section below.
+
+```sql
+mysql> SHOW LOAD WHERE label="label_a8d99ae931194d2b_93357aac59981a18"\G
+*************************** 1. row ***************************
+         JobId: 77158
+         Label: label_a8d99ae931194d2b_93357aac59981a18
+         State: FINISHED
+      Progress: Unknown id: 77158
+          Type: INSERT
+       EtlInfo: NULL
+      TaskInfo: cluster:N/A; timeout(s):14400; max_filter_ratio:0.0
+      ErrorMsg: NULL
+    CreateTime: 2024-11-20 16:35:40
+  EtlStartTime: 2024-11-20 16:35:40
+ EtlFinishTime: 2024-11-20 16:35:40
+ LoadStartTime: 2024-11-20 16:35:40
+LoadFinishTime: 2024-11-20 16:35:40
+           URL: 
http://10.16.10.7:8743/api/_load_error_log?file=__shard_18/error_log_insert_stmt_a8d99ae931194d2b-93357aac59981a19_a8d99ae931194d2b_93357aac59981a19
+    JobDetails: {"Unfinished 
backends":{},"ScannedRows":0,"TaskNumber":0,"LoadBytes":0,"All 
backends":{},"FileNumber":0,"FileSize":0}
+ TransactionId: 61068
+  ErrorTablets: {}
+          User: root
+       Comment: 
+1 row in set (0.00 sec)
+```
+
+**Successful INSERT but status is committed**
+
+```sql
+mysql> INSERT INTO test_table (user_id, name, age) VALUES (1, "Emily", 25), 
(2, "Benjamin", 35), (3, "Olivia", 28), (4, "Alexander", 60), (5, "Ava", 17);
+Query OK, 5 rows affected (0.04 sec)
+{'label':'label_78bf5396d9594d4d_a8d9a914af40f73d', 'status':'COMMITTED', 
'txnId':'61074'}
+```
+
+Invisible data is a temporary state; this batch of data will eventually become visible.
+
+The visibility status of the batch can be checked with the [SHOW TRANSACTION](../../../sql-manual/sql-statements/transaction/SHOW-TRANSACTION.md) statement.
+When the `TransactionStatus` column in the result becomes `VISIBLE`, the data is visible.
+
+```sql
+mysql> SHOW TRANSACTION WHERE id=61074\G
+*************************** 1. row ***************************
+     TransactionId: 61074
+             Label: label_78bf5396d9594d4d_a8d9a914af40f73d
+       Coordinator: FE: 10.16.10.7
+ TransactionStatus: VISIBLE
+ LoadJobSourceType: INSERT_STREAMING
+       PrepareTime: 2024-11-20 16:51:54
+     PreCommitTime: NULL
+        CommitTime: 2024-11-20 16:51:54
+       PublishTime: 2024-11-20 16:51:54
+        FinishTime: 2024-11-20 16:51:54
+            Reason: 
+ErrorReplicasCount: 0
+        ListenerId: -1
+         TimeoutMs: 14400000
+            ErrMsg: 
+1 row in set (0.00 sec)
+```
+
+**Failed INSERT**
+
+A failed execution means that no data was loaded at all, and a result like the following is returned:
+
+```sql
+mysql> INSERT INTO test_table (user_id, name, age) VALUES (1, "Emily", 25), 
(2, "Benjamin", 35), (3, "Olivia", 28), (NULL, "Alexander", 60), (5, "Ava", 17);
+ERROR 1105 (HY000): errCode = 2, detailMessage = Insert has too many filtered 
data 1/5 insert_max_filter_ratio is 0.100000. url: 
http://10.16.10.7:8747/api/_load_error_log?file=__shard_22/error_log_insert_stmt_5fafe6663e1a45e0-a666c1722ffc8c55_5fafe6663e1a45e0_a666c1722ffc8c55
+```
+
+`ERROR 1105 (HY000): errCode = 2, detailMessage = Insert has too many filtered data 1/5 insert_max_filter_ratio is 0.100000.` shows the reason for the failure. The url at the end can be used to query the erroneous data; see the Viewing Error Rows section below.
+
+## Best practices
+
+### Data volume
+
+INSERT INTO VALUES is usually used for testing and demonstration; it is not recommended for loading large amounts of data.
+
+### Viewing error rows
+
+When the INSERT INTO result contains a url field, the error rows can be viewed with the following statement:
+
+```sql
+SHOW LOAD WARNINGS ON "url";
+```
+
+Example:
+
+```sql
+mysql> SHOW LOAD WARNINGS ON 
"http://10.16.10.7:8743/api/_load_error_log?file=__shard_18/error_log_insert_stmt_a8d99ae931194d2b-93357aac59981a19_a8d99ae931194d2b_93357aac59981a19"\G
+*************************** 1. row ***************************
+         JobId: -1
+         Label: NULL
+ErrorMsgDetail: Reason: column_name[user_id], null value for not null column, 
type=BIGINT. src line []; 
+1 row in set (0.00 sec)
+```
+
+Common causes of errors include: source column length exceeding the destination column length, column type mismatches, partition mismatches, and column order mismatches.
+
+The session variable `enable_insert_strict` controls whether INSERT INTO ignores error rows.
+
+## More help
+
+For more details on the Insert Into syntax, see the [INSERT INTO](../../../sql-manual/sql-statements/data-modification/DML/INSERT.md) manual, or run `HELP INSERT` in the MySQL client command line.
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/import-way/insert-into-manual.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/import-way/insert-into-manual.md
index 94e537daea5..0df15976a70 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/import-way/insert-into-manual.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/import-way/insert-into-manual.md
@@ -27,21 +27,15 @@ under the License.
 
 INSERT INTO supports loading the results of a Doris query into another table. INSERT INTO is a synchronous loading method that returns the load result upon completion. Whether the load succeeded can be determined from the returned result. INSERT INTO guarantees the atomicity of the load task: either all of the data is loaded successfully, or none of it is.
 
-The main Insert Into commands come in the following two forms:
-
 - INSERT INTO tbl SELECT ...
 
-- INSERT INTO tbl (col1, col2, ...) VALUES (1, 2, ...), (1,3, ...)
-
 ## Use cases
 
-1. Users want to load just a few rows of test data to verify the functionality of the Doris system. The INSERT INTO VALUES syntax, identical to MySQL's, is suitable here. Using INSERT INTO VALUES in production is not recommended.
-
-2. Users want to ETL data that is already in a Doris table into a new Doris table; the INSERT INTO SELECT syntax is suitable here.
+1. Users want to ETL data that is already in a Doris table into a new Doris table; the INSERT INTO SELECT syntax is suitable here.
 
-3. Working with the Multi-Catalog external table mechanism, e.g. mapping tables in MySQL or Hive via Multi-Catalog and then loading the data from the external tables into Doris tables with INSERT INTO SELECT.
+2. Working with the Multi-Catalog external table mechanism, e.g. mapping tables in MySQL or Hive via Multi-Catalog and then loading the data from the external tables into Doris tables with INSERT INTO SELECT.
 
-4. With the Table Value Function (TVF) feature, files on object storage or HDFS can be queried directly as tables, with automatic column type inference, and the data can then be loaded into Doris tables with INSERT INTO SELECT.
+3. With the Table Value Function (TVF) feature, files on object storage or HDFS can be queried directly as tables, with automatic column type inference, and the data can then be loaded into Doris tables with INSERT INTO SELECT.
 
 ## How it works
 
@@ -59,8 +53,6 @@ INSERT INTO requires the INSERT privilege on the target table. If there is no INSERT privilege,
 
 ### Creating a load job
 
-**INSERT INTO VALUES**
-
 1. Create a source table
 
 ```sql
@@ -73,7 +65,7 @@ DUPLICATE KEY(user_id)
 DISTRIBUTED BY HASH(user_id) BUCKETS 10;
 ```
 
-2. Load data into the source table with INSERT INTO VALUES (not recommended in production)
+2. Load data into the source table by any method (INSERT INTO VALUES is used here as an example)
 
 ```sql
 INSERT INTO testdb.test_table (user_id, name, age)
@@ -84,34 +76,13 @@ VALUES (1, "Emily", 25),
        (5, "Ava", 17);
 ```
 
-INSERT INTO is a synchronous loading method; the load result is returned to the user directly. [group commit](../import-way/group-commit-manual.md) can be enabled for higher performance.
-
-```JSON
-Query OK, 5 rows affected (0.308 sec)
-{'label':'label_3e52da787aab4222_9126d2fce8f6d1e5', 'status':'VISIBLE', 
'txnId':'9081'}
-```
-
-3. Check the loaded data
-
-```sql
-MySQL> SELECT COUNT(*) FROM testdb.test_table;
-+----------+
-| count(*) |
-+----------+
-|        5 |
-+----------+
-1 row in set (0.179 sec)
-```
-
-**INSERT INTO SELECT**
-
-1. Building on the steps above, create a new table as the target table (with the same schema as the source table)
+3. Building on the steps above, create a new table as the target table (with the same schema as the source table)
 
 ```sql
 CREATE TABLE testdb.test_table2 LIKE testdb.test_table;
 ```
 
-2. Load into the new table with INSERT INTO SELECT
+4. Load into the new table with INSERT INTO SELECT
 
 ```sql
 INSERT INTO testdb.test_table2
@@ -120,21 +91,23 @@ Query OK, 3 rows affected (0.544 sec)
 {'label':'label_9c2bae970023407d_b2c5b78b368e78a7', 'status':'VISIBLE', 
'txnId':'9084'}
 ```
 
-3. Check the loaded data
+5. Check the loaded data
 
 ```sql
-MySQL> SELECT COUNT(*) FROM testdb.test_table2;
-+----------+
-| count(*) |
-+----------+
-|        3 |
-+----------+
-1 row in set (0.071 sec)
+MySQL> SELECT * FROM testdb.test_table2 ORDER BY age;
++---------+--------+------+
+| user_id | name   | age  |
++---------+--------+------+
+|       5 | Ava    |   17 |
+|       1 | Emily  |   25 |
+|       3 | Olivia |   28 |
++---------+--------+------+
+3 rows in set (0.02 sec)
 ```
 
-4. INSERT can be executed asynchronously using [JOB](../../scheduler/job-scheduler.md).
+6. INSERT can be executed asynchronously using [JOB](../../scheduler/job-scheduler.md).
 
-5. The data source can be a [tvf](../../../lakehouse/file.md) or a table in a [catalog](../../../lakehouse/database/jdbc).
+7. The data source can be a [tvf](../../../lakehouse/file.md) or a table in a [catalog](../../../lakehouse/database).
 
 ### Checking load jobs
 
@@ -158,10 +131,6 @@ MySQL> SHOW LOAD FROM testdb;
 
 ### Load command
 
-The INSERT INTO load syntax is as follows:
-
-1. INSERT INTO SELECT
-
 INSERT INTO SELECT is generally used to save query results into the target table.
 
 ```sql
@@ -170,15 +139,6 @@ INSERT INTO target_table SELECT ... FROM source_table;
 
 The SELECT statement here is the same as an ordinary SELECT query and may include operations such as WHERE and JOIN.
 
-2. INSERT INTO VALUES
-
-INSERT INTO VALUES is generally intended for demos only and is not recommended in production.
-
-```sql
-INSERT INTO target_table (col1, col2, ...)
-VALUES (val1, val2, ...), (val3, val4, ...), ...;
-```
-
 ### Load configuration parameters
 
 **01 FE configuration**
@@ -203,7 +163,7 @@ VALUES (val1, val2, ...), (val3, val4, ...), ...;
 
 - Description: If set to true, the load fails when INSERT INTO encounters unqualified data. If set to false, INSERT INTO ignores unqualified rows, and the load succeeds as long as at least one row is loaded correctly.
 
-- Note: INSERT INTO cannot control the error rate; this parameter can only make it either strictly check data quality or ignore erroneous data entirely. Common causes of unqualified data include: source column length exceeding the destination column length, column type mismatches, partition mismatches, and column order mismatches.
+- Note: In versions 2.1.4 and earlier, INSERT INTO could not control the error rate; this parameter could only make it either strictly check data quality or ignore erroneous data entirely. Common causes of unqualified data include: source column length exceeding the destination column length, column type mismatches, partition mismatches, and column order mismatches.
 
 **insert_max_filter_ratio**
 
@@ -211,10 +171,6 @@ VALUES (val1, val2, ...), (val3, val4, ...), ...;
 
 - Description: Available since version 2.1.5. Takes effect only when `enable_insert_strict` is false. Sets the error tolerance ratio when using `INSERT INTO FROM S3/HDFS/LOCAL()`. The default 1.0 tolerates all errors. It can be a decimal between 0 and 1; the INSERT task fails when the proportion of error rows exceeds this ratio.
 
-**enable_nereids_dml_with_pipeline**
-
-  When set to `true`, `insert into` statements will attempt to execute via the Pipeline engine. See the [Load](../load-manual) documentation for details.
-
 ### Load return values
 
 INSERT INTO is a SQL statement, and its return result falls into the following cases depending on the query result:
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/import-way/insert-into-values-manual.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/import-way/insert-into-values-manual.md
new file mode 100644
index 00000000000..4e98e360492
--- /dev/null
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/import-way/insert-into-values-manual.md
@@ -0,0 +1,309 @@
+---
+{
+    "title": "Insert Into Values",
+    "language": "zh-CN"
+}
+---
+
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+INSERT INTO VALUES supports writing a specified set of values into a Doris table. INSERT INTO VALUES is a synchronous loading method that returns the load result upon completion. Whether the load succeeded can be determined from the returned result. INSERT INTO VALUES guarantees the atomicity of the load task: either all of the data is loaded successfully, or none of it is.
+
+- INSERT INTO tbl (col1, col2, ...) VALUES (1, 2, ...), (1,3, ...)
+
+## Use cases
+
+1. Users want to load just a few rows of test data to verify the functionality of the Doris system. The INSERT INTO VALUES syntax, identical to MySQL's, is suitable here.
+2. The performance of concurrent INSERT INTO VALUES is limited by a bottleneck in the commit phase. When loading a larger amount of data, [group commit](../import-way/group-commit-manual.md) can be enabled for higher performance.
+
+## How it works
+
+When using INSERT INTO VALUES, the load job is submitted to the FE node via the MySQL protocol. The FE generates an execution plan in which the leading operators are query-related and the last operator is OlapTableSink, which writes the result into the target table. The plan is sent to the BE nodes for execution; Doris selects one node as the Coordinator, which receives the data and distributes it to the other nodes.
+
+## Quick start
+
+INSERT INTO VALUES is submitted and transferred via the MySQL protocol. The example below uses the MySQL command line to demonstrate submitting a load job with INSERT INTO VALUES.
+
+For the detailed syntax, see [INSERT INTO](../../../sql-manual/sql-statements/data-modification/DML/INSERT.md).
+
+### Prerequisite checks
+
+INSERT INTO VALUES requires the INSERT privilege on the target table. If the user lacks the INSERT privilege, it can be granted with the [GRANT](../../../sql-manual/sql-statements/account-management/GRANT-TO.md) command.
+
+### Creating a load job
+
+**INSERT INTO VALUES**
+
+1. Create a source table
+
+```sql
+CREATE TABLE testdb.test_table(
+    user_id            BIGINT       NOT NULL COMMENT "user id",
+    name               VARCHAR(20)           COMMENT "name",
+    age                INT                   COMMENT "age"
+)
+DUPLICATE KEY(user_id)
+DISTRIBUTED BY HASH(user_id) BUCKETS 10;
+```
+
+2. Load data into the source table with INSERT INTO VALUES (not recommended in production)
+
+```sql
+INSERT INTO testdb.test_table (user_id, name, age)
+VALUES (1, "Emily", 25),
+       (2, "Benjamin", 35),
+       (3, "Olivia", 28),
+       (4, "Alexander", 60),
+       (5, "Ava", 17);
+```
+
+INSERT INTO VALUES is a synchronous loading method; the load result is returned to the user directly.
+
+```JSON
+Query OK, 5 rows affected (0.308 sec)
+{'label':'label_26eebc33411f441c_b2b286730d495e2c', 'status':'VISIBLE', 
'txnId':'61071'}
+```
+
+3. Check the loaded data
+
+```sql
+MySQL> SELECT COUNT(*) FROM testdb.test_table;
++----------+
+| count(*) |
++----------+
+|        5 |
++----------+
+1 row in set (0.179 sec)
+```
+
+### Checking the load job
+
+Completed INSERT INTO VALUES tasks can be viewed with the `SHOW LOAD` statement.
+
+```sql
+mysql> SHOW LOAD FROM testdb\G
+*************************** 1. row ***************************
+         JobId: 77172
+         Label: label_26eebc33411f441c_b2b286730d495e2c
+         State: FINISHED
+      Progress: Unknown id: 77172
+          Type: INSERT
+       EtlInfo: NULL
+      TaskInfo: cluster:N/A; timeout(s):14400; max_filter_ratio:0.0
+      ErrorMsg: NULL
+    CreateTime: 2024-11-20 16:44:08
+  EtlStartTime: 2024-11-20 16:44:08
+ EtlFinishTime: 2024-11-20 16:44:08
+ LoadStartTime: 2024-11-20 16:44:08
+LoadFinishTime: 2024-11-20 16:44:08
+           URL: 
+    JobDetails: {"Unfinished 
backends":{},"ScannedRows":0,"TaskNumber":0,"LoadBytes":0,"All 
backends":{},"FileNumber":0,"FileSize":0}
+ TransactionId: 61071
+  ErrorTablets: {}
+          User: root
+       Comment: 
+1 row in set (0.00 sec)
+```
+
+### Canceling a load job
+
+Users can cancel an INSERT INTO VALUES job that is currently executing with Ctrl-C.
+
+## Reference manual
+
+### Load command
+
+INSERT INTO VALUES is generally intended for demos only and is not recommended in production.
+
+```sql
+INSERT INTO target_table (col1, col2, ...)
+VALUES (val1, val2, ...), (val3, val4, ...), ...;
+```
+
+### Load configuration parameters
+
+**01 FE configuration**
+
+**insert_load_default_timeout_second**
+
+- Default value: 14400 (4 hours)
+
+- Description: The timeout of the load task, in seconds. A load task that does not finish within this timeout is canceled by the system and becomes CANCELLED.
+
+**02 Session variables**
+
+**insert_timeout**
+
+- Default value: 14400 (4 hours)
+
+- Description: The timeout of INSERT INTO VALUES as a SQL statement, in seconds.
+
+**enable_insert_strict**
+
+- Default value: true
+
+- Description: If set to true, the load fails when INSERT INTO VALUES encounters unqualified data. If set to false, INSERT INTO VALUES ignores unqualified rows, and the load succeeds as long as at least one row is loaded correctly.
+
+- Note: In version 2.1.4 and earlier, INSERT INTO VALUES could not control the error rate; this parameter could only make it either strictly check data quality or ignore erroneous data entirely. Common causes of unqualified data include: source column length exceeding the destination column length, column type mismatches, partition mismatches, and column order mismatches.
+
+**insert_max_filter_ratio**
+
+- Default value: 1.0
+
+- Description: Available since version 2.1.5. Takes effect only when `enable_insert_strict` is false. Controls the error tolerance ratio of `INSERT INTO VALUES`. The default 1.0 tolerates all errors. It can be a decimal between 0 and 1; the INSERT task fails when the proportion of error rows exceeds this ratio.
+
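+The two variables above work together. A minimal sketch, reusing the `testdb.test_table` from the quick start (the ratio chosen here is illustrative, not a recommendation):
+
+```sql
+-- Relax strict mode so unqualified rows are filtered
+-- instead of failing the whole INSERT (session-level setting).
+SET enable_insert_strict = false;
+-- Since 2.1.5: fail the INSERT only if more than half of the rows
+-- are filtered out.
+SET insert_max_filter_ratio = 0.5;
+-- The NULL user_id violates the NOT NULL constraint and is filtered;
+-- the other row is still loaded because 1/2 does not exceed 0.5.
+INSERT INTO testdb.test_table (user_id, name, age)
+VALUES (6, "Mia", 22),
+       (NULL, "Noah", 30);
+```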
+### 导入返回值
+
+INSERT INTO VALUES 是一个 SQL 语句,其返回结果会包含一个 JSON 字符串。
+
+其中的参数如下表说明:
+
+| 参数名称 | 说明                                                         |
+| -------- | ------------------------------------------------------------ |
+| Label    | 导入作业的 label,通过 INSERT INTO tbl WITH LABEL label ... 指定 |
+| Status   | 表示导入数据是否可见。如果可见,显示 `visible`,如果不可见,显示 `committed`<p>- 
`visible`:表示导入成功,数据可见</p> <p>- `committed`:该状态也表示导入已经完成,只是数据可能会延迟可见,无需重试</p> 
<p>- Label Already Exists:Label 重复,需要更换 label</p> <p>- Fail:导入失败</p> |
+| Err      | 导入错误信息                                                 |
+| TxnId    | 导入事务的 ID                                                |
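+
+如上表所述,label 可以通过 WITH LABEL 子句指定(以下 label 名仅为示意):
+
+```sql
+INSERT INTO test_table WITH LABEL my_insert_label_001 (user_id, name, age)
+VALUES (1, "Emily", 25);
+```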
+
+**INSERT 执行成功**
+
+```sql
+mysql> INSERT INTO test_table (user_id, name, age) VALUES (1, "Emily", 25), 
(2, "Benjamin", 35), (3, "Olivia", 28), (NULL, "Alexander", 60), (5, "Ava", 17);
+Query OK, 5 rows affected (0.05 sec)
+{'label':'label_26eebc33411f441c_b2b286730d495e2c', 'status':'VISIBLE', 
'txnId':'61071'}
+```
+
+其中 `Query OK` 表示执行成功。`5 rows affected` 表示总共有 5 行数据被导入。
+
+**INSERT 执行成功但是有 warning**
+
+```sql
+mysql> INSERT INTO test_table (user_id, name, age) VALUES (1, "Emily", 25), 
(2, "Benjamin", 35), (3, "Olivia", 28), (NULL, "Alexander", 60), (5, "Ava", 17);
+Query OK, 4 rows affected, 1 warning (0.04 sec)
+{'label':'label_a8d99ae931194d2b_93357aac59981a18', 'status':'VISIBLE', 
'txnId':'61068'}
+```
+
+其中 `Query OK` 表示执行成功。`4 rows affected` 表示总共有 4 行数据被导入。`1 warning` 表示被过滤了 1 行。
+
+当需要查看被过滤的行时,用户可以通过 [SHOW LOAD](../../../sql-manual/sql-statements/data-modification/load-and-export/SHOW-LOAD.md) 语句查看。返回结果中的 URL 可以用于查询错误的数据,具体见后面的 查看错误行 小节。
+
+```sql
+mysql> SHOW LOAD WHERE label="label_a8d99ae931194d2b_93357aac59981a18"\G
+*************************** 1. row ***************************
+         JobId: 77158
+         Label: label_a8d99ae931194d2b_93357aac59981a18
+         State: FINISHED
+      Progress: Unknown id: 77158
+          Type: INSERT
+       EtlInfo: NULL
+      TaskInfo: cluster:N/A; timeout(s):14400; max_filter_ratio:0.0
+      ErrorMsg: NULL
+    CreateTime: 2024-11-20 16:35:40
+  EtlStartTime: 2024-11-20 16:35:40
+ EtlFinishTime: 2024-11-20 16:35:40
+ LoadStartTime: 2024-11-20 16:35:40
+LoadFinishTime: 2024-11-20 16:35:40
+           URL: 
http://10.16.10.7:8743/api/_load_error_log?file=__shard_18/error_log_insert_stmt_a8d99ae931194d2b-93357aac59981a19_a8d99ae931194d2b_93357aac59981a19
+    JobDetails: {"Unfinished 
backends":{},"ScannedRows":0,"TaskNumber":0,"LoadBytes":0,"All 
backends":{},"FileNumber":0,"FileSize":0}
+ TransactionId: 61068
+  ErrorTablets: {}
+          User: root
+       Comment: 
+1 row in set (0.00 sec)
+```
+
+**INSERT 执行成功但是 status 是 committed**
+
+```sql
+mysql> INSERT INTO test_table (user_id, name, age) VALUES (1, "Emily", 25), 
(2, "Benjamin", 35), (3, "Olivia", 28), (4, "Alexander", 60), (5, "Ava", 17);
+Query OK, 5 rows affected (0.04 sec)
+{'label':'label_78bf5396d9594d4d_a8d9a914af40f73d', 'status':'COMMITTED', 
'txnId':'61074'}
+```
+
+数据不可见是一个临时状态,这批数据最终是一定可见的。
+
+可以通过 [SHOW 
TRANSACTION](../../../sql-manual/sql-statements/Show-Statements/SHOW-TRANSACTION)
 语句查看这批数据的可见状态。
+当返回结果中的 `TransactionStatus` 列变成 `VISIBLE` 时代表数据可见。
+
+```sql
+mysql> SHOW TRANSACTION WHERE id=61074\G
+*************************** 1. row ***************************
+     TransactionId: 61074
+             Label: label_78bf5396d9594d4d_a8d9a914af40f73d
+       Coordinator: FE: 10.16.10.7
+ TransactionStatus: VISIBLE
+ LoadJobSourceType: INSERT_STREAMING
+       PrepareTime: 2024-11-20 16:51:54
+     PreCommitTime: NULL
+        CommitTime: 2024-11-20 16:51:54
+       PublishTime: 2024-11-20 16:51:54
+        FinishTime: 2024-11-20 16:51:54
+            Reason: 
+ErrorReplicasCount: 0
+        ListenerId: -1
+         TimeoutMs: 14400000
+            ErrMsg: 
+1 row in set (0.00 sec)
+```
+
+**INSERT 执行失败**
+
+执行失败表示没有任何数据被成功导入,并返回如下:
+
+```sql
+mysql> INSERT INTO test_table (user_id, name, age) VALUES (1, "Emily", 25), 
(2, "Benjamin", 35), (3, "Olivia", 28), (NULL, "Alexander", 60), (5, "Ava", 17);
+ERROR 1105 (HY000): errCode = 2, detailMessage = Insert has too many filtered 
data 1/5 insert_max_filter_ratio is 0.100000. url: 
http://10.16.10.7:8747/api/_load_error_log?file=__shard_22/error_log_insert_stmt_5fafe6663e1a45e0-a666c1722ffc8c55_5fafe6663e1a45e0_a666c1722ffc8c55
+```
+
+其中 `ERROR 1105 (HY000): errCode = 2, detailMessage = Insert has too many filtered data 1/5 insert_max_filter_ratio is 0.100000.` 显示了失败原因。后面的 URL 可以用于查询错误的数据,具体见后面的 查看错误行 小节。
+
+## 导入最佳实践
+
+### 数据量
+
+INSERT INTO VALUES 通常用于测试和演示,不建议用于导入大量数据的场景。
+
+### 查看错误行
+
+当 INSERT INTO 返回结果中提供了 url 字段时,可以通过以下命令查看错误行:
+
+```sql
+SHOW LOAD WARNINGS ON "url";
+```
+
+示例:
+
+```sql
+mysql> SHOW LOAD WARNINGS ON 
"http://10.16.10.7:8743/api/_load_error_log?file=__shard_18/error_log_insert_stmt_a8d99ae931194d2b-93357aac59981a19_a8d99ae931194d2b_93357aac59981a19"\G
+*************************** 1. row ***************************
+         JobId: -1
+         Label: NULL
+ErrorMsgDetail: Reason: column_name[user_id], null value for not null column, 
type=BIGINT. src line []; 
+1 row in set (0.00 sec)
+```
+
+常见的错误的原因有:源数据列长度超过目的数据列长度、列类型不匹配、分区不匹配、列顺序不匹配等。
+
+可以通过环境变量 `enable_insert_strict` 来控制 INSERT INTO 是否忽略错误行。
+
+## 更多帮助
+
+关于 Insert Into 使用的更多详细语法,请参阅 [INSERT 
INTO](../../../sql-manual/sql-statements/data-modification/DML/INSERT.md) 
命令手册,也可以在 MySQL 客户端命令行下输入 `HELP INSERT` 获取更多帮助信息。
diff --git a/sidebars.json b/sidebars.json
index 06bb42b6c21..b7c4a327333 100644
--- a/sidebars.json
+++ b/sidebars.json
@@ -167,6 +167,7 @@
                                         
"data-operate/import/import-way/broker-load-manual",
                                         
"data-operate/import/import-way/routine-load-manual",
                                         
"data-operate/import/import-way/insert-into-manual",
+                                        
"data-operate/import/import-way/insert-into-values-manual",
                                         
"data-operate/import/import-way/mysql-load-manual",
                                         
"data-operate/import/import-way/group-commit-manual"
                                     ]
diff --git 
a/versioned_docs/version-2.1/data-operate/import/import-way/insert-into-manual.md
 
b/versioned_docs/version-2.1/data-operate/import/import-way/insert-into-manual.md
index eaa02afddf2..a3869067dc8 100644
--- 
a/versioned_docs/version-2.1/data-operate/import/import-way/insert-into-manual.md
+++ 
b/versioned_docs/version-2.1/data-operate/import/import-way/insert-into-manual.md
@@ -26,14 +26,10 @@ under the License.
 
 The INSERT INTO statement supports importing the results of a Doris query into 
another table. INSERT INTO is a synchronous import method, where the import 
result is returned after the import is executed. Whether the import is 
successful can be determined based on the returned result. INSERT INTO ensures 
the atomicity of the import task, meaning that either all the data is imported 
successfully or none of it is imported.
 
-There are primarily two main forms of the INSERT INTO command:
-
 - INSERT INTO tbl SELECT...
-- INSERT INTO tbl (col1, col2, ...) VALUES (1, 2, ...), (1,3, ...)
 
 ## Applicable scenarios
 
-1. If a user wants to import only a few test data records to verify the 
functionality of the Doris system, the INSERT INTO VALUES syntax is applicable. 
It is similar to the MySQL syntax. However, it is not recommended to use INSERT 
INTO VALUES in a production environment.
 2. If a user wants to perform ETL on existing data in a Doris table and then 
import it into a new Doris table, the INSERT INTO SELECT syntax is applicable.
 3. In conjunction with the Multi-Catalog external table mechanism, tables from 
MySQL or Hive systems can be mapped via Multi-Catalog. Then, data from external 
tables can be imported into Doris tables using the INSERT INTO SELECT syntax.
 4. Utilizing the Table Value Functions (TVFs), users can directly query data 
stored in object storage or files on HDFS as tables, with automatic column type 
inference. Then, data from external tables can be imported into Doris tables 
using the INSERT INTO SELECT syntax.
@@ -54,8 +50,6 @@ INSERT INTO requires INSERT permissions on the target table. 
You can grant permi
 
 ### Create an INSERT INTO job
 
-**INSERT INTO VALUES**
-
 1. Create a source table
 
 ```SQL
@@ -68,7 +62,7 @@ DUPLICATE KEY(user_id)
 DISTRIBUTED BY HASH(user_id) BUCKETS 10;
 ```
 
-2. Import data into the source table using `INSERT INTO VALUES` (not 
recommended for production environments).
+2. Import data into the source table using any load method (here we use `INSERT INTO VALUES` as an example).
 
 ```SQL
 INSERT INTO testdb.test_table (user_id, name, age)
@@ -79,34 +73,13 @@ VALUES (1, "Emily", 25),
        (5, "Ava", 17);
 ```
 
-INSERT INTO is a synchronous import method, where the import result is 
directly returned to the user. You can enable [group 
commit](../import-way/group-commit-manual.md) to achieve high performance. 
-
-```JSON
-Query OK, 5 rows affected (0.308 sec)
-{'label':'label_3e52da787aab4222_9126d2fce8f6d1e5', 'status':'VISIBLE', 
'txnId':'9081'}
-```
-
-3. View imported data.
-
-```SQL
-MySQL> SELECT COUNT(*) FROM testdb.test_table;
-+----------+
-| count(*) |
-+----------+
-|        5 |
-+----------+
-1 row in set (0.179 sec)
-```
-
-**INSERT INTO SELECT**
-
-1. Building upon the above operations, create a new table as the target table 
(with the same schema as the source table).
+3. Building upon the above operations, create a new table as the target table 
(with the same schema as the source table).
 
 ```SQL
 CREATE TABLE testdb.test_table2 LIKE testdb.test_table;
 ```
 
-2. Ingest data into the new table using `INSERT INTO SELECT`.
+4. Ingest data into the new table using `INSERT INTO SELECT`.
 
 ```SQL
 INSERT INTO testdb.test_table2
@@ -115,21 +88,23 @@ Query OK, 3 rows affected (0.544 sec)
 {'label':'label_9c2bae970023407d_b2c5b78b368e78a7', 'status':'VISIBLE', 
'txnId':'9084'}
 ```
 
-3. View imported data.
+5. View imported data.
 
 ```SQL
-MySQL> SELECT COUNT(*) FROM testdb.test_table2;
-+----------+
-| count(*) |
-+----------+
-|        3 |
-+----------+
-1 row in set (0.071 sec)
+MySQL> SELECT * FROM testdb.test_table2 ORDER BY age;
++---------+--------+------+
+| user_id | name   | age  |
++---------+--------+------+
+|       5 | Ava    |   17 |
+|       1 | Emily  |   25 |
+|       3 | Olivia |   28 |
++---------+--------+------+
+3 rows in set (0.02 sec)
 ```
 
-4. You can use [JOB](../../scheduler/job-scheduler.md) make the INSERT 
operation execute asynchronously.
+6. You can use [JOB](../../scheduler/job-scheduler.md) to make the INSERT operation execute asynchronously.
 
-5. Sources can be [tvf](../../../lakehouse/file.md) or tables in a 
[catalog](../../../lakehouse/database).
+7. Sources can be [tvf](../../../lakehouse/file.md) or tables in a 
[catalog](../../../lakehouse/database).
 
 ### View INSERT INTO jobs
 
@@ -165,15 +140,6 @@ INSERT INTO target_table SELECT ... FROM source_table;
 
 The SELECT statement above is similar to a regular SELECT query, allowing 
operations such as WHERE and JOIN.
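+
+As an illustration (table and column names below are hypothetical), the source query can filter and join freely:
+
+```sql
+INSERT INTO target_table
+SELECT s.id, s.name
+FROM source_table s
+JOIN dim_table d ON s.id = d.id
+WHERE s.id > 0;
+```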
 
-2. INSERT INTO VALUES
-
-INSERT INTO VALUES is typically used for testing purposes. It is not 
recommended for production environments.
-
-```SQL
-INSERT INTO target_table (col1, col2, ...)
-VALUES (val1, val2, ...), (val3, val4, ...), ...;
-```
-
 ### Parameter configuration
 
 **FE** **configuration**
@@ -194,7 +160,7 @@ enable_insert_strict
 
 - Default value: true
 - Description: If this is set to true, INSERT INTO will fail when the task 
involves invalid data. If set to false, INSERT INTO will ignore invalid rows, 
and the import will be considered successful as long as at least one row is 
imported successfully.
-- Explanation: INSERT INTO cannot control the error rate, so this parameter is 
used to either strictly check data quality or completely ignore invalid data. 
Common reasons for data invalidity include: source data column length exceeding 
destination column length, column type mismatch, partition mismatch, and column 
order mismatch.
+- Explanation: In version 2.1.4 and earlier, INSERT INTO cannot control the error rate, so this parameter is used to either strictly check data quality or completely ignore invalid data. Common reasons for data invalidity include: source data column length exceeding destination column length, column type mismatch, partition mismatch, and column order mismatch.
 
 insert_max_filter_ratio
 
diff --git 
a/versioned_docs/version-2.1/data-operate/import/import-way/insert-into-values-manual.md
 
b/versioned_docs/version-2.1/data-operate/import/import-way/insert-into-values-manual.md
new file mode 100644
index 00000000000..7e43d6ae427
--- /dev/null
+++ 
b/versioned_docs/version-2.1/data-operate/import/import-way/insert-into-values-manual.md
@@ -0,0 +1,305 @@
+---
+{
+    "title": "Insert Into Values",
+    "language": "en"
+}
+---
+
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+The INSERT INTO VALUES statement supports writing a small number of literal rows directly into a Doris table. INSERT INTO VALUES is a synchronous import method, where the import result is returned after the import is executed. Whether the import is successful can be determined based on the returned result. INSERT INTO VALUES ensures the atomicity of the import task, meaning that either all the data is imported successfully or none of it is imported.
+
+- INSERT INTO tbl (col1, col2, ...) VALUES (1, 2, ...), (1,3, ...)
+
+## Applicable scenarios
+
+1. If a user wants to import only a few test data records to verify the 
functionality of the Doris system, the INSERT INTO VALUES syntax is applicable. 
It is similar to the MySQL syntax. However, it is not recommended to use INSERT 
INTO VALUES in a production environment.
+2. The performance of concurrent INSERT INTO VALUES jobs is bottlenecked by the commit stage. When loading large quantities of data, you can enable [group commit](../import-way/group-commit-manual.md) to achieve higher performance. 
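+
+As a sketch, group commit can be switched on per session before running INSERT INTO VALUES (see the group commit manual for the available modes):
+
+```sql
+-- async_mode trades visibility latency for higher write throughput
+SET group_commit = async_mode;
+```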
+
+## Implementation
+
+When using INSERT INTO VALUES, the import job needs to be initiated and 
submitted to the FE node using the MySQL protocol. The FE generates an 
execution plan, which includes query-related operators, with the last operator 
being the OlapTableSink. The OlapTableSink operator is responsible for writing 
the query result to the target table. The execution plan is then sent to the BE 
nodes for execution. Doris designates one BE node as the Coordinator, which 
receives and distributes the data t [...]
+
+## Get started
+
+An INSERT INTO VALUES job is submitted and transmitted using the MySQL 
protocol. The following example demonstrates submitting an import job using 
INSERT INTO VALUES through the MySQL command-line interface.
+
+### Preparation
+
+INSERT INTO VALUES requires INSERT permissions on the target table. You can 
grant permissions to user accounts using the GRANT command.
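+
+For example, a minimal sketch of granting the required privilege (the user name and host below are placeholders):
+
+```sql
+GRANT LOAD_PRIV ON testdb.test_table TO 'example_user'@'%';
+```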
+
+### Create an INSERT INTO VALUES job
+
+**INSERT INTO VALUES**
+
+1. Create a source table
+
+```SQL
+CREATE TABLE testdb.test_table(
+    user_id            BIGINT       NOT NULL COMMENT "User ID",
+    name               VARCHAR(20)           COMMENT "User name",
+    age                INT                   COMMENT "User age"
+)
+DUPLICATE KEY(user_id)
+DISTRIBUTED BY HASH(user_id) BUCKETS 10;
+```
+
+2. Import data into the source table using `INSERT INTO VALUES` (not 
recommended for production environments).
+
+```SQL
+INSERT INTO testdb.test_table (user_id, name, age)
+VALUES (1, "Emily", 25),
+       (2, "Benjamin", 35),
+       (3, "Olivia", 28),
+       (4, "Alexander", 60),
+       (5, "Ava", 17);
+```
+
+INSERT INTO VALUES is a synchronous import method, where the import result is 
directly returned to the user.
+
+```JSON
+Query OK, 5 rows affected (0.308 sec)
+{'label':'label_26eebc33411f441c_b2b286730d495e2c', 'status':'VISIBLE', 
'txnId':'61071'}
+```
+
+3. View imported data.
+
+```SQL
+MySQL> SELECT COUNT(*) FROM testdb.test_table;
++----------+
+| count(*) |
++----------+
+|        5 |
++----------+
+1 row in set (0.179 sec)
+```
+
+### View INSERT INTO VALUES jobs
+
+You can use the `SHOW LOAD` command to view the completed INSERT INTO VALUES 
tasks.
+
+```SQL
+mysql> SHOW LOAD FROM testdb\G
+*************************** 1. row ***************************
+         JobId: 77172
+         Label: label_26eebc33411f441c_b2b286730d495e2c
+         State: FINISHED
+      Progress: Unknown id: 77172
+          Type: INSERT
+       EtlInfo: NULL
+      TaskInfo: cluster:N/A; timeout(s):14400; max_filter_ratio:0.0
+      ErrorMsg: NULL
+    CreateTime: 2024-11-20 16:44:08
+  EtlStartTime: 2024-11-20 16:44:08
+ EtlFinishTime: 2024-11-20 16:44:08
+ LoadStartTime: 2024-11-20 16:44:08
+LoadFinishTime: 2024-11-20 16:44:08
+           URL: 
+    JobDetails: {"Unfinished 
backends":{},"ScannedRows":0,"TaskNumber":0,"LoadBytes":0,"All 
backends":{},"FileNumber":0,"FileSize":0}
+ TransactionId: 61071
+  ErrorTablets: {}
+          User: root
+       Comment: 
+1 row in set (0.00 sec)
+```
+
+### Cancel INSERT INTO VALUES jobs
+
+You can cancel the currently executing INSERT INTO VALUES job via Ctrl-C.
+
+## Manual
+
+### Syntax
+
+INSERT INTO VALUES is typically used for testing purposes. It is not 
recommended for production environments.
+
+```SQL
+INSERT INTO target_table (col1, col2, ...)
+VALUES (val1, val2, ...), (val3, val4, ...), ...;
+```
+
+### Parameter configuration
+
+**FE** **configuration**
+
+insert_load_default_timeout_second
+
+- Default value: 14400s (4 hours)
+- Description: Timeout for import tasks, measured in seconds. If the import 
task does not complete within this timeout period, it will be canceled by the 
system and marked as CANCELLED.
+
+**Environment parameters**
+
+insert_timeout
+
+- Default value: 14400s (4 hours)
+- Description: Timeout for INSERT INTO VALUES as an SQL statement, measured in 
seconds. 
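+
+Since this is a session variable, it can be raised temporarily for a long-running statement (the value below is illustrative):
+
+```sql
+-- extend the INSERT timeout for the current session to 8 hours
+SET insert_timeout = 28800;
+```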
+
+enable_insert_strict
+
+- Default value: true
+- Description: If this is set to true, INSERT INTO VALUES will fail when the 
task involves invalid data. If set to false, INSERT INTO VALUES will ignore 
invalid rows, and the import will be considered successful as long as at least 
one row is imported successfully.
+- Explanation: In version 2.1.4 and earlier, INSERT INTO VALUES cannot control the error rate, so this parameter is used to either strictly check data quality or completely ignore invalid data. Common reasons for data invalidity include: source data column length exceeding destination column length, column type mismatch, partition mismatch, and column order mismatch.
+
+insert_max_filter_ratio
+
+- Default value: 1.0
+
+- Description: Since version 2.1.5. Only effective when `enable_insert_strict` 
is false. Used to control the error tolerance when using `INSERT INTO VALUES`. 
The default value is 1.0, which means all errors are tolerated. It can be a 
decimal between 0 and 1. It means that when the number of error rows exceeds 
this ratio, the INSERT task will fail.
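+
+A sketch of combining the two variables to tolerate up to half of the rows being filtered:
+
+```sql
+-- strict mode off, so insert_max_filter_ratio takes effect
+SET enable_insert_strict = false;
+-- the INSERT task fails once more than 50% of rows are filtered
+SET insert_max_filter_ratio = 0.5;
+```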
+
+### Return values
+
+INSERT INTO VALUES is a SQL statement, and it returns a JSON string in its 
results.
+
+Parameters in the JSON string:
+
+| Parameter | Description                                                  |
+| --------- | ------------------------------------------------------------ |
+| Label     | Label of the import job: can be specified using "INSERT INTO tbl 
WITH LABEL label..." |
+| Status    | Visibility of the imported data. `visible`: the import succeeded and the data is visible. `committed`: the import is completed, but the data may be delayed in becoming visible; there is no need to retry in this case. Label Already Exists:  [...]
+| Err       | Error message                                                |
+| TxnId     | ID of the import transaction                                 |
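+
+As the table notes, a custom label can be attached with the WITH LABEL clause (the label name below is illustrative):
+
+```sql
+INSERT INTO test_table WITH LABEL my_insert_label_001 (user_id, name, age)
+VALUES (1, "Emily", 25);
+```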
+
+**Successful INSERT**
+
+```sql
+mysql> INSERT INTO test_table (user_id, name, age) VALUES (1, "Emily", 25), 
(2, "Benjamin", 35), (3, "Olivia", 28), (NULL, "Alexander", 60), (5, "Ava", 17);
+Query OK, 5 rows affected (0.05 sec)
+{'label':'label_26eebc33411f441c_b2b286730d495e2c', 'status':'VISIBLE', 
'txnId':'61071'}
+```
+
+`Query OK` indicates successful execution. `5 rows affected` indicates that a 
total of 5 rows of data have been imported. 
+
+**Successful INSERT with warnings**
+
+```sql
+mysql> INSERT INTO test_table (user_id, name, age) VALUES (1, "Emily", 25), 
(2, "Benjamin", 35), (3, "Olivia", 28), (NULL, "Alexander", 60), (5, "Ava", 17);
+Query OK, 4 rows affected, 1 warning (0.04 sec)
+{'label':'label_a8d99ae931194d2b_93357aac59981a18', 'status':'VISIBLE', 
'txnId':'61068'}
+```
+
+`Query OK` indicates successful execution. `4 rows affected` indicates that a total of 4 rows of data were imported. `1 warning` indicates that 1 row was filtered out. 
+
+You can use the [SHOW 
LOAD](../../../sql-manual/sql-statements/data-modification/load-and-export/SHOW-LOAD.md)
 statement to view the filtered rows.
+
+The result of this statement will include a URL that can be used to query the 
error data. For more details, refer to the "View error rows" section below.
+
+```sql
+mysql> SHOW LOAD WHERE label="label_a8d99ae931194d2b_93357aac59981a18"\G
+*************************** 1. row ***************************
+         JobId: 77158
+         Label: label_a8d99ae931194d2b_93357aac59981a18
+         State: FINISHED
+      Progress: Unknown id: 77158
+          Type: INSERT
+       EtlInfo: NULL
+      TaskInfo: cluster:N/A; timeout(s):14400; max_filter_ratio:0.0
+      ErrorMsg: NULL
+    CreateTime: 2024-11-20 16:35:40
+  EtlStartTime: 2024-11-20 16:35:40
+ EtlFinishTime: 2024-11-20 16:35:40
+ LoadStartTime: 2024-11-20 16:35:40
+LoadFinishTime: 2024-11-20 16:35:40
+           URL: 
http://10.16.10.7:8743/api/_load_error_log?file=__shard_18/error_log_insert_stmt_a8d99ae931194d2b-93357aac59981a19_a8d99ae931194d2b_93357aac59981a19
+    JobDetails: {"Unfinished 
backends":{},"ScannedRows":0,"TaskNumber":0,"LoadBytes":0,"All 
backends":{},"FileNumber":0,"FileSize":0}
+ TransactionId: 61068
+  ErrorTablets: {}
+          User: root
+       Comment: 
+1 row in set (0.00 sec)
+```
+
+**Successful INSERT with committed status**
+
+```sql
+mysql> INSERT INTO test_table (user_id, name, age) VALUES (1, "Emily", 25), 
(2, "Benjamin", 35), (3, "Olivia", 28), (4, "Alexander", 60), (5, "Ava", 17);
+Query OK, 5 rows affected (0.04 sec)
+{'label':'label_78bf5396d9594d4d_a8d9a914af40f73d', 'status':'COMMITTED', 
'txnId':'61074'}
+```
+
+The invisible state of data is temporary, and the data will eventually become 
visible. 
+
+You can check the visibility status of a batch of data using the [SHOW 
TRANSACTION](../../../sql-manual/sql-statements/Show-Statements/SHOW-TRANSACTION/)
 statement.
+
+If the `TransactionStatus` column in the result is `visible`, it indicates 
that the data is visible.
+
+```sql
+mysql> SHOW TRANSACTION WHERE id=61074\G
+*************************** 1. row ***************************
+     TransactionId: 61074
+             Label: label_78bf5396d9594d4d_a8d9a914af40f73d
+       Coordinator: FE: 10.16.10.7
+ TransactionStatus: VISIBLE
+ LoadJobSourceType: INSERT_STREAMING
+       PrepareTime: 2024-11-20 16:51:54
+     PreCommitTime: NULL
+        CommitTime: 2024-11-20 16:51:54
+       PublishTime: 2024-11-20 16:51:54
+        FinishTime: 2024-11-20 16:51:54
+            Reason: 
+ErrorReplicasCount: 0
+        ListenerId: -1
+         TimeoutMs: 14400000
+            ErrMsg: 
+1 row in set (0.00 sec)
+```
+
+**Failed INSERT**
+
+Failed execution means that no data was successfully imported. An error 
message will be returned:
+
+```sql
+mysql> INSERT INTO test_table (user_id, name, age) VALUES (1, "Emily", 25), 
(2, "Benjamin", 35), (3, "Olivia", 28), (NULL, "Alexander", 60), (5, "Ava", 17);
+ERROR 1105 (HY000): errCode = 2, detailMessage = Insert has too many filtered 
data 1/5 insert_max_filter_ratio is 0.100000. url: 
http://10.16.10.7:8747/api/_load_error_log?file=__shard_22/error_log_insert_stmt_5fafe6663e1a45e0-a666c1722ffc8c55_5fafe6663e1a45e0_a666c1722ffc8c55
+```
+
+`ERROR 1105 (HY000): errCode = 2, detailMessage = Insert has too many filtered 
data 1/5 insert_max_filter_ratio is 0.100000.` indicates the root cause for the 
failure. The URL provided in the error message can be used to locate the error 
data. For more details, refer to the "View error rows" section below.
+
+## Best practice
+
+### Data size
+
+INSERT INTO VALUES is usually used for testing and demos. It is not recommended for loading large amounts of data.
+
+### View error rows
+
+When the INSERT INTO result includes a URL field, you can use the following 
command to view the error rows:
+
+```SQL
+SHOW LOAD WARNINGS ON "url";
+```
+
+Example:
+
+```sql
+mysql> SHOW LOAD WARNINGS ON 
"http://10.16.10.7:8743/api/_load_error_log?file=__shard_18/error_log_insert_stmt_a8d99ae931194d2b-93357aac59981a19_a8d99ae931194d2b_93357aac59981a19"\G
+*************************** 1. row ***************************
+         JobId: -1
+         Label: NULL
+ErrorMsgDetail: Reason: column_name[user_id], null value for not null column, 
type=BIGINT. src line []; 
+1 row in set (0.00 sec)
+```
+
+Common reasons for errors include: source data column length exceeding 
destination column length, column type mismatch, partition mismatch, and column 
order mismatch.
+
+You can control whether INSERT INTO ignores error rows by configuring the 
environment variable `enable_insert_strict`.
+
+## More help
+
+For more detailed syntax on INSERT INTO, refer to the [INSERT 
INTO](../../../sql-manual/sql-statements/data-modification/DML/INSERT.md) 
command manual. You can also type `HELP INSERT` at the MySQL client command 
line for further information.
diff --git 
a/versioned_docs/version-3.0/data-operate/import/import-way/insert-into-manual.md
 
b/versioned_docs/version-3.0/data-operate/import/import-way/insert-into-manual.md
index 87ab3394aa9..a3869067dc8 100644
--- 
a/versioned_docs/version-3.0/data-operate/import/import-way/insert-into-manual.md
+++ 
b/versioned_docs/version-3.0/data-operate/import/import-way/insert-into-manual.md
@@ -26,14 +26,10 @@ under the License.
 
 The INSERT INTO statement supports importing the results of a Doris query into 
another table. INSERT INTO is a synchronous import method, where the import 
result is returned after the import is executed. Whether the import is 
successful can be determined based on the returned result. INSERT INTO ensures 
the atomicity of the import task, meaning that either all the data is imported 
successfully or none of it is imported.
 
-There are primarily two main forms of the INSERT INTO command:
-
 - INSERT INTO tbl SELECT...
-- INSERT INTO tbl (col1, col2, ...) VALUES (1, 2, ...), (1,3, ...)
 
 ## Applicable scenarios
 
-1. If a user wants to import only a few test data records to verify the 
functionality of the Doris system, the INSERT INTO VALUES syntax is applicable. 
It is similar to the MySQL syntax. However, it is not recommended to use INSERT 
INTO VALUES in a production environment.
 2. If a user wants to perform ETL on existing data in a Doris table and then 
import it into a new Doris table, the INSERT INTO SELECT syntax is applicable.
 3. In conjunction with the Multi-Catalog external table mechanism, tables from 
MySQL or Hive systems can be mapped via Multi-Catalog. Then, data from external 
tables can be imported into Doris tables using the INSERT INTO SELECT syntax.
 4. Utilizing the Table Value Functions (TVFs), users can directly query data 
stored in object storage or files on HDFS as tables, with automatic column type 
inference. Then, data from external tables can be imported into Doris tables 
using the INSERT INTO SELECT syntax.
@@ -54,8 +50,6 @@ INSERT INTO requires INSERT permissions on the target table. 
You can grant permi
 
 ### Create an INSERT INTO job
 
-**INSERT INTO VALUES**
-
 1. Create a source table
 
 ```SQL
@@ -68,7 +62,7 @@ DUPLICATE KEY(user_id)
 DISTRIBUTED BY HASH(user_id) BUCKETS 10;
 ```
 
-2. Import data into the source table using `INSERT INTO VALUES` (not 
recommended for production environments).
+2. Import data into the source table using any load method (here we use `INSERT INTO VALUES` as an example).
 
 ```SQL
 INSERT INTO testdb.test_table (user_id, name, age)
@@ -79,34 +73,13 @@ VALUES (1, "Emily", 25),
        (5, "Ava", 17);
 ```
 
-INSERT INTO is a synchronous import method, where the import result is 
directly returned to the user. You can enable [group 
commit](../import-way/group-commit-manual.md) to achieve high performance. 
-
-```JSON
-Query OK, 5 rows affected (0.308 sec)
-{'label':'label_3e52da787aab4222_9126d2fce8f6d1e5', 'status':'VISIBLE', 
'txnId':'9081'}
-```
-
-3. View imported data.
-
-```SQL
-MySQL> SELECT COUNT(*) FROM testdb.test_table;
-+----------+
-| count(*) |
-+----------+
-|        5 |
-+----------+
-1 row in set (0.179 sec)
-```
-
-**INSERT INTO SELECT**
-
-1. Building upon the above operations, create a new table as the target table 
(with the same schema as the source table).
+3. Building upon the above operations, create a new table as the target table 
(with the same schema as the source table).
 
 ```SQL
 CREATE TABLE testdb.test_table2 LIKE testdb.test_table;
 ```
 
-2. Ingest data into the new table using `INSERT INTO SELECT`.
+4. Ingest data into the new table using `INSERT INTO SELECT`.
 
 ```SQL
 INSERT INTO testdb.test_table2
@@ -115,21 +88,23 @@ Query OK, 3 rows affected (0.544 sec)
 {'label':'label_9c2bae970023407d_b2c5b78b368e78a7', 'status':'VISIBLE', 
'txnId':'9084'}
 ```
 
-3. View imported data.
+5. View imported data.
 
 ```SQL
-MySQL> SELECT COUNT(*) FROM testdb.test_table2;
-+----------+
-| count(*) |
-+----------+
-|        3 |
-+----------+
-1 row in set (0.071 sec)
+MySQL> SELECT * FROM testdb.test_table2 ORDER BY age;
++---------+--------+------+
+| user_id | name   | age  |
++---------+--------+------+
+|       5 | Ava    |   17 |
+|       1 | Emily  |   25 |
+|       3 | Olivia |   28 |
++---------+--------+------+
+3 rows in set (0.02 sec)
 ```
 
-4. You can use [JOB](../../scheduler/job-scheduler.md) make the INSERT 
operation execute asynchronously.
+6. You can use [JOB](../../scheduler/job-scheduler.md) to make the INSERT operation execute asynchronously.
 
-5. Sources can be [tvf](../../../lakehouse/file.md) or tables in a 
[catalog](../../../lakehouse/database/jdbc).
+7. Sources can be [tvf](../../../lakehouse/file.md) or tables in a 
[catalog](../../../lakehouse/database).
 
 ### View INSERT INTO jobs
 
@@ -165,15 +140,6 @@ INSERT INTO target_table SELECT ... FROM source_table;
 
 The SELECT statement above is similar to a regular SELECT query, allowing 
operations such as WHERE and JOIN.
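+
+As an illustration (table and column names below are hypothetical), the source query can filter and join freely:
+
+```sql
+INSERT INTO target_table
+SELECT s.id, s.name
+FROM source_table s
+JOIN dim_table d ON s.id = d.id
+WHERE s.id > 0;
+```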
 
-2. INSERT INTO VALUES
-
-INSERT INTO VALUES is typically used for testing purposes. It is not 
recommended for production environments.
-
-```SQL
-INSERT INTO target_table (col1, col2, ...)
-VALUES (val1, val2, ...), (val3, val4, ...), ...;
-```
-
 ### Parameter configuration
 
 **FE** **configuration**
@@ -194,7 +160,7 @@ enable_insert_strict
 
 - Default value: true
 - Description: If this is set to true, INSERT INTO will fail when the task 
involves invalid data. If set to false, INSERT INTO will ignore invalid rows, 
and the import will be considered successful as long as at least one row is 
imported successfully.
-- Explanation: INSERT INTO cannot control the error rate, so this parameter is 
used to either strictly check data quality or completely ignore invalid data. 
Common reasons for data invalidity include: source data column length exceeding 
destination column length, column type mismatch, partition mismatch, and column 
order mismatch.
+- Explanation: Until version 2.1.4, INSERT INTO could not control the error rate, 
so this parameter is used to either strictly check data quality or completely 
ignore invalid data. Common reasons for data invalidity include: source data 
column length exceeding destination column length, column type mismatch, 
partition mismatch, and column order mismatch.
 
 insert_max_filter_ratio
 
diff --git 
a/versioned_docs/version-3.0/data-operate/import/import-way/insert-into-values-manual.md
 
b/versioned_docs/version-3.0/data-operate/import/import-way/insert-into-values-manual.md
new file mode 100644
index 00000000000..7e43d6ae427
--- /dev/null
+++ 
b/versioned_docs/version-3.0/data-operate/import/import-way/insert-into-values-manual.md
@@ -0,0 +1,305 @@
+---
+{
+    "title": "Insert Into Values",
+    "language": "en"
+}
+---
+
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+The INSERT INTO VALUES statement writes values specified directly in the SQL 
statement into a Doris table. INSERT INTO VALUES is a synchronous import method, 
where the import result is returned after the import is executed. Whether the 
import is successful can be determined based on the returned result. INSERT 
INTO VALUES ensures the atomicity of the import task, meaning that either all 
the data is imported successfully or none of it is imported.
+
+- INSERT INTO tbl (col1, col2, ...) VALUES (1, 2, ...), (1, 3, ...)
+
+## Applicable scenarios
+
+1. If a user wants to import only a few test data records to verify the 
functionality of the Doris system, the INSERT INTO VALUES syntax is applicable. 
It is similar to the MySQL syntax. However, it is not recommended to use INSERT 
INTO VALUES in a production environment.
+2. The performance of concurrent INSERT INTO VALUES jobs is bottlenecked by 
the commit stage. When loading large quantities of data, you can enable [group 
commit](../import-way/group-commit-manual.md) to achieve higher performance. 
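+
+A minimal sketch of enabling group commit for the current session before running INSERT INTO VALUES (the `group_commit` session variable and its `async_mode` value follow the group commit manual linked above; verify them against your Doris version):
+
+```SQL
+-- Buffer small INSERT INTO VALUES statements and commit them in batches,
+-- reducing the per-statement commit overhead.
+SET group_commit = async_mode;
+
+INSERT INTO testdb.test_table (user_id, name, age) VALUES (1, "Emily", 25);
+```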
+
+## Implementation
+
+When using INSERT INTO VALUES, the import job needs to be initiated and 
submitted to the FE node using the MySQL protocol. The FE generates an 
execution plan, which includes query-related operators, with the last operator 
being the OlapTableSink. The OlapTableSink operator is responsible for writing 
the query result to the target table. The execution plan is then sent to the BE 
nodes for execution. Doris designates one BE node as the Coordinator, which 
receives and distributes the data t [...]
+
+## Get started
+
+An INSERT INTO VALUES job is submitted and transmitted using the MySQL 
protocol. The following example demonstrates submitting an import job using 
INSERT INTO VALUES through the MySQL command-line interface.
+
+### Preparation
+
+INSERT INTO VALUES requires INSERT permissions on the target table. You can 
grant permissions to user accounts using the GRANT command.
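+
+A minimal sketch of granting the privilege (the user account here is illustrative; in Doris the load-related privilege is `LOAD_PRIV`):
+
+```SQL
+GRANT LOAD_PRIV ON testdb.test_table TO 'test_user'@'%';
+```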
+
+### Create an INSERT INTO VALUES job
+
+**INSERT INTO VALUES**
+
+1. Create a source table
+
+```SQL
+CREATE TABLE testdb.test_table(
+    user_id            BIGINT       NOT NULL COMMENT "User ID",
+    name               VARCHAR(20)           COMMENT "User name",
+    age                INT                   COMMENT "User age"
+)
+DUPLICATE KEY(user_id)
+DISTRIBUTED BY HASH(user_id) BUCKETS 10;
+```
+
+2. Import data into the source table using `INSERT INTO VALUES` (not 
recommended for production environments).
+
+```SQL
+INSERT INTO testdb.test_table (user_id, name, age)
+VALUES (1, "Emily", 25),
+       (2, "Benjamin", 35),
+       (3, "Olivia", 28),
+       (4, "Alexander", 60),
+       (5, "Ava", 17);
+```
+
+INSERT INTO VALUES is a synchronous import method, where the import result is 
directly returned to the user.
+
+```JSON
+Query OK, 5 rows affected (0.308 sec)
+{'label':'label_26eebc33411f441c_b2b286730d495e2c', 'status':'VISIBLE', 
'txnId':'61071'}
+```
+
+3. View imported data.
+
+```SQL
+MySQL> SELECT COUNT(*) FROM testdb.test_table;
++----------+
+| count(*) |
++----------+
+|        5 |
++----------+
+1 row in set (0.179 sec)
+```
+
+### View INSERT INTO VALUES jobs
+
+You can use the `SHOW LOAD` command to view the completed INSERT INTO VALUES 
tasks.
+
+```SQL
+mysql> SHOW LOAD FROM testdb\G
+*************************** 1. row ***************************
+         JobId: 77172
+         Label: label_26eebc33411f441c_b2b286730d495e2c
+         State: FINISHED
+      Progress: Unknown id: 77172
+          Type: INSERT
+       EtlInfo: NULL
+      TaskInfo: cluster:N/A; timeout(s):14400; max_filter_ratio:0.0
+      ErrorMsg: NULL
+    CreateTime: 2024-11-20 16:44:08
+  EtlStartTime: 2024-11-20 16:44:08
+ EtlFinishTime: 2024-11-20 16:44:08
+ LoadStartTime: 2024-11-20 16:44:08
+LoadFinishTime: 2024-11-20 16:44:08
+           URL: 
+    JobDetails: {"Unfinished 
backends":{},"ScannedRows":0,"TaskNumber":0,"LoadBytes":0,"All 
backends":{},"FileNumber":0,"FileSize":0}
+ TransactionId: 61071
+  ErrorTablets: {}
+          User: root
+       Comment: 
+1 row in set (0.00 sec)
+```
+
+### Cancel INSERT INTO VALUES jobs
+
+You can cancel the currently executing INSERT INTO VALUES job via Ctrl-C.
+
+## Manual
+
+### Syntax
+
+INSERT INTO VALUES is typically used for testing purposes. It is not 
recommended for production environments.
+
+```SQL
+INSERT INTO target_table (col1, col2, ...)
+VALUES (val1, val2, ...), (val3, val4, ...), ...;
+```
+
+### Parameter configuration
+
+**FE** **configuration**
+
+insert_load_default_timeout_second
+
+- Default value: 14400s (4 hours)
+- Description: Timeout for import tasks, measured in seconds. If the import 
task does not complete within this timeout period, it will be canceled by the 
system and marked as CANCELLED.
+
+**Environment parameters**
+
+insert_timeout
+
+- Default value: 14400s (4 hours)
+- Description: Timeout for INSERT INTO VALUES as an SQL statement, measured in 
seconds. 
+
+enable_insert_strict
+
+- Default value: true
+- Description: If this is set to true, INSERT INTO VALUES will fail when the 
task involves invalid data. If set to false, INSERT INTO VALUES will ignore 
invalid rows, and the import will be considered successful as long as at least 
one row is imported successfully.
+- Explanation: Until version 2.1.4, INSERT INTO VALUES could not control the 
error rate, so this parameter is used to either strictly check data quality or 
completely ignore invalid data. Common reasons for data invalidity include: 
source data column length exceeding destination column length, column type 
mismatch, partition mismatch, and column order mismatch.
+
+insert_max_filter_ratio
+
+- Default value: 1.0
+
+- Description: Since version 2.1.5. Only effective when `enable_insert_strict` 
is false. It controls the error tolerance of `INSERT INTO VALUES`. The default 
value is 1.0, meaning all error rows are tolerated. The value is a decimal 
between 0 and 1: when the ratio of error rows exceeds it, the INSERT task will 
fail.
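+
+A minimal sketch of adjusting these session variables before an INSERT (the values are illustrative):
+
+```SQL
+-- Tolerate up to 50% filtered rows instead of failing the whole statement.
+SET enable_insert_strict = false;
+SET insert_max_filter_ratio = 0.5;
+-- Extend the statement timeout to 8 hours for a long-running INSERT.
+SET insert_timeout = 28800;
+```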
+
+### Return values
+
+INSERT INTO VALUES is a SQL statement, and it returns a JSON string in its 
results.
+
+Parameters in the JSON string:
+
+| Parameter | Description                                                  |
+| --------- | ------------------------------------------------------------ |
+| Label     | Label of the import job: can be specified using "INSERT INTO tbl 
WITH LABEL label..." |
+| Status    | Visibility of the imported data. `visible`: the import is 
successful and the data is visible. `committed`: the import is completed, but 
the data may be delayed in becoming visible; there is no need to retry in this 
case. Label Already Exists:  [...]
+| Err       | Error message                                                |
+| TxnId     | ID of the import transaction                                 |
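+
+As the Label parameter notes, a label can be specified explicitly; a minimal sketch (the label name is illustrative):
+
+```SQL
+INSERT INTO testdb.test_table WITH LABEL my_insert_20241120 (user_id, name, age)
+VALUES (1, "Emily", 25);
+```
+
+Specifying the label up front makes it easy to find the job later, e.g. with `SHOW LOAD WHERE label="my_insert_20241120"`.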
+
+**Successful INSERT**
+
+```sql
+mysql> INSERT INTO test_table (user_id, name, age) VALUES (1, "Emily", 25), 
(2, "Benjamin", 35), (3, "Olivia", 28), (4, "Alexander", 60), (5, "Ava", 17);
+Query OK, 5 rows affected (0.05 sec)
+{'label':'label_26eebc33411f441c_b2b286730d495e2c', 'status':'VISIBLE', 
'txnId':'61071'}
+```
+
+`Query OK` indicates successful execution. `5 rows affected` indicates that a 
total of 5 rows of data have been imported. 
+
+**Successful INSERT with warnings**
+
+```sql
+mysql> INSERT INTO test_table (user_id, name, age) VALUES (1, "Emily", 25), 
(2, "Benjamin", 35), (3, "Olivia", 28), (NULL, "Alexander", 60), (5, "Ava", 17);
+Query OK, 4 rows affected, 1 warning (0.04 sec)
+{'label':'label_a8d99ae931194d2b_93357aac59981a18', 'status':'VISIBLE', 
'txnId':'61068'}
+```
+
+`Query OK` indicates successful execution. `4 rows affected` indicates that a 
total of 4 rows of data have been imported. `1 warning` indicates the number 
of rows that were filtered out. 
+
+You can use the [SHOW 
LOAD](../../../sql-manual/sql-statements/data-modification/load-and-export/SHOW-LOAD.md)
 statement to view the filtered rows.
+
+The result of this statement will include a URL that can be used to query the 
error data. For more details, refer to the "View error rows" section below.
+
+```sql
+mysql> SHOW LOAD WHERE label="label_a8d99ae931194d2b_93357aac59981a18"\G
+*************************** 1. row ***************************
+         JobId: 77158
+         Label: label_a8d99ae931194d2b_93357aac59981a18
+         State: FINISHED
+      Progress: Unknown id: 77158
+          Type: INSERT
+       EtlInfo: NULL
+      TaskInfo: cluster:N/A; timeout(s):14400; max_filter_ratio:0.0
+      ErrorMsg: NULL
+    CreateTime: 2024-11-20 16:35:40
+  EtlStartTime: 2024-11-20 16:35:40
+ EtlFinishTime: 2024-11-20 16:35:40
+ LoadStartTime: 2024-11-20 16:35:40
+LoadFinishTime: 2024-11-20 16:35:40
+           URL: 
http://10.16.10.7:8743/api/_load_error_log?file=__shard_18/error_log_insert_stmt_a8d99ae931194d2b-93357aac59981a19_a8d99ae931194d2b_93357aac59981a19
+    JobDetails: {"Unfinished 
backends":{},"ScannedRows":0,"TaskNumber":0,"LoadBytes":0,"All 
backends":{},"FileNumber":0,"FileSize":0}
+ TransactionId: 61068
+  ErrorTablets: {}
+          User: root
+       Comment: 
+1 row in set (0.00 sec)
+```
+
+**Successful INSERT with committed status**
+
+```sql
+mysql> INSERT INTO test_table (user_id, name, age) VALUES (1, "Emily", 25), 
(2, "Benjamin", 35), (3, "Olivia", 28), (4, "Alexander", 60), (5, "Ava", 17);
+Query OK, 5 rows affected (0.04 sec)
+{'label':'label_78bf5396d9594d4d_a8d9a914af40f73d', 'status':'COMMITTED', 
'txnId':'61074'}
+```
+
+The invisible state of data is temporary, and the data will eventually become 
visible. 
+
+You can check the visibility status of a batch of data using the [SHOW 
TRANSACTION](../../../sql-manual/sql-statements/Show-Statements/SHOW-TRANSACTION/)
 statement.
+
+If the `TransactionStatus` column in the result is `visible`, it indicates 
that the data is visible.
+
+```sql
+mysql> SHOW TRANSACTION WHERE id=61074\G
+*************************** 1. row ***************************
+     TransactionId: 61074
+             Label: label_78bf5396d9594d4d_a8d9a914af40f73d
+       Coordinator: FE: 10.16.10.7
+ TransactionStatus: VISIBLE
+ LoadJobSourceType: INSERT_STREAMING
+       PrepareTime: 2024-11-20 16:51:54
+     PreCommitTime: NULL
+        CommitTime: 2024-11-20 16:51:54
+       PublishTime: 2024-11-20 16:51:54
+        FinishTime: 2024-11-20 16:51:54
+            Reason: 
+ErrorReplicasCount: 0
+        ListenerId: -1
+         TimeoutMs: 14400000
+            ErrMsg: 
+1 row in set (0.00 sec)
+```
+
+**Failed INSERT**
+
+Failed execution means that no data was successfully imported. An error 
message will be returned:
+
+```sql
+mysql> INSERT INTO test_table (user_id, name, age) VALUES (1, "Emily", 25), 
(2, "Benjamin", 35), (3, "Olivia", 28), (NULL, "Alexander", 60), (5, "Ava", 17);
+ERROR 1105 (HY000): errCode = 2, detailMessage = Insert has too many filtered 
data 1/5 insert_max_filter_ratio is 0.100000. url: 
http://10.16.10.7:8747/api/_load_error_log?file=__shard_22/error_log_insert_stmt_5fafe6663e1a45e0-a666c1722ffc8c55_5fafe6663e1a45e0_a666c1722ffc8c55
+```
+
+`ERROR 1105 (HY000): errCode = 2, detailMessage = Insert has too many filtered 
data 1/5 insert_max_filter_ratio is 0.100000.` indicates the root cause for the 
failure. The URL provided in the error message can be used to locate the error 
data. For more details, refer to the "View error rows" section below.
+
+## Best practice
+
+### Data size
+
+INSERT INTO VALUES is usually used for testing and demos. It is not recommended 
to load large quantities of data with INSERT INTO VALUES.
+
+### View error rows
+
+When the INSERT INTO result includes a URL field, you can use the following 
command to view the error rows:
+
+```SQL
+SHOW LOAD WARNINGS ON "url";
+```
+
+Example:
+
+```sql
+mysql> SHOW LOAD WARNINGS ON 
"http://10.16.10.7:8743/api/_load_error_log?file=__shard_18/error_log_insert_stmt_a8d99ae931194d2b-93357aac59981a19_a8d99ae931194d2b_93357aac59981a19"\G
+*************************** 1. row ***************************
+         JobId: -1
+         Label: NULL
+ErrorMsgDetail: Reason: column_name[user_id], null value for not null column, 
type=BIGINT. src line []; 
+1 row in set (0.00 sec)
+```
+
+Common reasons for errors include: source data column length exceeding 
destination column length, column type mismatch, partition mismatch, and column 
order mismatch.
+
+You can control whether INSERT INTO VALUES ignores error rows by configuring 
the environment variable `enable_insert_strict`.
+
+## More help
+
+For more detailed syntax on INSERT INTO, refer to the [INSERT 
INTO](../../../sql-manual/sql-statements/data-modification/DML/INSERT.md) 
command manual. You can also type `HELP INSERT` at the MySQL client command 
line for further information.
diff --git a/versioned_sidebars/version-2.1-sidebars.json 
b/versioned_sidebars/version-2.1-sidebars.json
index e2a6c2b99da..80309bfc725 100644
--- a/versioned_sidebars/version-2.1-sidebars.json
+++ b/versioned_sidebars/version-2.1-sidebars.json
@@ -150,6 +150,7 @@
                                         
"data-operate/import/import-way/broker-load-manual",
                                         
"data-operate/import/import-way/routine-load-manual",
                                         
"data-operate/import/import-way/insert-into-manual",
+                                        
"data-operate/import/import-way/insert-into-values-manual",
                                         
"data-operate/import/import-way/mysql-load-manual",
                                         
"data-operate/import/import-way/group-commit-manual"
                                     ]
diff --git a/versioned_sidebars/version-3.0-sidebars.json 
b/versioned_sidebars/version-3.0-sidebars.json
index c88149fbe4d..3432f5bcf12 100644
--- a/versioned_sidebars/version-3.0-sidebars.json
+++ b/versioned_sidebars/version-3.0-sidebars.json
@@ -168,6 +168,7 @@
                                         
"data-operate/import/import-way/broker-load-manual",
                                         
"data-operate/import/import-way/routine-load-manual",
                                         
"data-operate/import/import-way/insert-into-manual",
+                                        
"data-operate/import/import-way/insert-into-values-manual",
                                         
"data-operate/import/import-way/mysql-load-manual",
                                         
"data-operate/import/import-way/group-commit-manual"
                                     ]


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org
For additional commands, e-mail: commits-h...@doris.apache.org

Reply via email to