This is an automated email from the ASF dual-hosted git repository.

morningman pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git


The following commit(s) were added to refs/heads/master by this push:
     new 594da165c80 [doc](typo) fix some typo in `update` section (#1267)
594da165c80 is described below

commit 594da165c80451dab827d99a6ddd1c8062fd7693
Author: yagagagaga <zhangminkefromflyd...@gmail.com>
AuthorDate: Mon Nov 4 20:54:40 2024 +0800

    [doc](typo) fix some typo in `update` section (#1267)
    
    # Versions
    
    - [x] dev
    - [x] 3.0
    - [x] 2.1
    - [x] 2.0
    
    # Languages
    
    - [x] Chinese
    - [x] English
---
 .../update/unique-update-transaction.md            | 30 +++++++++++-----------
 .../update/update-of-aggregate-model.md            |  8 ++++--
 docs/data-operate/update/update-of-unique-model.md |  2 +-
 .../update/unique-update-transaction.md            | 28 ++++++++++----------
 .../update/update-of-aggregate-model.md            |  8 ++++--
 .../data-operate/update/update-of-unique-model.md  |  4 +--
 .../update/unique-update-transaction.md            | 28 ++++++++++----------
 .../update/update-of-aggregate-model.md            |  8 ++++--
 .../data-operate/update/update-of-unique-model.md  |  6 ++---
 .../update/unique-update-transaction.md            | 28 ++++++++++----------
 .../update/update-of-aggregate-model.md            |  8 ++++--
 .../data-operate/update/update-of-unique-model.md  |  6 ++---
 .../update/unique-update-transaction.md            | 28 ++++++++++----------
 .../update/update-of-aggregate-model.md            |  8 ++++--
 .../data-operate/update/update-of-unique-model.md  |  6 ++---
 .../update/unique-update-transaction.md            | 30 +++++++++++-----------
 .../update/update-of-aggregate-model.md            |  8 ++++--
 .../data-operate/update/update-of-unique-model.md  |  4 +--
 .../update/unique-update-transaction.md            | 30 +++++++++++-----------
 .../update/update-of-aggregate-model.md            |  8 ++++--
 .../data-operate/update/update-of-unique-model.md  |  4 +--
 .../update/unique-update-transaction.md            | 30 +++++++++++-----------
 .../update/update-of-aggregate-model.md            |  8 ++++--
 .../data-operate/update/update-of-unique-model.md  |  4 +--
 24 files changed, 182 insertions(+), 150 deletions(-)

diff --git a/docs/data-operate/update/unique-update-transaction.md b/docs/data-operate/update/unique-update-transaction.md
index 479f1d06e00..a6422a7994c 100644
--- a/docs/data-operate/update/unique-update-transaction.md
+++ b/docs/data-operate/update/unique-update-transaction.md
@@ -201,7 +201,7 @@ PROPERTIES(
 Table structure:
 
 ```sql
-MySQL  desc test_table;
+MySQL > desc test_table;
 +-------------+--------------+------+-------+---------+---------+
 | Field       | Type         | Null | Key   | Default | Extra   |
 +-------------+--------------+------+-------+---------+---------+
@@ -218,12 +218,12 @@ MySQL  desc test_table;
 Load the following data:
 
 ```Plain
-1       2020-02-22      1       2020-02-21      a
-1       2020-02-22      1       2020-02-22      b
-1       2020-02-22      1       2020-03-05      c
-1       2020-02-22      1       2020-02-26      d
-1       2020-02-22      1       2020-02-23      e
-1       2020-02-22      1       2020-02-24      b
+1      2020-02-22      1       2020-02-21      a
+1      2020-02-22      1       2020-02-22      b
+1      2020-02-22      1       2020-03-05      c
+1      2020-02-22      1       2020-02-26      d
+1      2020-02-22      1       2020-02-23      e
+1      2020-02-22      1       2020-02-24      b
 ```
 
 Here is an example using Stream Load:
@@ -235,7 +235,7 @@ curl --location-trusted -u root: -T testData http://host:port/api/test/test_tabl
 The result is:
 
 ```sql
-MySQL  select * from test_table;
+MySQL > select * from test_table;
 +---------+------------+----------+-------------+---------+
 | user_id | date       | group_id | modify_date | keyword |
 +---------+------------+----------+-------------+---------+
@@ -250,14 +250,14 @@ In the data load, because the value of the sequence column (i.e., modify_date) '
 After completing the above steps, load the following data:
 
 ```Plain
-1       2020-02-22      1       2020-02-22      a
-1       2020-02-22      1       2020-02-23      b
+1      2020-02-22      1       2020-02-22      a
+1      2020-02-22      1       2020-02-23      b
 ```
 
 Query the data:
 
 ```sql
-MySQL [test] select * from test_table;
+MySQL [test] > select * from test_table;
 +---------+------------+----------+-------------+---------+
 | user_id | date       | group_id | modify_date | keyword |
 +---------+------------+----------+-------------+---------+
@@ -270,14 +270,14 @@ In the loaded data, the sequence column (modify_date) of all previously loaded d
 **4. Try loading the following data again**
 
 ```Plain
-1       2020-02-22      1       2020-02-22      a
-1       2020-02-22      1       2020-03-23      w
+1      2020-02-22      1       2020-02-22      a
+1      2020-02-22      1       2020-03-23      w
 ```
 
 Query the data:
 
 ```sql
-MySQL [test] select * from test_table;
+MySQL [test] > select * from test_table;
 +---------+------------+----------+-------------+---------+
 | user_id | date       | group_id | modify_date | keyword |
 +---------+------------+----------+-------------+---------+
@@ -297,4 +297,4 @@ Table test_tbl has sequence column, need to specify the sequence column
 
 2. Since version 2.0, Doris has supported partial column updates for Merge-on-Write implementation of Unique Key tables. In partial column update, users can update only a subset of columns each time, so it is not necessary to include the sequence column. If the loading task submitted by the user includes the sequence column, it has no effect. If the loading task submitted by the user does not include the sequence column, Doris will use the value of the matching sequence column from the h [...]
 
-3. In cases of concurrent data load, Doris utilizes MVCC (Multi-Version Concurrency Control) mechanism to ensure data correctness. If two batches of loaded data update different columns of the same key, the load task with a higher system version will reapply the data for the same key written by the load task with a lower version after the lower version load task succeeds.
\ No newline at end of file
+3. In cases of concurrent data load, Doris utilizes MVCC (Multi-Version Concurrency Control) mechanism to ensure data correctness. If two batches of loaded data update different columns of the same key, the load task with a higher system version will reapply the data for the same key written by the load task with a lower version after the lower version load task succeeds.
diff --git a/docs/data-operate/update/update-of-aggregate-model.md b/docs/data-operate/update/update-of-aggregate-model.md
index 945bc013c47..4b5ef906756 100644
--- a/docs/data-operate/update/update-of-aggregate-model.md
+++ b/docs/data-operate/update/update-of-aggregate-model.md
@@ -68,8 +68,12 @@ For Stream Load, Broker Load, Routine Load, or INSERT INTO, you can directly wri
 
 Using the same example as above, the corresponding Stream Load command would be (no additional headers required):
 
-```Plain
-curl  --location-trusted -u root: -H "column_separator:," -H "columns:order_id,order_status" -T /tmp/update.csv http://127.0.0.1:48037/api/db1/order_tbl/_stream_load
+```shell
+$ cat update.csv
+
+1,To be shipped
+
+$ curl  --location-trusted -u root: -H "column_separator:," -H "columns:order_id,order_status" -T /tmp/update.csv http://127.0.0.1:8030/api/db1/order_tbl/_stream_load
 ```
 
 The corresponding `INSERT INTO` statement would be (no additional session variables required):
diff --git a/docs/data-operate/update/update-of-unique-model.md b/docs/data-operate/update/update-of-unique-model.md
index 9287e03c693..9c4d7f20bbe 100644
--- a/docs/data-operate/update/update-of-unique-model.md
+++ b/docs/data-operate/update/update-of-unique-model.md
@@ -116,7 +116,7 @@ $ cat update.csv
 
 1,To be shipped
 
-$ curl --location-trusted -u root: -H "partial_columns:true" -H "column_separator:," -H "columns:order_id,order_status" -T /tmp/update.csv http://127.0.0.1:48037/api/db1/order_tbl/_stream_load
+$ curl --location-trusted -u root: -H "partial_columns:true" -H "column_separator:," -H "columns:order_id,order_status" -T /tmp/update.csv http://127.0.0.1:8030/api/db1/order_tbl/_stream_load
 ```
 
 If you are using `INSERT INTO`, you can update as following methods:
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/update/unique-update-transaction.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/update/unique-update-transaction.md
index d8722313fee..ec4669817db 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/update/unique-update-transaction.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/update/unique-update-transaction.md
@@ -217,12 +217,12 @@ MySQL  desc test_table;
 导入如下数据
 
 ```Plain
-1       2020-02-22      1       2020-02-21      a
-1       2020-02-22      1       2020-02-22      b
-1       2020-02-22      1       2020-03-05      c
-1       2020-02-22      1       2020-02-26      d
-1       2020-02-22      1       2020-02-23      e
-1       2020-02-22      1       2020-02-24      b
+1      2020-02-22      1       2020-02-21      a
+1      2020-02-22      1       2020-02-22      b
+1      2020-02-22      1       2020-03-05      c
+1      2020-02-22      1       2020-02-26      d
+1      2020-02-22      1       2020-02-23      e
+1      2020-02-22      1       2020-02-24      b
 ```
 
 此处以 stream load 为例
@@ -234,7 +234,7 @@ curl --location-trusted -u root: -T testData http://host:port/api/test/test_tabl
 结果为
 
 ```sql
-MySQL  select * from test_table;
+MySQL > select * from test_table;
 +---------+------------+----------+-------------+---------+
 | user_id | date       | group_id | modify_date | keyword |
 +---------+------------+----------+-------------+---------+
@@ -249,14 +249,14 @@ MySQL  select * from test_table;
 上述步骤完成后,接着导入如下数据
 
 ```Plain
-1       2020-02-22      1       2020-02-22      a
-1       2020-02-22      1       2020-02-23      b
+1      2020-02-22      1       2020-02-22      a
+1      2020-02-22      1       2020-02-23      b
 ```
 
 查询数据
 
 ```sql
-MySQL [test] select * from test_table;
+MySQL [test] > select * from test_table;
 +---------+------------+----------+-------------+---------+
 | user_id | date       | group_id | modify_date | keyword |
 +---------+------------+----------+-------------+---------+
@@ -269,14 +269,14 @@ MySQL [test] select * from test_table;
 **4. 再尝试导入如下数据**
 
 ```Plain
-1       2020-02-22      1       2020-02-22      a
-1       2020-02-22      1       2020-03-23      w
+1      2020-02-22      1       2020-02-22      a
+1      2020-02-22      1       2020-03-23      w
 ```
 
 查询数据
 
 ```sql
-MySQL [test] select * from test_table;
+MySQL [test] > select * from test_table;
 +---------+------------+----------+-------------+---------+
 | user_id | date       | group_id | modify_date | keyword |
 +---------+------------+----------+-------------+---------+
@@ -296,4 +296,4 @@ Table test_tbl has sequence column, need to specify the sequence column
 
 2. 自版本 2.0 起,Doris 对 Unique Key 表的 Merge-on-Write 实现支持了部分列更新能力,在部分列更新导入中,用户每次可以只更新一部分列,因此并不是必须要包含 sequence 列。若用户提交的导入任务中,包含 sequence 列,则行为无影响;若用户提交的导入任务不包含 sequence 列,Doris 会使用匹配的历史数据中的 sequence 列作为更新后该行的 sequence 列的值。如果历史数据中不存在相同 key 的列,则会自动用 null 或默认值填充。
 
-3. 当出现并发导入时,Doris 会利用 MVCC 机制来保证数据的正确性。如果两批数据导入都更新了一个相同 key 的不同列,则其中系统版本较高的导入任务会在版本较低的导入任务成功后,使用版本较低的导入任务写入的相同 key 的数据行重新进行补齐。
\ No newline at end of file
+3. 当出现并发导入时,Doris 会利用 MVCC 机制来保证数据的正确性。如果两批数据导入都更新了一个相同 key 的不同列,则其中系统版本较高的导入任务会在版本较低的导入任务成功后,使用版本较低的导入任务写入的相同 key 的数据行重新进行补齐。
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/update/update-of-aggregate-model.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/update/update-of-aggregate-model.md
index 004a9bc01ff..fe2fbe3f98e 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/update/update-of-aggregate-model.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/update/update-of-aggregate-model.md
@@ -67,7 +67,11 @@ PROPERTIES (
 与前面例子相同,对应的 Stream Load 命令为(不需要额外的 header):
 
 ```shell
-curl  --location-trusted -u root: -H "column_separator:," -H "columns:order_id,order_status" -T /tmp/update.csv http://127.0.0.1:48037/api/db1/order_tbl/_stream_load
+$ cat update.csv
+
+1,To be shipped
+
+curl  --location-trusted -u root: -H "column_separator:," -H "columns:order_id,order_status" -T /tmp/update.csv http://127.0.0.1:8030/api/db1/order_tbl/_stream_load
 ```
 
 对应的`INSERT INTO`语句为(不需要额外设置 session variable):
@@ -80,4 +84,4 @@ INSERT INTO order_tbl (order_id, order_status) values (1,'待发货');
 
 Aggregate Key 模型在写入过程中不做任何额外处理,所以写入性能不受影响,与普通的数据导入相同。但是在查询时进行聚合的代价较大,典型的聚合查询性能相比 Unique Key 模型的 Merge-on-Write 实现会有 5-10 倍的下降。
 
-用户无法通过将某个字段由非 NULL 设置为 NULL,写入的 NULL 值在`REPLACE_IF_NOT_NULL`聚合函数的处理中会自动忽略。
\ No newline at end of file
+用户无法通过将某个字段由非 NULL 设置为 NULL,写入的 NULL 值在`REPLACE_IF_NOT_NULL`聚合函数的处理中会自动忽略。
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/update/update-of-unique-model.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/update/update-of-unique-model.md
index f6c45e9767a..0d862a86774 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/update/update-of-unique-model.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/update/update-of-unique-model.md
@@ -112,11 +112,11 @@ set enable_unique_key_partial_update=true
 若使用 StreamLoad 可以通过如下方式进行更新:
 
 ```sql
-$cat update.csv
+$ cat update.csv
 
 1,待发货
 
-$ curl  --location-trusted -u root: -H "partial_columns:true" -H "column_separator:," -H "columns:order_id,order_status" -T /tmp/update.csv http://127.0.0.1:48037/api/db1/order_tbl/_stream_load
+$ curl  --location-trusted -u root: -H "partial_columns:true" -H "column_separator:," -H "columns:order_id,order_status" -T /tmp/update.csv http://127.0.0.1:8030/api/db1/order_tbl/_stream_load
 ```
 
 若使用`INSRT INTO`可以通过如下方式进行更新:
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/data-operate/update/unique-update-transaction.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/data-operate/update/unique-update-transaction.md
index d8722313fee..ec4669817db 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/data-operate/update/unique-update-transaction.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/data-operate/update/unique-update-transaction.md
@@ -217,12 +217,12 @@ MySQL  desc test_table;
 导入如下数据
 
 ```Plain
-1       2020-02-22      1       2020-02-21      a
-1       2020-02-22      1       2020-02-22      b
-1       2020-02-22      1       2020-03-05      c
-1       2020-02-22      1       2020-02-26      d
-1       2020-02-22      1       2020-02-23      e
-1       2020-02-22      1       2020-02-24      b
+1      2020-02-22      1       2020-02-21      a
+1      2020-02-22      1       2020-02-22      b
+1      2020-02-22      1       2020-03-05      c
+1      2020-02-22      1       2020-02-26      d
+1      2020-02-22      1       2020-02-23      e
+1      2020-02-22      1       2020-02-24      b
 ```
 
 此处以 stream load 为例
@@ -234,7 +234,7 @@ curl --location-trusted -u root: -T testData http://host:port/api/test/test_tabl
 结果为
 
 ```sql
-MySQL  select * from test_table;
+MySQL > select * from test_table;
 +---------+------------+----------+-------------+---------+
 | user_id | date       | group_id | modify_date | keyword |
 +---------+------------+----------+-------------+---------+
@@ -249,14 +249,14 @@ MySQL  select * from test_table;
 上述步骤完成后,接着导入如下数据
 
 ```Plain
-1       2020-02-22      1       2020-02-22      a
-1       2020-02-22      1       2020-02-23      b
+1      2020-02-22      1       2020-02-22      a
+1      2020-02-22      1       2020-02-23      b
 ```
 
 查询数据
 
 ```sql
-MySQL [test] select * from test_table;
+MySQL [test] > select * from test_table;
 +---------+------------+----------+-------------+---------+
 | user_id | date       | group_id | modify_date | keyword |
 +---------+------------+----------+-------------+---------+
@@ -269,14 +269,14 @@ MySQL [test] select * from test_table;
 **4. 再尝试导入如下数据**
 
 ```Plain
-1       2020-02-22      1       2020-02-22      a
-1       2020-02-22      1       2020-03-23      w
+1      2020-02-22      1       2020-02-22      a
+1      2020-02-22      1       2020-03-23      w
 ```
 
 查询数据
 
 ```sql
-MySQL [test] select * from test_table;
+MySQL [test] > select * from test_table;
 +---------+------------+----------+-------------+---------+
 | user_id | date       | group_id | modify_date | keyword |
 +---------+------------+----------+-------------+---------+
@@ -296,4 +296,4 @@ Table test_tbl has sequence column, need to specify the sequence column
 
 2. 自版本 2.0 起,Doris 对 Unique Key 表的 Merge-on-Write 实现支持了部分列更新能力,在部分列更新导入中,用户每次可以只更新一部分列,因此并不是必须要包含 sequence 列。若用户提交的导入任务中,包含 sequence 列,则行为无影响;若用户提交的导入任务不包含 sequence 列,Doris 会使用匹配的历史数据中的 sequence 列作为更新后该行的 sequence 列的值。如果历史数据中不存在相同 key 的列,则会自动用 null 或默认值填充。
 
-3. 当出现并发导入时,Doris 会利用 MVCC 机制来保证数据的正确性。如果两批数据导入都更新了一个相同 key 的不同列,则其中系统版本较高的导入任务会在版本较低的导入任务成功后,使用版本较低的导入任务写入的相同 key 的数据行重新进行补齐。
\ No newline at end of file
+3. 当出现并发导入时,Doris 会利用 MVCC 机制来保证数据的正确性。如果两批数据导入都更新了一个相同 key 的不同列,则其中系统版本较高的导入任务会在版本较低的导入任务成功后,使用版本较低的导入任务写入的相同 key 的数据行重新进行补齐。
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/data-operate/update/update-of-aggregate-model.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/data-operate/update/update-of-aggregate-model.md
index 004a9bc01ff..fe2fbe3f98e 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/data-operate/update/update-of-aggregate-model.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/data-operate/update/update-of-aggregate-model.md
@@ -67,7 +67,11 @@ PROPERTIES (
 与前面例子相同,对应的 Stream Load 命令为(不需要额外的 header):
 
 ```shell
-curl  --location-trusted -u root: -H "column_separator:," -H "columns:order_id,order_status" -T /tmp/update.csv http://127.0.0.1:48037/api/db1/order_tbl/_stream_load
+$ cat update.csv
+
+1,To be shipped
+
+curl  --location-trusted -u root: -H "column_separator:," -H "columns:order_id,order_status" -T /tmp/update.csv http://127.0.0.1:8030/api/db1/order_tbl/_stream_load
 ```
 
 对应的`INSERT INTO`语句为(不需要额外设置 session variable):
@@ -80,4 +84,4 @@ INSERT INTO order_tbl (order_id, order_status) values (1,'待发货');
 
 Aggregate Key 模型在写入过程中不做任何额外处理,所以写入性能不受影响,与普通的数据导入相同。但是在查询时进行聚合的代价较大,典型的聚合查询性能相比 Unique Key 模型的 Merge-on-Write 实现会有 5-10 倍的下降。
 
-用户无法通过将某个字段由非 NULL 设置为 NULL,写入的 NULL 值在`REPLACE_IF_NOT_NULL`聚合函数的处理中会自动忽略。
\ No newline at end of file
+用户无法通过将某个字段由非 NULL 设置为 NULL,写入的 NULL 值在`REPLACE_IF_NOT_NULL`聚合函数的处理中会自动忽略。
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/data-operate/update/update-of-unique-model.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/data-operate/update/update-of-unique-model.md
index 19f1725a7c1..486b2841003 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/data-operate/update/update-of-unique-model.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/data-operate/update/update-of-unique-model.md
@@ -112,11 +112,11 @@ set enable_unique_key_partial_update=true
 若使用 StreamLoad 可以通过如下方式进行更新:
 
 ```sql
-$cat update.csv
+$ cat update.csv
 
 1,待发货
 
-$ curl  --location-trusted -u root: -H "partial_columns:true" -H "column_separator:," -H "columns:order_id,order_status" -T /tmp/update.csv http://127.0.0.1:48037/api/db1/order_tbl/_stream_load
+$ curl  --location-trusted -u root: -H "partial_columns:true" -H "column_separator:," -H "columns:order_id,order_status" -T /tmp/update.csv http://127.0.0.1:8030/api/db1/order_tbl/_stream_load
 ```
 
 若使用`INSRT INTO`可以通过如下方式进行更新:
@@ -153,4 +153,4 @@ INSERT INTO order_tbl (order_id, order_status) values (1,'待发货');
 
 目前,同一批次数据写入任务(无论是导入任务还是`INSERT INTO`)的所有行只能更新相同的列,如果需要更新不同列的数据,则需要分不同的批次进行写入。
 
-在未来版本中,将支持灵活的列更新,用户可以在同一批导入中,每一行更新不同的列。
\ No newline at end of file
+在未来版本中,将支持灵活的列更新,用户可以在同一批导入中,每一行更新不同的列。
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/update/unique-update-transaction.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/update/unique-update-transaction.md
index d8722313fee..ec4669817db 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/update/unique-update-transaction.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/update/unique-update-transaction.md
@@ -217,12 +217,12 @@ MySQL  desc test_table;
 导入如下数据
 
 ```Plain
-1       2020-02-22      1       2020-02-21      a
-1       2020-02-22      1       2020-02-22      b
-1       2020-02-22      1       2020-03-05      c
-1       2020-02-22      1       2020-02-26      d
-1       2020-02-22      1       2020-02-23      e
-1       2020-02-22      1       2020-02-24      b
+1      2020-02-22      1       2020-02-21      a
+1      2020-02-22      1       2020-02-22      b
+1      2020-02-22      1       2020-03-05      c
+1      2020-02-22      1       2020-02-26      d
+1      2020-02-22      1       2020-02-23      e
+1      2020-02-22      1       2020-02-24      b
 ```
 
 此处以 stream load 为例
@@ -234,7 +234,7 @@ curl --location-trusted -u root: -T testData http://host:port/api/test/test_tabl
 结果为
 
 ```sql
-MySQL  select * from test_table;
+MySQL > select * from test_table;
 +---------+------------+----------+-------------+---------+
 | user_id | date       | group_id | modify_date | keyword |
 +---------+------------+----------+-------------+---------+
@@ -249,14 +249,14 @@ MySQL  select * from test_table;
 上述步骤完成后,接着导入如下数据
 
 ```Plain
-1       2020-02-22      1       2020-02-22      a
-1       2020-02-22      1       2020-02-23      b
+1      2020-02-22      1       2020-02-22      a
+1      2020-02-22      1       2020-02-23      b
 ```
 
 查询数据
 
 ```sql
-MySQL [test] select * from test_table;
+MySQL [test] > select * from test_table;
 +---------+------------+----------+-------------+---------+
 | user_id | date       | group_id | modify_date | keyword |
 +---------+------------+----------+-------------+---------+
@@ -269,14 +269,14 @@ MySQL [test] select * from test_table;
 **4. 再尝试导入如下数据**
 
 ```Plain
-1       2020-02-22      1       2020-02-22      a
-1       2020-02-22      1       2020-03-23      w
+1      2020-02-22      1       2020-02-22      a
+1      2020-02-22      1       2020-03-23      w
 ```
 
 查询数据
 
 ```sql
-MySQL [test] select * from test_table;
+MySQL [test] > select * from test_table;
 +---------+------------+----------+-------------+---------+
 | user_id | date       | group_id | modify_date | keyword |
 +---------+------------+----------+-------------+---------+
@@ -296,4 +296,4 @@ Table test_tbl has sequence column, need to specify the sequence column
 
 2. 自版本 2.0 起,Doris 对 Unique Key 表的 Merge-on-Write 实现支持了部分列更新能力,在部分列更新导入中,用户每次可以只更新一部分列,因此并不是必须要包含 sequence 列。若用户提交的导入任务中,包含 sequence 列,则行为无影响;若用户提交的导入任务不包含 sequence 列,Doris 会使用匹配的历史数据中的 sequence 列作为更新后该行的 sequence 列的值。如果历史数据中不存在相同 key 的列,则会自动用 null 或默认值填充。
 
-3. 当出现并发导入时,Doris 会利用 MVCC 机制来保证数据的正确性。如果两批数据导入都更新了一个相同 key 的不同列,则其中系统版本较高的导入任务会在版本较低的导入任务成功后,使用版本较低的导入任务写入的相同 key 的数据行重新进行补齐。
\ No newline at end of file
+3. 当出现并发导入时,Doris 会利用 MVCC 机制来保证数据的正确性。如果两批数据导入都更新了一个相同 key 的不同列,则其中系统版本较高的导入任务会在版本较低的导入任务成功后,使用版本较低的导入任务写入的相同 key 的数据行重新进行补齐。
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/update/update-of-aggregate-model.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/update/update-of-aggregate-model.md
index 004a9bc01ff..fe2fbe3f98e 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/update/update-of-aggregate-model.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/update/update-of-aggregate-model.md
@@ -67,7 +67,11 @@ PROPERTIES (
 与前面例子相同,对应的 Stream Load 命令为(不需要额外的 header):
 
 ```shell
-curl  --location-trusted -u root: -H "column_separator:," -H "columns:order_id,order_status" -T /tmp/update.csv http://127.0.0.1:48037/api/db1/order_tbl/_stream_load
+$ cat update.csv
+
+1,To be shipped
+
+curl  --location-trusted -u root: -H "column_separator:," -H "columns:order_id,order_status" -T /tmp/update.csv http://127.0.0.1:8030/api/db1/order_tbl/_stream_load
 ```
 
 对应的`INSERT INTO`语句为(不需要额外设置 session variable):
@@ -80,4 +84,4 @@ INSERT INTO order_tbl (order_id, order_status) values (1,'待发货');
 
 Aggregate Key 模型在写入过程中不做任何额外处理,所以写入性能不受影响,与普通的数据导入相同。但是在查询时进行聚合的代价较大,典型的聚合查询性能相比 Unique Key 模型的 Merge-on-Write 实现会有 5-10 倍的下降。
 
-用户无法通过将某个字段由非 NULL 设置为 NULL,写入的 NULL 值在`REPLACE_IF_NOT_NULL`聚合函数的处理中会自动忽略。
\ No newline at end of file
+用户无法通过将某个字段由非 NULL 设置为 NULL,写入的 NULL 值在`REPLACE_IF_NOT_NULL`聚合函数的处理中会自动忽略。
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/update/update-of-unique-model.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/update/update-of-unique-model.md
index 19f1725a7c1..486b2841003 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/update/update-of-unique-model.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/update/update-of-unique-model.md
@@ -112,11 +112,11 @@ set enable_unique_key_partial_update=true
 若使用 StreamLoad 可以通过如下方式进行更新:
 
 ```sql
-$cat update.csv
+$ cat update.csv
 
 1,待发货
 
-$ curl  --location-trusted -u root: -H "partial_columns:true" -H "column_separator:," -H "columns:order_id,order_status" -T /tmp/update.csv http://127.0.0.1:48037/api/db1/order_tbl/_stream_load
+$ curl  --location-trusted -u root: -H "partial_columns:true" -H "column_separator:," -H "columns:order_id,order_status" -T /tmp/update.csv http://127.0.0.1:8030/api/db1/order_tbl/_stream_load
 ```
 
 若使用`INSRT INTO`可以通过如下方式进行更新:
@@ -153,4 +153,4 @@ INSERT INTO order_tbl (order_id, order_status) values (1,'待发货');
 
 目前,同一批次数据写入任务(无论是导入任务还是`INSERT INTO`)的所有行只能更新相同的列,如果需要更新不同列的数据,则需要分不同的批次进行写入。
 
-在未来版本中,将支持灵活的列更新,用户可以在同一批导入中,每一行更新不同的列。
\ No newline at end of file
+在未来版本中,将支持灵活的列更新,用户可以在同一批导入中,每一行更新不同的列。
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/update/unique-update-transaction.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/update/unique-update-transaction.md
index d8722313fee..ec4669817db 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/update/unique-update-transaction.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/update/unique-update-transaction.md
@@ -217,12 +217,12 @@ MySQL  desc test_table;
 导入如下数据
 
 ```Plain
-1       2020-02-22      1       2020-02-21      a
-1       2020-02-22      1       2020-02-22      b
-1       2020-02-22      1       2020-03-05      c
-1       2020-02-22      1       2020-02-26      d
-1       2020-02-22      1       2020-02-23      e
-1       2020-02-22      1       2020-02-24      b
+1      2020-02-22      1       2020-02-21      a
+1      2020-02-22      1       2020-02-22      b
+1      2020-02-22      1       2020-03-05      c
+1      2020-02-22      1       2020-02-26      d
+1      2020-02-22      1       2020-02-23      e
+1      2020-02-22      1       2020-02-24      b
 ```
 
 此处以 stream load 为例
@@ -234,7 +234,7 @@ curl --location-trusted -u root: -T testData http://host:port/api/test/test_tabl
 结果为
 
 ```sql
-MySQL  select * from test_table;
+MySQL > select * from test_table;
 +---------+------------+----------+-------------+---------+
 | user_id | date       | group_id | modify_date | keyword |
 +---------+------------+----------+-------------+---------+
@@ -249,14 +249,14 @@ MySQL  select * from test_table;
 上述步骤完成后,接着导入如下数据
 
 ```Plain
-1       2020-02-22      1       2020-02-22      a
-1       2020-02-22      1       2020-02-23      b
+1      2020-02-22      1       2020-02-22      a
+1      2020-02-22      1       2020-02-23      b
 ```
 
 查询数据
 
 ```sql
-MySQL [test] select * from test_table;
+MySQL [test] > select * from test_table;
 +---------+------------+----------+-------------+---------+
 | user_id | date       | group_id | modify_date | keyword |
 +---------+------------+----------+-------------+---------+
@@ -269,14 +269,14 @@ MySQL [test] select * from test_table;
 **4. 再尝试导入如下数据**
 
 ```Plain
-1       2020-02-22      1       2020-02-22      a
-1       2020-02-22      1       2020-03-23      w
+1      2020-02-22      1       2020-02-22      a
+1      2020-02-22      1       2020-03-23      w
 ```
 
 查询数据
 
 ```sql
-MySQL [test] select * from test_table;
+MySQL [test] > select * from test_table;
 +---------+------------+----------+-------------+---------+
 | user_id | date       | group_id | modify_date | keyword |
 +---------+------------+----------+-------------+---------+
@@ -296,4 +296,4 @@ Table test_tbl has sequence column, need to specify the sequence column
 
 2. 自版本 2.0 起,Doris 对 Unique Key 表的 Merge-on-Write 实现支持了部分列更新能力,在部分列更新导入中,用户每次可以只更新一部分列,因此并不是必须要包含 sequence 列。若用户提交的导入任务中,包含 sequence 列,则行为无影响;若用户提交的导入任务不包含 sequence 列,Doris 会使用匹配的历史数据中的 sequence 列作为更新后该行的 sequence 列的值。如果历史数据中不存在相同 key 的列,则会自动用 null 或默认值填充。
 
-3. 当出现并发导入时,Doris 会利用 MVCC 机制来保证数据的正确性。如果两批数据导入都更新了一个相同 key 的不同列,则其中系统版本较高的导入任务会在版本较低的导入任务成功后,使用版本较低的导入任务写入的相同 key 的数据行重新进行补齐。
\ No newline at end of file
+3. 当出现并发导入时,Doris 会利用 MVCC 机制来保证数据的正确性。如果两批数据导入都更新了一个相同 key 的不同列,则其中系统版本较高的导入任务会在版本较低的导入任务成功后,使用版本较低的导入任务写入的相同 key 的数据行重新进行补齐。
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/update/update-of-aggregate-model.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/update/update-of-aggregate-model.md
index 004a9bc01ff..fe2fbe3f98e 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/update/update-of-aggregate-model.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/update/update-of-aggregate-model.md
@@ -67,7 +67,11 @@ PROPERTIES (
 与前面例子相同,对应的 Stream Load 命令为(不需要额外的 header):
 
 ```shell
-curl  --location-trusted -u root: -H "column_separator:," -H "columns:order_id,order_status" -T /tmp/update.csv http://127.0.0.1:48037/api/db1/order_tbl/_stream_load
+$ cat update.csv
+
+1,To be shipped
+
+curl  --location-trusted -u root: -H "column_separator:," -H "columns:order_id,order_status" -T /tmp/update.csv http://127.0.0.1:8030/api/db1/order_tbl/_stream_load
 ```
 
 对应的`INSERT INTO`语句为(不需要额外设置 session variable):
@@ -80,4 +84,4 @@ INSERT INTO order_tbl (order_id, order_status) values (1,'待发货');
 
 Aggregate Key 模型在写入过程中不做任何额外处理,所以写入性能不受影响,与普通的数据导入相同。但是在查询时进行聚合的代价较大,典型的聚合查询性能相比 Unique Key 模型的 Merge-on-Write 实现会有 5-10 倍的下降。
 
-用户无法通过将某个字段由非 NULL 设置为 NULL,写入的 NULL 值在`REPLACE_IF_NOT_NULL`聚合函数的处理中会自动忽略。
\ No newline at end of file
+用户无法通过将某个字段由非 NULL 设置为 NULL,写入的 NULL 值在`REPLACE_IF_NOT_NULL`聚合函数的处理中会自动忽略。
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/update/update-of-unique-model.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/update/update-of-unique-model.md
index 19f1725a7c1..486b2841003 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/update/update-of-unique-model.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/update/update-of-unique-model.md
@@ -112,11 +112,11 @@ set enable_unique_key_partial_update=true
 若使用 StreamLoad 可以通过如下方式进行更新:
 
 ```sql
-$cat update.csv
+$ cat update.csv
 
 1,待发货
 
-$ curl  --location-trusted -u root: -H "partial_columns:true" -H "column_separator:," -H "columns:order_id,order_status" -T /tmp/update.csv http://127.0.0.1:48037/api/db1/order_tbl/_stream_load
+$ curl  --location-trusted -u root: -H "partial_columns:true" -H "column_separator:," -H "columns:order_id,order_status" -T /tmp/update.csv http://127.0.0.1:8030/api/db1/order_tbl/_stream_load
 ```
 
 若使用`INSRT INTO`可以通过如下方式进行更新:
@@ -153,4 +153,4 @@ INSERT INTO order_tbl (order_id, order_status) values (1,'待发货');
 
 目前,同一批次数据写入任务(无论是导入任务还是`INSERT INTO`)的所有行只能更新相同的列,如果需要更新不同列的数据,则需要分不同的批次进行写入。
 
-在未来版本中,将支持灵活的列更新,用户可以在同一批导入中,每一行更新不同的列。
\ No newline at end of file
+在未来版本中,将支持灵活的列更新,用户可以在同一批导入中,每一行更新不同的列。
diff --git a/versioned_docs/version-2.0/data-operate/update/unique-update-transaction.md b/versioned_docs/version-2.0/data-operate/update/unique-update-transaction.md
index 479f1d06e00..a6422a7994c 100644
--- a/versioned_docs/version-2.0/data-operate/update/unique-update-transaction.md
+++ b/versioned_docs/version-2.0/data-operate/update/unique-update-transaction.md
@@ -201,7 +201,7 @@ PROPERTIES(
 Table structure:
 
 ```sql
-MySQL  desc test_table;
+MySQL > desc test_table;
 +-------------+--------------+------+-------+---------+---------+
 | Field       | Type         | Null | Key   | Default | Extra   |
 +-------------+--------------+------+-------+---------+---------+
@@ -218,12 +218,12 @@ MySQL  desc test_table;
 Load the following data:
 
 ```Plain
-1       2020-02-22      1       2020-02-21      a
-1       2020-02-22      1       2020-02-22      b
-1       2020-02-22      1       2020-03-05      c
-1       2020-02-22      1       2020-02-26      d
-1       2020-02-22      1       2020-02-23      e
-1       2020-02-22      1       2020-02-24      b
+1      2020-02-22      1       2020-02-21      a
+1      2020-02-22      1       2020-02-22      b
+1      2020-02-22      1       2020-03-05      c
+1      2020-02-22      1       2020-02-26      d
+1      2020-02-22      1       2020-02-23      e
+1      2020-02-22      1       2020-02-24      b
 ```
 
 Here is an example using Stream Load:
@@ -235,7 +235,7 @@ curl --location-trusted -u root: -T testData http://host:port/api/test/test_tabl
 The result is:
 
 ```sql
-MySQL  select * from test_table;
+MySQL > select * from test_table;
 +---------+------------+----------+-------------+---------+
 | user_id | date       | group_id | modify_date | keyword |
 +---------+------------+----------+-------------+---------+
@@ -250,14 +250,14 @@ In the data load, because the value of the sequence column (i.e., modify_date) '
 After completing the above steps, load the following data:
 
 ```Plain
-1       2020-02-22      1       2020-02-22      a
-1       2020-02-22      1       2020-02-23      b
+1      2020-02-22      1       2020-02-22      a
+1      2020-02-22      1       2020-02-23      b
 ```
 
 Query the data:
 
 ```sql
-MySQL [test] select * from test_table;
+MySQL [test] > select * from test_table;
 +---------+------------+----------+-------------+---------+
 | user_id | date       | group_id | modify_date | keyword |
 +---------+------------+----------+-------------+---------+
@@ -270,14 +270,14 @@ In the loaded data, the sequence column (modify_date) of all previously loaded d
 **4. Try loading the following data again**
 
 ```Plain
-1       2020-02-22      1       2020-02-22      a
-1       2020-02-22      1       2020-03-23      w
+1      2020-02-22      1       2020-02-22      a
+1      2020-02-22      1       2020-03-23      w
 ```
 
 Query the data:
 
 ```sql
-MySQL [test] select * from test_table;
+MySQL [test] > select * from test_table;
 +---------+------------+----------+-------------+---------+
 | user_id | date       | group_id | modify_date | keyword |
 +---------+------------+----------+-------------+---------+
@@ -297,4 +297,4 @@ Table test_tbl has sequence column, need to specify the sequence column
 
 2. Since version 2.0, Doris has supported partial column updates for Merge-on-Write implementation of Unique Key tables. In partial column update, users can update only a subset of columns each time, so it is not necessary to include the sequence column. If the loading task submitted by the user includes the sequence column, it has no effect. If the loading task submitted by the user does not include the sequence column, Doris will use the value of the matching sequence column from the h [...]
 
-3. In cases of concurrent data load, Doris utilizes MVCC (Multi-Version Concurrency Control) mechanism to ensure data correctness. If two batches of loaded data update different columns of the same key, the load task with a higher system version will reapply the data for the same key written by the load task with a lower version after the lower version load task succeeds.
\ No newline at end of file
+3. In cases of concurrent data load, Doris utilizes MVCC (Multi-Version Concurrency Control) mechanism to ensure data correctness. If two batches of loaded data update different columns of the same key, the load task with a higher system version will reapply the data for the same key written by the load task with a lower version after the lower version load task succeeds.
diff --git a/versioned_docs/version-2.0/data-operate/update/update-of-aggregate-model.md b/versioned_docs/version-2.0/data-operate/update/update-of-aggregate-model.md
index 373a890ad49..1a7dedcad7b 100644
--- a/versioned_docs/version-2.0/data-operate/update/update-of-aggregate-model.md
+++ b/versioned_docs/version-2.0/data-operate/update/update-of-aggregate-model.md
@@ -68,8 +68,12 @@ For Stream Load, Broker Load, Routine Load, or INSERT INTO, you can directly wri
 
 Using the same example as above, the corresponding Stream Load command would be (no additional headers required):
 
-```Plain
-curl  --location-trusted -u root: -H "column_separator:," -H "columns:order_id,order_status" -T /tmp/update.csv http://127.0.0.1:48037/api/db1/order_tbl/_stream_load
+```shell
+$ cat update.csv
+
+1,To be shipped
+
+$ curl  --location-trusted -u root: -H "column_separator:," -H "columns:order_id,order_status" -T /tmp/update.csv http://127.0.0.1:8030/api/db1/order_tbl/_stream_load
 ```
 
 The corresponding `INSERT INTO` statement would be (no additional session variables required):
diff --git a/versioned_docs/version-2.0/data-operate/update/update-of-unique-model.md b/versioned_docs/version-2.0/data-operate/update/update-of-unique-model.md
index d5090219e20..ec29714c869 100644
--- a/versioned_docs/version-2.0/data-operate/update/update-of-unique-model.md
+++ b/versioned_docs/version-2.0/data-operate/update/update-of-unique-model.md
@@ -116,7 +116,7 @@ $ cat update.csv
 
 1,To be shipped
 
-$ curl --location-trusted -u root: -H "partial_columns:true" -H "column_separator:," -H "columns:order_id,order_status" -T /tmp/update.csv http://127.0.0.1:48037/api/db1/order_tbl/_stream_load
+$ curl --location-trusted -u root: -H "partial_columns:true" -H "column_separator:," -H "columns:order_id,order_status" -T /tmp/update.csv http://127.0.0.1:8030/api/db1/order_tbl/_stream_load
 ```
 
 If you are using `INSERT INTO`, you can update as following methods:
@@ -155,4 +155,4 @@ Suggestions for improving load performance:
 
 Now, all rows in a batch write task (whether it is an load task or `INSERT INTO`) can only update the same columns. If you need to update different columns, you will need to perform separate batch writes.
 
-In the future, flexible column updates will be supported, allowing users to update different columns for each row within the same batch load.
\ No newline at end of file
+In the future, flexible column updates will be supported, allowing users to update different columns for each row within the same batch load.
diff --git a/versioned_docs/version-2.1/data-operate/update/unique-update-transaction.md b/versioned_docs/version-2.1/data-operate/update/unique-update-transaction.md
index 479f1d06e00..a6422a7994c 100644
--- a/versioned_docs/version-2.1/data-operate/update/unique-update-transaction.md
+++ b/versioned_docs/version-2.1/data-operate/update/unique-update-transaction.md
@@ -201,7 +201,7 @@ PROPERTIES(
 Table structure:
 
 ```sql
-MySQL  desc test_table;
+MySQL > desc test_table;
 +-------------+--------------+------+-------+---------+---------+
 | Field       | Type         | Null | Key   | Default | Extra   |
 +-------------+--------------+------+-------+---------+---------+
@@ -218,12 +218,12 @@ MySQL  desc test_table;
 Load the following data:
 
 ```Plain
-1       2020-02-22      1       2020-02-21      a
-1       2020-02-22      1       2020-02-22      b
-1       2020-02-22      1       2020-03-05      c
-1       2020-02-22      1       2020-02-26      d
-1       2020-02-22      1       2020-02-23      e
-1       2020-02-22      1       2020-02-24      b
+1      2020-02-22      1       2020-02-21      a
+1      2020-02-22      1       2020-02-22      b
+1      2020-02-22      1       2020-03-05      c
+1      2020-02-22      1       2020-02-26      d
+1      2020-02-22      1       2020-02-23      e
+1      2020-02-22      1       2020-02-24      b
 ```
 
 Here is an example using Stream Load:
@@ -235,7 +235,7 @@ curl --location-trusted -u root: -T testData http://host:port/api/test/test_tabl
 The result is:
 
 ```sql
-MySQL  select * from test_table;
+MySQL > select * from test_table;
 +---------+------------+----------+-------------+---------+
 | user_id | date       | group_id | modify_date | keyword |
 +---------+------------+----------+-------------+---------+
@@ -250,14 +250,14 @@ In the data load, because the value of the sequence column (i.e., modify_date) '
 After completing the above steps, load the following data:
 
 ```Plain
-1       2020-02-22      1       2020-02-22      a
-1       2020-02-22      1       2020-02-23      b
+1      2020-02-22      1       2020-02-22      a
+1      2020-02-22      1       2020-02-23      b
 ```
 
 Query the data:
 
 ```sql
-MySQL [test] select * from test_table;
+MySQL [test] > select * from test_table;
 +---------+------------+----------+-------------+---------+
 | user_id | date       | group_id | modify_date | keyword |
 +---------+------------+----------+-------------+---------+
@@ -270,14 +270,14 @@ In the loaded data, the sequence column (modify_date) of all previously loaded d
 **4. Try loading the following data again**
 
 ```Plain
-1       2020-02-22      1       2020-02-22      a
-1       2020-02-22      1       2020-03-23      w
+1      2020-02-22      1       2020-02-22      a
+1      2020-02-22      1       2020-03-23      w
 ```
 
 Query the data:
 
 ```sql
-MySQL [test] select * from test_table;
+MySQL [test] > select * from test_table;
 +---------+------------+----------+-------------+---------+
 | user_id | date       | group_id | modify_date | keyword |
 +---------+------------+----------+-------------+---------+
@@ -297,4 +297,4 @@ Table test_tbl has sequence column, need to specify the sequence column
 
 2. Since version 2.0, Doris has supported partial column updates for Merge-on-Write implementation of Unique Key tables. In partial column update, users can update only a subset of columns each time, so it is not necessary to include the sequence column. If the loading task submitted by the user includes the sequence column, it has no effect. If the loading task submitted by the user does not include the sequence column, Doris will use the value of the matching sequence column from the h [...]
 
-3. In cases of concurrent data load, Doris utilizes MVCC (Multi-Version Concurrency Control) mechanism to ensure data correctness. If two batches of loaded data update different columns of the same key, the load task with a higher system version will reapply the data for the same key written by the load task with a lower version after the lower version load task succeeds.
\ No newline at end of file
+3. In cases of concurrent data load, Doris utilizes MVCC (Multi-Version Concurrency Control) mechanism to ensure data correctness. If two batches of loaded data update different columns of the same key, the load task with a higher system version will reapply the data for the same key written by the load task with a lower version after the lower version load task succeeds.
diff --git a/versioned_docs/version-2.1/data-operate/update/update-of-aggregate-model.md b/versioned_docs/version-2.1/data-operate/update/update-of-aggregate-model.md
index 373a890ad49..1a7dedcad7b 100644
--- a/versioned_docs/version-2.1/data-operate/update/update-of-aggregate-model.md
+++ b/versioned_docs/version-2.1/data-operate/update/update-of-aggregate-model.md
@@ -68,8 +68,12 @@ For Stream Load, Broker Load, Routine Load, or INSERT INTO, you can directly wri
 
 Using the same example as above, the corresponding Stream Load command would be (no additional headers required):
 
-```Plain
-curl  --location-trusted -u root: -H "column_separator:," -H "columns:order_id,order_status" -T /tmp/update.csv http://127.0.0.1:48037/api/db1/order_tbl/_stream_load
+```shell
+$ cat update.csv
+
+1,To be shipped
+
+$ curl  --location-trusted -u root: -H "column_separator:," -H "columns:order_id,order_status" -T /tmp/update.csv http://127.0.0.1:8030/api/db1/order_tbl/_stream_load
 ```
 
 The corresponding `INSERT INTO` statement would be (no additional session variables required):
diff --git a/versioned_docs/version-2.1/data-operate/update/update-of-unique-model.md b/versioned_docs/version-2.1/data-operate/update/update-of-unique-model.md
index d5090219e20..ec29714c869 100644
--- a/versioned_docs/version-2.1/data-operate/update/update-of-unique-model.md
+++ b/versioned_docs/version-2.1/data-operate/update/update-of-unique-model.md
@@ -116,7 +116,7 @@ $ cat update.csv
 
 1,To be shipped
 
-$ curl --location-trusted -u root: -H "partial_columns:true" -H "column_separator:," -H "columns:order_id,order_status" -T /tmp/update.csv http://127.0.0.1:48037/api/db1/order_tbl/_stream_load
+$ curl --location-trusted -u root: -H "partial_columns:true" -H "column_separator:," -H "columns:order_id,order_status" -T /tmp/update.csv http://127.0.0.1:8030/api/db1/order_tbl/_stream_load
 ```
 
 If you are using `INSERT INTO`, you can update as following methods:
@@ -155,4 +155,4 @@ Suggestions for improving load performance:
 
 Now, all rows in a batch write task (whether it is an load task or `INSERT INTO`) can only update the same columns. If you need to update different columns, you will need to perform separate batch writes.
 
-In the future, flexible column updates will be supported, allowing users to update different columns for each row within the same batch load.
\ No newline at end of file
+In the future, flexible column updates will be supported, allowing users to update different columns for each row within the same batch load.
diff --git a/versioned_docs/version-3.0/data-operate/update/unique-update-transaction.md b/versioned_docs/version-3.0/data-operate/update/unique-update-transaction.md
index 479f1d06e00..a6422a7994c 100644
--- a/versioned_docs/version-3.0/data-operate/update/unique-update-transaction.md
+++ b/versioned_docs/version-3.0/data-operate/update/unique-update-transaction.md
@@ -201,7 +201,7 @@ PROPERTIES(
 Table structure:
 
 ```sql
-MySQL  desc test_table;
+MySQL > desc test_table;
 +-------------+--------------+------+-------+---------+---------+
 | Field       | Type         | Null | Key   | Default | Extra   |
 +-------------+--------------+------+-------+---------+---------+
@@ -218,12 +218,12 @@ MySQL  desc test_table;
 Load the following data:
 
 ```Plain
-1       2020-02-22      1       2020-02-21      a
-1       2020-02-22      1       2020-02-22      b
-1       2020-02-22      1       2020-03-05      c
-1       2020-02-22      1       2020-02-26      d
-1       2020-02-22      1       2020-02-23      e
-1       2020-02-22      1       2020-02-24      b
+1      2020-02-22      1       2020-02-21      a
+1      2020-02-22      1       2020-02-22      b
+1      2020-02-22      1       2020-03-05      c
+1      2020-02-22      1       2020-02-26      d
+1      2020-02-22      1       2020-02-23      e
+1      2020-02-22      1       2020-02-24      b
 ```
 
 Here is an example using Stream Load:
@@ -235,7 +235,7 @@ curl --location-trusted -u root: -T testData http://host:port/api/test/test_tabl
 The result is:
 
 ```sql
-MySQL  select * from test_table;
+MySQL > select * from test_table;
 +---------+------------+----------+-------------+---------+
 | user_id | date       | group_id | modify_date | keyword |
 +---------+------------+----------+-------------+---------+
@@ -250,14 +250,14 @@ In the data load, because the value of the sequence column (i.e., modify_date) '
 After completing the above steps, load the following data:
 
 ```Plain
-1       2020-02-22      1       2020-02-22      a
-1       2020-02-22      1       2020-02-23      b
+1      2020-02-22      1       2020-02-22      a
+1      2020-02-22      1       2020-02-23      b
 ```
 
 Query the data:
 
 ```sql
-MySQL [test] select * from test_table;
+MySQL [test] > select * from test_table;
 +---------+------------+----------+-------------+---------+
 | user_id | date       | group_id | modify_date | keyword |
 +---------+------------+----------+-------------+---------+
@@ -270,14 +270,14 @@ In the loaded data, the sequence column (modify_date) of all previously loaded d
 **4. Try loading the following data again**
 
 ```Plain
-1       2020-02-22      1       2020-02-22      a
-1       2020-02-22      1       2020-03-23      w
+1      2020-02-22      1       2020-02-22      a
+1      2020-02-22      1       2020-03-23      w
 ```
 
 Query the data:
 
 ```sql
-MySQL [test] select * from test_table;
+MySQL [test] > select * from test_table;
 +---------+------------+----------+-------------+---------+
 | user_id | date       | group_id | modify_date | keyword |
 +---------+------------+----------+-------------+---------+
@@ -297,4 +297,4 @@ Table test_tbl has sequence column, need to specify the sequence column
 
 2. Since version 2.0, Doris has supported partial column updates for Merge-on-Write implementation of Unique Key tables. In partial column update, users can update only a subset of columns each time, so it is not necessary to include the sequence column. If the loading task submitted by the user includes the sequence column, it has no effect. If the loading task submitted by the user does not include the sequence column, Doris will use the value of the matching sequence column from the h [...]
 
-3. In cases of concurrent data load, Doris utilizes MVCC (Multi-Version Concurrency Control) mechanism to ensure data correctness. If two batches of loaded data update different columns of the same key, the load task with a higher system version will reapply the data for the same key written by the load task with a lower version after the lower version load task succeeds.
\ No newline at end of file
+3. In cases of concurrent data load, Doris utilizes MVCC (Multi-Version Concurrency Control) mechanism to ensure data correctness. If two batches of loaded data update different columns of the same key, the load task with a higher system version will reapply the data for the same key written by the load task with a lower version after the lower version load task succeeds.
diff --git a/versioned_docs/version-3.0/data-operate/update/update-of-aggregate-model.md b/versioned_docs/version-3.0/data-operate/update/update-of-aggregate-model.md
index 945bc013c47..4b5ef906756 100644
--- a/versioned_docs/version-3.0/data-operate/update/update-of-aggregate-model.md
+++ b/versioned_docs/version-3.0/data-operate/update/update-of-aggregate-model.md
@@ -68,8 +68,12 @@ For Stream Load, Broker Load, Routine Load, or INSERT INTO, you can directly wri
 
 Using the same example as above, the corresponding Stream Load command would be (no additional headers required):
 
-```Plain
-curl  --location-trusted -u root: -H "column_separator:," -H "columns:order_id,order_status" -T /tmp/update.csv http://127.0.0.1:48037/api/db1/order_tbl/_stream_load
+```shell
+$ cat update.csv
+
+1,To be shipped
+
+$ curl  --location-trusted -u root: -H "column_separator:," -H "columns:order_id,order_status" -T /tmp/update.csv http://127.0.0.1:8030/api/db1/order_tbl/_stream_load
 ```
 
 The corresponding `INSERT INTO` statement would be (no additional session variables required):
diff --git a/versioned_docs/version-3.0/data-operate/update/update-of-unique-model.md b/versioned_docs/version-3.0/data-operate/update/update-of-unique-model.md
index d5090219e20..ec29714c869 100644
--- a/versioned_docs/version-3.0/data-operate/update/update-of-unique-model.md
+++ b/versioned_docs/version-3.0/data-operate/update/update-of-unique-model.md
@@ -116,7 +116,7 @@ $ cat update.csv
 
 1,To be shipped
 
-$ curl --location-trusted -u root: -H "partial_columns:true" -H "column_separator:," -H "columns:order_id,order_status" -T /tmp/update.csv http://127.0.0.1:48037/api/db1/order_tbl/_stream_load
+$ curl --location-trusted -u root: -H "partial_columns:true" -H "column_separator:," -H "columns:order_id,order_status" -T /tmp/update.csv http://127.0.0.1:8030/api/db1/order_tbl/_stream_load
 ```
 
 If you are using `INSERT INTO`, you can update as following methods:
@@ -155,4 +155,4 @@ Suggestions for improving load performance:
 
 Now, all rows in a batch write task (whether it is an load task or `INSERT INTO`) can only update the same columns. If you need to update different columns, you will need to perform separate batch writes.
 
-In the future, flexible column updates will be supported, allowing users to update different columns for each row within the same batch load.
\ No newline at end of file
+In the future, flexible column updates will be supported, allowing users to update different columns for each row within the same batch load.

