This is an automated email from the ASF dual-hosted git repository.

luzhijing pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git


The following commit(s) were added to refs/heads/master by this push:
     new 97b91a0d18 [doc] Remove contents related to the MySQL Load function in 1.2 version (#803)
97b91a0d18 is described below

commit 97b91a0d188cd5ee601410244db2dc3c966a170d
Author: lishiqi_amy <amylee9...@163.com>
AuthorDate: Sat Jun 29 18:28:43 2024 +0800

    [doc] Remove contents related to the MySQL Load function in 1.2 version (#803)
---
 .../import/import-scenes/local-file-load.md        |  64 +--------
 .../import/import-way/mysql-load-manual.md         | 155 ---------------------
 .../version-1.2/data-operate/import/load-manual.md |  34 +++--
 .../import/import-scenes/local-file-load.md        |  59 +-------
 .../import/import-way/mysql-load-manual.md         | 152 --------------------
 .../version-1.2/data-operate/import/load-manual.md |   1 -
 versioned_sidebars/version-1.2-sidebars.json       |   1 -
 7 files changed, 27 insertions(+), 439 deletions(-)

diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/data-operate/import/import-scenes/local-file-load.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/data-operate/import/import-scenes/local-file-load.md
index be023fe354..83eaf9f3bf 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/data-operate/import/import-scenes/local-file-load.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/data-operate/import/import-scenes/local-file-load.md
@@ -28,11 +28,10 @@ under the License.
 # 导入本地数据
 本文档主要介绍如何从客户端导入本地的数据。
 
-目前Doris支持两种从本地导入数据的模式:
-1. [Stream Load](../import-way/stream-load-manual.md)
-2. [MySql Load](../import-way/mysql-load-manual.md)
+目前 Doris 支持从本地导入数据的模式为 [Stream Load](../import-way/stream-load-manual.md)。下文介绍了使用 Stream Load 导入本地数据的方法。
+
+## 使用 Stream Load 导入本地数据
 
-## Stream Load
 Stream Load 用于将本地文件导入到 Doris 中。
 
 不同于其他命令的提交方式,Stream Load 是通过 HTTP 协议与 Doris 进行连接交互的。
@@ -44,7 +43,7 @@ Stream Load 用于将本地文件导入到 Doris 中。
 
 本文文档我们以 [curl](https://curl.se/docs/manpage.html) 命令为例演示如何进行数据导入。
 
-文档最后,我们给出一个使用 Java 导入数据的代码示例
+文档最后,我们给出一个使用 Java 导入数据的代码示例。
 
 ### 导入数据
 
@@ -78,14 +77,14 @@ PUT /api/{db}/{table}/_stream_load
    ```
 
    - user:passwd 为在 Doris 中创建的用户。初始用户为 admin / root,密码初始状态下为空。
-   - host:port 为 BE 的 HTTP 协议端口,默认是 8040,可以在 Doris 集群 WEB UI页面查看。
+   - host:port 为 BE 的 HTTP 协议端口,默认是 8040,可以在 Doris 集群 WEB UI 页面查看。
    - label: 可以在 Header 中指定 Label 唯一标识这个导入任务。
 
   关于 Stream Load 命令的更多高级操作,请参阅 [Stream Load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.md) 命令文档。
 
 3. 等待导入结果
 
-   Stream Load 命令是同步命令,返回成功即表示导入成功。如果导入数据较大,可能需要较长的等待时间。示例如下:
+   Stream Load 命令是同步命令,返回成功即表示导入成功。如果导入数据较大,可能需要较长的等待时间。示例如下:
 
    ```json
    {
@@ -217,7 +216,7 @@ public class DorisStreamLoader {
 
 
 
->注意:这里 http client 的版本要是4.5.13
+>注意:这里 http client 的版本要是 4.5.13
 > ```xml
 ><dependency>
 >    <groupId>org.apache.httpcomponents</groupId>
@@ -225,52 +224,3 @@ public class DorisStreamLoader {
 >    <version>4.5.13</version>
 ></dependency>
 > ```
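
The note above pins the Java example to HttpClient 4.5.13. As a rough, illustrative companion, the sketch below issues the same kind of `PUT /api/{db}/{table}/_stream_load` request against a BE; the host, port, credentials, label, column separator and file path are placeholder assumptions, not values taken from this commit.

```java
import java.io.File;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

import org.apache.http.HttpHeaders;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPut;
import org.apache.http.entity.ContentType;
import org.apache.http.entity.FileEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.DefaultRedirectStrategy;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class StreamLoadSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder BE address, database and table -- replace with your own values.
        String url = "http://127.0.0.1:8040/api/demo/load_local_file_test/_stream_load";
        String auth = Base64.getEncoder()
                .encodeToString("root:".getBytes(StandardCharsets.UTF_8));

        // Allow HttpClient to follow redirects on PUT, in case the request is answered with one.
        try (CloseableHttpClient client = HttpClients.custom()
                .setRedirectStrategy(new DefaultRedirectStrategy() {
                    @Override
                    protected boolean isRedirectable(String method) {
                        return true;
                    }
                })
                .build()) {

            HttpPut put = new HttpPut(url);
            put.setHeader(HttpHeaders.EXPECT, "100-continue");
            put.setHeader(HttpHeaders.AUTHORIZATION, "Basic " + auth);
            put.setHeader("label", "local_file_load_" + System.currentTimeMillis()); // unique label per load
            put.setHeader("column_separator", ",");
            put.setEntity(new FileEntity(new File("/path/to/local/demo.csv"), ContentType.TEXT_PLAIN));

            try (CloseableHttpResponse response = client.execute(put)) {
                // Stream Load is synchronous: the returned JSON body reports whether the load succeeded.
                System.out.println(EntityUtils.toString(response.getEntity()));
            }
        }
    }
}
```

The overridden redirect strategy only exists so that HttpClient will re-send the PUT body when the node answers with a redirect, which is how Stream Load behaves when the request first lands on an FE.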
-
-## MySql LOAD
-<version since="dev">
-    MySql LOAD样例
-</version>
-
-### 导入数据
-1. 创建一张表
-
-   通过 `CREATE TABLE` 命令在`demo`创建一张表用于存储待导入的数据
-
-   ```sql
-   CREATE TABLE IF NOT EXISTS load_local_file_test
-   (
-   id INT,
-   age TINYINT,
-   name VARCHAR(50)
-   )
-   unique key(id)
-   DISTRIBUTED BY HASH(id) BUCKETS 3;
-   ```
-
-2. 导入数据
-   在MySql客户端下执行以下 SQL 命令导入本地文件:
-
-   ```sql
-   LOAD DATA
-   LOCAL
-   INFILE '/path/to/local/demo.txt'
-   INTO TABLE demo.load_local_file_test
-   ```
-
-   关于 MySQL Load 命令的更多高级操作,请参阅 [MySQL Load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/MYSQL-LOAD.md) 命令文档。
-
-3. 等待导入结果
-
-   MySql Load 命令是同步命令,返回成功即表示导入成功。如果导入数据较大,可能需要较长的等待时间。示例如下:
-
-   ```text
-   Query OK, 1 row affected (0.17 sec)
-   Records: 1  Deleted: 0  Skipped: 0  Warnings: 0
-   ```
-
-   - 如果出现上述结果, 则表示导入成功。导入失败, 会抛出错误,并在客户端显示错误原因
-   - 其他字段的详细介绍,请参阅 [MySQL Load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/MYSQL-LOAD.md) 命令文档。
-
-### 导入建议
-- MySql Load 只能导入本地文件(可以是客户端本地或者连接的FE节点本地), 而且支持CSV格式。
-- 建议一个导入请求的数据量控制在 1 - 2 GB 以内。如果有大量本地文件,可以分批并发提交。
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/data-operate/import/import-way/mysql-load-manual.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/data-operate/import/import-way/mysql-load-manual.md
deleted file mode 100644
index 8c316d599f..0000000000
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/data-operate/import/import-way/mysql-load-manual.md
+++ /dev/null
@@ -1,155 +0,0 @@
----
-{
-    "title": "MySql load",
-    "language": "zh-CN"
-}
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-# MySql load
-<version since="dev">
-
-该语句兼容MySQL标准的[LOAD DATA](https://dev.mysql.com/doc/refman/8.0/en/load-data.html)语法,方便用户导入本地数据,并降低学习成本。
-
-MySql load 同步执行导入并返回导入结果。用户可直接通过SQL返回信息判断本次导入是否成功。
-
-MySql load 主要适用于导入客户端本地文件,或通过程序导入数据流中的数据。
-
-</version>
-
-## 基本原理
-
-MySql Load和Stream Load功能相似, 都是导入本地文件到Doris集群中, 因此MySQL Load实现复用了StreamLoad的基础导入能力:
-
-1. FE接收到客户端执行的MySQL Load请求, 完成SQL解析工作
-
-2. FE将Load请求拆解,并封装为StreamLoad的请求.
-
-3. FE选择一个BE节点发送StreamLoad请求
-
-4. 发送请求的同时, FE会异步且流式的从MySQL客户端读取本地文件数据, 并实时的发送到StreamLoad的HTTP请求中.
-
-5. MySQL客户端数据传输完毕, FE等待StreamLoad完成, 并展示导入成功或者失败的信息给客户端.
-
-
-## 支持数据格式
-
-MySQL Load 支持数据格式:CSV(文本)。
-
-## 基本操作举例
-
-### 客户端连接
-```bash
-mysql --local-infile  -h 127.0.0.1 -P 9030 -u root -D testdb
-```
-
-注意: 执行MySQL Load语句的时候, 客户端命令必须带有`--local-infile`, 否则执行可能会出现错误. 如果是通过JDBC方式连接的话, 需要在URL中需要加入配置`allowLoadLocalInfile=true`
-
-
-### 创建测试表
-```sql
-CREATE TABLE testdb.t1 (pk INT, v1 INT SUM) AGGREGATE KEY (pk) DISTRIBUTED BY hash (pk) PROPERTIES ('replication_num' = '1');
-```
-
-### 导入客户端文件
-假设在客户端本地当前路径上有一个CSV文件, 名为`client_local.csv`, 使用MySQL LOAD语法将表导入到测试表`testdb.t1`中.
-
-```sql
-LOAD DATA LOCAL
-INFILE 'client_local.csv'
-INTO TABLE testdb.t1
-PARTITION (partition_a, partition_b, partition_c, partition_d)
-COLUMNS TERMINATED BY '\t'
-LINES TERMINATED BY '\n'
-IGNORE 1 LINES
-(k1, k2, v2, v10, v11)
-set (c1=k1,c2=k2,c3=v10,c4=v11)
-PROPERTIES ("strict_mode"="true")
-```
-1. MySQL Load以语法`LOAD DATA`开头, 指定`LOCAL`表示读取客户端文件.
-2. `INFILE`内填写本地文件路径, 可以是相对路径, 也可以是绝对路径.目前只支持单个文件, 不支持多个文件
-3. `INTO TABLE`的表名可以指定数据库名, 如案例所示. 也可以省略, 则会使用当前用户所在的数据库.
-4. `PARTITION`语法支持指定分区导入
-5. `COLUMNS TERMINATED BY`指定列分隔符
-6. `LINES TERMINATED BY`指定行分隔符
-7. `IGNORE num LINES`用户跳过CSV的num表头.
-8. 列映射语法, 具体参数详见[导入的数据转换](../import-scenes/load-data-convert.md) 的列映射章节
-9. `PROPERTIES`导入参数, 具体参数详见[MySQL Load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/MYSQL-LOAD.md) 命令手册。
-
-### 导入服务端文件
-假设在FE节点上的`/root/server_local.csv`路径为一个CSV文件, 使用MySQL客户端连接对应的FE节点, 然后执行一下命令将数据导入到测试表中.
-
-```sql
-LOAD DATA
-INFILE '/root/server_local.csv'
-INTO TABLE testdb.t1
-PARTITION (partition_a, partition_b, partition_c, partition_d)
-COLUMNS TERMINATED BY '\t'
-LINES TERMINATED BY '\n'
-IGNORE 1 LINES
-(k1, k2, v2, v10, v11)
-set (c1=k1,c2=k2,c3=v10,c4=v11)
-PROPERTIES ("strict_mode"="true")
-```
-1. 导入服务端本地文件的语法和导入客户端语法的唯一区别是`LOAD DATA`关键词后面是否加入`LOCAL`关键字.
-2. FE为多节点部署, 导入服务端文件功能只能够导入客户端连接的FE节点, 无法导入其他FE节点本地的文件.
-3. 服务端导入默认是关闭, 通过设置FE的配置`mysql_load_server_secure_path`开启, 导入文件的必须在该目录下.
-
-### 返回结果
-
-由于 MySQL load 是一种同步的导入方式,所以导入的结果会通过SQL语法返回给用户。
-如果导入执行失败, 会展示具体的报错信息. 如果导入成功, 则会显示导入的行数.
-
-```text
-Query OK, 1 row affected (0.17 sec)
-Records: 1  Deleted: 0  Skipped: 0  Warnings: 0
-```
-
-### 异常结果
-如果执行出现异常, 会在客户端中出现如下异常显示
-```text
-ERROR 1105 (HY000): errCode = 2, detailMessage = [INTERNAL_ERROR]too many filtered rows with load id b612907c-ccf4-4ac2-82fe-107ece655f0f
-```
-
-当遇到这类异常错误, 可以找到其中的`loadId`, 可以通过`show load warnings`命令在客户端中展示详细的异常信息.
-```sql
-show load warnings where label='b612907c-ccf4-4ac2-82fe-107ece655f0f';
-```
-
-异常信息中的LoadId即为Warning命令中的label字段.
-
-
-### 配置项
-1. `mysql_load_thread_pool`控制单个FE中MySQL Load并发执行线程个数, 默认为4. 线程池的排队队列大小为`mysql_load_thread_pool`的5倍, 因此默认情况下, 可以并发提交的任务为 4 + 4\*5 = 24个. 如果并发个数超过24时, 可以调大该配置项.
-2. `mysql_load_server_secure_path`服务端导入的安全路径, 默认为空, 即不允许服务端导入. 如需开启这个功能, 建议在`DORIS_HOME`目录下创建一个`local_import_data`目录, 用于导入数据.
-3. `mysql_load_in_memory_record`失败的任务记录个数, 该记录会保留在内存中, 默认只会保留最近的20. 如果有需要可以调大该配置. 在内存中的记录, 有效期为1天, 异步清理线程会固定一天清理一次过期数据.
-
-
-## 注意事项
-
-1. 如果客户端出现`LOAD DATA LOCAL INFILE file request rejected due to restrictions on access`错误, 需要用`mysql  --local-infile=1`命令来打开客户端的导入功能.
-
-2. MySQL Load的导入会受到StreamLoad的配置项限制, 例如BE支持的StreamLoad最大文件量受`streaming_load_max_mb`控制, 默认为10GB.
-
-## 更多帮助
-
-1. 关于 MySQL Load 使用的更多详细语法及最佳实践,请参阅 [MySQL Load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/MYSQL-LOAD.md) 命令手册。
-
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/data-operate/import/load-manual.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/data-operate/import/load-manual.md
index 077044c980..0f1d36f838 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/data-operate/import/load-manual.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/data-operate/import/load-manual.md
@@ -34,12 +34,12 @@ Doris 提供多种数据导入方案,可以针对不同的数据源进行选
 
 | 数据源                               | 导入方式                                                     |
 |-----------------------------------|----------------------------------------------------------|
-| 对象存储(s3),HDFS                     | [使用Broker导入数据](./import-scenes/external-storage-load.md) |
+| 对象存储(S3),HDFS                     | [使用 Broker 导入数据](./import-scenes/external-storage-load.md) |
 | 本地文件                              | [导入本地数据](./import-scenes/local-file-load.md)             |
-| Kafka                             | [订阅Kafka数据](./import-scenes/kafka-load.md)               |
-| Mysql、PostgreSQL,Oracle,SQLServer | [通过外部表同步数据](./import-scenes/external-table-load.md)      |
-| 通过JDBC导入                          | [使用JDBC同步数据](./import-scenes/jdbc-load.md)               |
-| 导入JSON格式数据                        | [JSON格式数据导入](./import-way/load-json-format.md)           |
+| Kafka                             | [订阅 Kafka 数据](./import-scenes/kafka-load.md)               |
+| Mysql,PostgreSQL,Oracle,SQLServer | [通过外部表同步数据](./import-scenes/external-table-load.md)      |
+| 通过 JDBC 导入                          | [使用 JDBC 同步数据](./import-scenes/jdbc-load.md)               |
+| 导入 JSON 格式数据                        | [JSON 格式数据导入](./import-way/load-json-format.md)           |
 | MySQL Binlog                      | [Binlog Load](./import-way/binlog-load-manual.md)        |
 | AutoMQ                            | [AutoMQ Load](../../ecosystem/automq-load.md)            |
 
@@ -47,14 +47,13 @@ Doris 提供多种数据导入方案,可以针对不同的数据源进行选
 
 | 导入方式名称 | 使用方式                                                     |
 | ------------ | ------------------------------------------------------------ |
-| Spark Load   | [通过Spark导入外部数据](./import-way/spark-load-manual.md) |
-| Broker Load  | [通过Broker导入外部存储数据](./import-way/broker-load-manual.md) |
-| Stream Load  | [流式导入数据(本地文件及内存数据)](./import-way/stream-load-manual.md) |
-| Routine Load | [导入Kafka数据](./import-way/routine-load-manual.md)       |
-| Binlog Load  | [采集Mysql Binlog 导入数据](./import-way/binlog-load-manual.md) |
-| Insert Into  | [外部表通过INSERT方式导入数据](./import-way/insert-into-manual.md) |
-| S3 Load      | [S3协议的对象存储数据导入](./import-way/s3-load-manual.md) |
-| MySQL Load   | [MySQL客户端导入本地数据](./import-way/mysql-load-manual.md) |
+| Spark Load   | [通过 Spark 导入外部数据](./import-way/spark-load-manual.md) |
+| Broker Load  | [通过 Broker 导入外部存储数据](./import-way/broker-load-manual.md) |
+| Stream Load  | [流式导入数据 (本地文件及内存数据)](./import-way/stream-load-manual.md) |
+| Routine Load | [导入 Kafka 数据](./import-way/routine-load-manual.md)       |
+| Binlog Load  | [采集 MySQL Binlog 导入数据](./import-way/binlog-load-manual.md) |
+| Insert Into  | [外部表通过 INSERT 方式导入数据](./import-way/insert-into-manual.md) |
+| S3 Load      | [S3 协议的对象存储数据导入](./import-way/s3-load-manual.md) |
 
 ## 支持的数据格式
 
@@ -85,13 +84,13 @@ Label 是用于保证对应的导入作业,仅能成功导入一次。一个
 
 导入方式分为同步和异步。对于同步导入方式,返回结果即表示导入成功还是失败。而对于异步导入方式,返回成功仅代表作业提交成功,不代表数据导入成功,需要使用对应的命令查看导入作业的运行状态。
 
-## 导入array类型
+## 导入 Array 类型
 
-向量化场景才能支持array函数,非向量化场景不支持。
+向量化场景才能支持 Array 函数,非向量化场景不支持。
 
-如果想要应用array函数导入数据,则应先启用向量化功能;然后需要根据array函数的参数类型将输入参数列转换为array类型;最后,就可以继续使用array函数了。
+如果想要应用 Array 函数导入数据,则应先启用向量化功能;然后需要根据 Array 函数的参数类型将输入参数列转换为 Array 类型;最后,就可以继续使用 Array 函数了。
 
-例如以下导入,需要先将列b14和列a13先cast成`array<string>`类型,再运用`array_union`函数。
+例如以下导入,需要先将列 b14 和列 a13 先 cast 成`array<string>`类型,再运用`array_union`函数。
 
 ```sql
 LOAD LABEL label_03_14_49_34_898986_19090452100 ( 
@@ -102,4 +101,3 @@ LOAD LABEL label_03_14_49_34_898986_19090452100 (
   WITH BROKER "hdfs" ("username"="test_array", "password"="") 
   PROPERTIES( "max_filter_ratio"="0.8" );
 ```
-
diff --git a/versioned_docs/version-1.2/data-operate/import/import-scenes/local-file-load.md b/versioned_docs/version-1.2/data-operate/import/import-scenes/local-file-load.md
index 24fef12ee1..ad47543705 100644
--- a/versioned_docs/version-1.2/data-operate/import/import-scenes/local-file-load.md
+++ b/versioned_docs/version-1.2/data-operate/import/import-scenes/local-file-load.md
@@ -27,11 +27,10 @@ under the License.
 # Import local data
 The following mainly introduces how to import local data in client.
 
-Now Doris support two way to load data from client local file:
-1. [Stream Load](../import-way/stream-load-manual.md)
-2. [MySql Load](../import-way/mysql-load-manual.md)
+Now Doris supports using [Stream Load](../import-way/stream-load-manual.md) to load data from a client local file. Read the following section to learn how to use Stream Load.
 
-## Stream Load
+## Use Stream Load to import local data
 
 Stream Load is used to import local files into Doris.
 
@@ -222,54 +221,4 @@ public class DorisStreamLoader {
 > <artifactId>httpclient</artifactId>
 > <version>4.5.13</version>
 > </dependency>
-> ```
-
-## MySql LOAD
-<version since="dev">
-    Example of mysql load
-</version>
-
-### Import Data
-1. Create a table
-
-   Use the `CREATE TABLE` command to create a table in the `demo` database to store the data to be imported.
-
-   ```sql
-   CREATE TABLE IF NOT EXISTS load_local_file_test
-   (
-   id INT,
-   age TINYINT,
-   name VARCHAR(50)
-   )
-   unique key(id)
-   DISTRIBUTED BY HASH(id) BUCKETS 3;
-   ````
-
-2. Import data
-   Excute fellowing sql statmeent in the mysql client to load client local file:
-
-   ```sql
-   LOAD DATA
-   LOCAL
-   INFILE '/path/to/local/demo.txt'
-   INTO TABLE demo.load_local_file_test
-   ```
-
-   For more advanced operations of the MySQL Load command, see [MySQL Load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/MYSQL-LOAD.md) Command documentation.
-
-3. Wait for the import result
-
-   The MySql Load command is a synchronous command, and a successful return indicates that the import is successful. If the imported data is large, a longer waiting time may be required. Examples are as follows:
-
-   ```text
-   Query OK, 1 row affected (0.17 sec)
-   Records: 1  Deleted: 0  Skipped: 0  Warnings: 0
-   ```
-
-   - Load success if the client show the return rows. Otherwise sql statement will throw an exception and show the error message in client.
-   - For details of other fields, please refer to the [MySQL Load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/MYSQL-LOAD.md) command documentation.
-
-### Import suggestion
-
-   - MySql Load can only import local files(which can be client local file or fe local file) and only support csv format.
-   - It is recommended to limit the amount of data for an import request to 1 - 2 GB. If you have a large number of local files, you can submit them concurrently in batches.
+> ```
\ No newline at end of file
diff --git a/versioned_docs/version-1.2/data-operate/import/import-way/mysql-load-manual.md b/versioned_docs/version-1.2/data-operate/import/import-way/mysql-load-manual.md
deleted file mode 100644
index d9ca89a9b8..0000000000
--- a/versioned_docs/version-1.2/data-operate/import/import-way/mysql-load-manual.md
+++ /dev/null
@@ -1,152 +0,0 @@
----
-{
-    "title": "MySql load",
-    "language": "en"
-}
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-# Mysql load
-<version since="dev">
-
-This is an stand syntax of MySql [LOAD DATA](https://dev.mysql.com/doc/refman/8.0/en/load-data.html) for user to load local file.
-
-MySql load synchronously executes the import and returns the import result. The return information will show whether the import is successful for user.
-
-MySql load is mainly suitable for importing local files on the client side, or importing data from a data stream through a program.
-
-</version>
-
-## Basic Principles
-
-The MySql Load are similar with Stream Load. Both import local files into the Doris cluster, so the MySQL Load will reuses StreamLoad:
-
- 1. FE receives the MySQL Load request executed by the client and then analyse the SQL
-
- 2. FE build the MySql Load request as a StreamLoad request.
-
- 3. FE selects a BE node to send a StreamLoad request
-
- 4. When sending the request, FE will read the local file data from the MySQL client side streamingly, and send it to the HTTP request of StreamLoad asynchronously.
-
- 5. After the data transfer on the MySQL client side is completed, FE waits for the StreamLoad to complete, and displays the import success or failure information to the client side.
-
-
-## Support data format
-
-MySql Load currently only supports data formats: CSV (text).
-
-## Basic operations
-
-### client connection
-```bash
-mysql --local-infile  -h 127.0.0.1 -P 9030 -u root -D testdb
-```
-
-Notice that if you wants to use mysql load, you must connect doris server with `--local-infile` in client command.
-If you're use jdbc to connect doris, you must add property named `allowLoadLocalInfile=true` in jdbc url.
-
-
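
As a hedged illustration of the JDBC note above, the sketch below runs a `LOAD DATA LOCAL` statement of the same shape through MySQL Connector/J with `allowLoadLocalInfile=true` set in the URL; it assumes Connector/J is on the classpath, and the FE address, credentials, table and file name are placeholders rather than values from this repository.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class MysqlLoadJdbcSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder FE address and credentials; allowLoadLocalInfile=true lets
        // Connector/J send the client-side file referenced by LOAD DATA LOCAL.
        String url = "jdbc:mysql://127.0.0.1:9030/testdb?allowLoadLocalInfile=true";

        try (Connection conn = DriverManager.getConnection(url, "root", "");
             Statement stmt = conn.createStatement()) {
            // Same shape as the mysql-client example later in this section; the file
            // path is resolved relative to the JVM's working directory.
            int rows = stmt.executeUpdate(
                    "LOAD DATA LOCAL INFILE 'client_local.csv' "
                            + "INTO TABLE testdb.t1 "
                            + "COLUMNS TERMINATED BY '\\t' "
                            + "LINES TERMINATED BY '\\n'");
            System.out.println("Rows affected: " + rows);
        }
    }
}
```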
-### Create test table
-```sql
- CREATE TABLE testdb.t1 (pk INT, v1 INT SUM) AGGREGATE KEY (pk) DISTRIBUTED BY hash (pk) PROPERTIES ('replication_num' = '1');
- ```
- ### import file from client node
- Suppose there is a CSV file named 'client_local.csv 'on the current path of the client side, which will be imported into the test table'testdb.t1' using the MySQL LOAD syntax.
-
-```sql
-LOAD DATA LOCAL
-INFILE 'client_local.csv '
-INTO TABLE testdb.t1
-PARTITION (partition_a, partition_b, partition_c, partition_d)
-COLUMNS TERMINATED BY '\ t'
-LINES TERMINATED BY '\ n'
-IGNORE 1 LINES
-(K1, k2, v2, v10, v11)
-SET (c1 = k1, c2 = k2, c3 = v10, c4 = v11)
-PROPERTIES ("strict_mode" = "true")
-```
-1. MySQL Load starts with the syntax `LOAD DATA`, and specifying `LOCAL` means reading client side files.
-2. The local fill path will be filled after `INFILE`, which can be a relative path or an absolute path. Currently only a single file is supported, and multiple files are not supported
-3. The table name after `INTO TABLE` can specify the database name, as shown in the case. It can also be omitted, and the current database for the user will be used.
-4. `PARTITION` syntax supports specified partition to import
-5. `COLUMNS TERMINATED BY` specifies the column separator
-6. `LINES TERMINATED BY` specifies the line separator
-7. `IGNORE num LINES` skips the num header of the CSV.
-8. Column mapping syntax, see the column mapping chapter of [Imported Data Transformation](../import-scenes/load-data-convert.md) for specific parameters
-9. `PROPERTIES` is the configuration of import, please refer to the [MySQL Load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/MYSQL-LOAD.md) command manual for specific properties.
-
-### import file from fe server node
-Assuming that the '/root/server_local.csv' path on the FE node is a CSV file, use the MySQL client side to connect to the corresponding FE node, and then execute the following command to import data into the test table.
-
-```sql
-LOAD DATA
-INFILE '/root/server_local.csv'
-INTO TABLE testdb.t1
-PARTITION (partition_a, partition_b, partition_c, partition_d)
-COLUMNS TERMINATED BY '\ t'
-LINES TERMINATED BY '\ n'
-IGNORE 1 LINES
-(K1, k2, v2, v10, v11)
-SET (c1 = k1, c2 = k2, c3 = v10, c4 = v11)
-PROPERTIES ("strict_mode" = "true")
-```
-1. The only difference between the syntax of importing server level local files and importing client side syntax is whether the'LOCAL 'keyword is added after the'LOAD DATA' keyword.
-2. FE will have multi-nodes, and importing server level files can only import FE nodes connected by the client side, and cannot import files local to other FE nodes.
-3. Server side load was disabled by default. Enable it by setting `mysql_load_server_secure_path` with a secure path. All the load file should be under this path.
-
-### Return result
-Since MySQL load is a synchronous import method, the imported results are returned to the user through SQL syntax.
-If the import fails, a specific error message will be displayed. If the import is successful, the number of imported rows will be displayed.
-
-```Text
-Query OK, 1 row affected (0.17 sec)
-Records: 1 Deleted: 0 Skipped: 0 Warnings: 0
-```
-
-### Error result
-If mysql load process goes wrong, it will show the error in the client as below:
-```text
-ERROR 1105 (HY000): errCode = 2, detailMessage = [INTERNAL_ERROR]too many filtered rows with load id b612907c-ccf4-4ac2-82fe-107ece655f0f
-```
-
-If you meets this error, you can extract the `loadId` and use it in the `show load warnings` command to get more detail message.
-```sql
-show load warnings where label='b612907c-ccf4-4ac2-82fe-107ece655f0f';
-```
-
-The loadId was the label in this case.
-
-
-### Configuration
-1. `mysql_load_thread_pool`: the thread pool size for singe FE node, set 4 thread by default. The block queue size is 5 times of `mysql_load_thread_pool`. So FE can accept 4 + 4\*5 = 24 requests in one time. Increase this configuration if the parallelism are larger than 24.
-2. `mysql_load_server_secure_path`: the secure path for load data from server. Empty path by default means that it's not allowed for server load. Recommend to create a `local_import_data` directory under `DORIS_HOME` to load data if you want enable it.
-3. `mysql_load_in_memory_record` The failed mysql load record size. The record was keep in memory and only have 20 records by default. If you want to track more records,  you can rise the config but be careful about the fe memory. This record will expired after one day and there is a async thread to clean it in every day.
-
-
-## Notice 
-
-1. If you see this `LOAD DATA LOCAL INFILE file request rejected due to restrictions on access` message, you should connet mysql with `mysql  --local-infile=1` command to enable client to load local file.
-2. The configuration for StreamLoad will also affect MySQL Load. Such as the configurate in be named `streaming_load_max_mb`, it's 10GB by default and it will control the max size for one load.
-
-## More Help
-
-1. For more detailed syntax and best practices for using MySQL Load, see the [MySQL Load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/MYSQL-LOAD.md) command manual.
diff --git a/versioned_docs/version-1.2/data-operate/import/load-manual.md b/versioned_docs/version-1.2/data-operate/import/load-manual.md
index 922870dbb2..a1f5bcdabc 100644
--- a/versioned_docs/version-1.2/data-operate/import/load-manual.md
+++ b/versioned_docs/version-1.2/data-operate/import/load-manual.md
@@ -53,7 +53,6 @@ Doris provides a variety of data import solutions, and you can choose different
 | Binlog Load        | [collect Mysql Binlog import data](./import-way/binlog-load-manual.md) |
 | Insert Into        | [External table imports data through INSERT](./import-way/insert-into-manual.md) |
 | S3 Load            | [Object storage data import of S3 protocol](./import-way/s3-load-manual.md) |
-| MySql Load         | [Local data import of MySql protocol](./import-way/mysql-load-manual.md) |
 
 ## Supported data formats
 
diff --git a/versioned_sidebars/version-1.2-sidebars.json b/versioned_sidebars/version-1.2-sidebars.json
index bbd6a6faa6..d991c6e00f 100644
--- a/versioned_sidebars/version-1.2-sidebars.json
+++ b/versioned_sidebars/version-1.2-sidebars.json
@@ -93,7 +93,6 @@
                                 "data-operate/import/import-way/routine-load-manual",
                                 "data-operate/import/import-way/spark-load-manual",
                                 "data-operate/import/import-way/stream-load-manual",
-                                "data-operate/import/import-way/mysql-load-manual",
                                 "data-operate/import/import-way/s3-load-manual",
                                 "data-operate/import/import-way/insert-into-manual",
                                 "data-operate/import/import-way/load-json-format"

