This is an automated email from the ASF dual-hosted git repository.

jiafengzheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/master by this push:
     new 25a6be850d [doc](fix)Export doc fix (#11584)
25a6be850d is described below

commit 25a6be850d71425c95e72d37ff91619e4a44746f
Author: jiafeng.zhang <zhang...@gmail.com>
AuthorDate: Mon Aug 8 19:27:57 2022 +0800

    [doc](fix)Export doc fix (#11584)
    
    export doc fix
---
 docs/en/docs/data-operate/export/export-manual.md  | 38 +++++++++++--
 .../Backup-and-Restore/BACKUP.md                   | 59 ++++++++++++++++++++
 .../docs/data-operate/export/export-manual.md      | 26 ++++++++-
 .../Backup-and-Restore/BACKUP.md                   | 65 +++++++++++++++++++++-
 4 files changed, 179 insertions(+), 9 deletions(-)

diff --git a/docs/en/docs/data-operate/export/export-manual.md b/docs/en/docs/data-operate/export/export-manual.md
index 221f2e1eeb..c38208963e 100644
--- a/docs/en/docs/data-operate/export/export-manual.md
+++ b/docs/en/docs/data-operate/export/export-manual.md
@@ -26,7 +26,7 @@ under the License.
 
 # Data export
 
-Export is a function provided by Doris to export data. This function can export user-specified table or partition data in text format to remote storage through Broker process, such as HDFS/BOS.
+Export is a function provided by Doris to export data. This function can export user-specified table or partition data, in text format, through the Broker process to remote storage such as HDFS or object storage (S3 protocol supported).
 
 This document mainly introduces the basic principles, usage, best practices and precautions of Export.
 
@@ -106,14 +106,16 @@ For detailed usage of Export, please refer to [SHOW EXPORT](../../sql-manual/sql
 
 The detailed commands of Export can be viewed through `HELP EXPORT;`. Examples are as follows:
 
+### Export to HDFS
+
 ```sql
 EXPORT TABLE db1.tbl1 
 PARTITION (p1,p2)
 [WHERE [expr]]
-TO "bos://bj-test-cmy/export/" 
+TO "hdfs://host/path/to/export/" 
 PROPERTIES
 (
-    "label"="mylabel",
+    "label" = "mylabel",
     "column_separator"=",",
     "columns" = "col1,col2",
     "exec_mem_limit"="2147483648",
@@ -121,8 +123,8 @@ PROPERTIES
 )
 WITH BROKER "hdfs"
 (
-       "username" = "user",
-       "password" = "passwd"
+    "username" = "user",
+    "password" = "passwd"
 );
 ```
 
@@ -134,6 +136,29 @@ WITH BROKER "hdfs"
 * `timeout`: job timeout. Default: 2 hours. Unit: seconds.
 * `tablet_num_per_task`: the maximum number of tablets allocated per query plan. The default is 5.
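+
+The properties above can be combined in one job as needed. Below is a minimal sketch (not part of this commit; the label and paths are placeholders) of a job that raises the timeout and the per-task tablet count for a larger export:
+
+```sql
+EXPORT TABLE db1.tbl1
+TO "hdfs://host/path/to/export/"
+PROPERTIES
+(
+    "label" = "big_export_label",   -- placeholder label, must be unique
+    "timeout" = "7200",             -- job timeout in seconds
+    "tablet_num_per_task" = "10"    -- more tablets per query plan, fewer tasks
+)
+WITH BROKER "hdfs"
+(
+    "username" = "user",
+    "password" = "passwd"
+);
+```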
 
+### Export to object storage (S3 protocol supported)
+
+Create a repository named s3_repo to connect to cloud storage directly, without going through the broker.
+
+```sql
+CREATE REPOSITORY `s3_repo`
+WITH S3
+ON LOCATION "s3://s3-repo"
+PROPERTIES
+(
+    "AWS_ENDPOINT" = "http://s3-REGION.amazonaws.com";,
+    "AWS_ACCESS_KEY" = "AWS_ACCESS_KEY",
+    "AWS_SECRET_KEY"="AWS_SECRET_KEY",
+    "AWS_REGION" = "REGION"
+);
+```
+
+- `AWS_ACCESS_KEY`/`AWS_SECRET_KEY`: your keys for accessing the object storage API.
+- `AWS_ENDPOINT`: the endpoint, i.e. the access domain name of the object storage service.
+- `AWS_REGION`: the region where the object storage data center is located.
+
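+A hedged sketch follows (not from this commit): on versions where EXPORT accepts `WITH S3`, the same `AWS_*` properties can be passed to write export output directly to S3-compatible storage; the bucket, endpoint, keys, and region below are placeholders:
+
+```sql
+EXPORT TABLE db1.tbl1
+TO "s3://s3-repo/export/"         -- placeholder bucket/path
+PROPERTIES
+(
+    "label" = "s3_export_label",  -- placeholder label
+    "column_separator" = ","
+)
+WITH S3
+(
+    "AWS_ENDPOINT" = "http://s3-REGION.amazonaws.com",
+    "AWS_ACCESS_KEY" = "AWS_ACCESS_KEY",
+    "AWS_SECRET_KEY" = "AWS_SECRET_KEY",
+    "AWS_REGION" = "REGION"
+);
+```
+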
+### View export status
+
 After submitting a job, the job status can be viewed by querying the [SHOW EXPORT](../../sql-manual/sql-reference/Show-Statements/SHOW-EXPORT.md) command. The results are as follows:
 ```sql
@@ -143,7 +168,7 @@ mysql> show EXPORT\G;
      State: FINISHED
   Progress: 100%
   TaskInfo: {"partitions":["*"],"exec mem limit":2147483648,"column separator":",","line delimiter":"\n","tablet num":1,"broker":"hdfs","coord num":1,"db":"default_cluster:db1","tbl":"tbl3"}
-      Path: bos://bj-test-cmy/export/
+      Path: hdfs://host/path/to/export/
 CreateTime: 2019-06-25 17:08:24
  StartTime: 2019-06-25 17:08:28
 FinishTime: 2019-06-25 17:08:34
@@ -208,3 +233,4 @@ Usually, a query plan for an Export job has only two parts `scan`- `export`, and
 ## More Help
 
 For more detailed syntax and best practices used by Export, please refer to the [Export](../../sql-manual/sql-reference/Show-Statements/SHOW-EXPORT.md) command manual. You can also enter `HELP EXPORT` at the command line of the MySQL client for more help.
+
diff --git a/docs/en/docs/sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/BACKUP.md b/docs/en/docs/sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/BACKUP.md
index d779cb64e8..b458dd6e9d 100644
--- a/docs/en/docs/sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/BACKUP.md
+++ b/docs/en/docs/sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/BACKUP.md
@@ -34,6 +34,8 @@ BACKUP
 
 This statement is used to back up the data under the specified database. This command is an asynchronous operation. After the submission is successful, you need to check the progress through the SHOW BACKUP command. Only backing up tables of type OLAP is supported.
 
+Only root or superuser users can create repositories.
+
 Syntax:
 
 ```sql
@@ -86,6 +88,63 @@ TO example_repo
 EXCLUDE (example_tbl);
 ```
 
+4. Create a repository named hdfs_repo, relying on the Baidu HDFS broker "hdfs_broker". The data root directory is: hdfs://hadoop-name-node:54310/path/to/repo/
+
+```sql
+CREATE REPOSITORY `hdfs_repo`
+WITH BROKER `hdfs_broker`
+ON LOCATION "hdfs://hadoop-name-node:54310/path/to/repo/"
+PROPERTIES
+(
+    "username" = "user",
+    "password" = "password"
+);
+```
+
+5. Create a repository named s3_repo to connect to cloud storage directly, without going through the broker.
+
+```sql
+CREATE REPOSITORY `s3_repo`
+WITH S3
+ON LOCATION "s3://s3-repo"
+PROPERTIES
+(
+    "AWS_ENDPOINT" = "http://s3-REGION.amazonaws.com";,
+    "AWS_ACCESS_KEY" = "AWS_ACCESS_KEY",
+    "AWS_SECRET_KEY"="AWS_SECRET_KEY",
+    "AWS_REGION" = "REGION"
+);
+```
+
+6. Create a repository named hdfs_repo to connect to HDFS directly, without going through the broker.
+
+```sql
+CREATE REPOSITORY `hdfs_repo`
+WITH hdfs
+ON LOCATION "hdfs://hadoop-name-node:54310/path/to/repo/"
+PROPERTIES
+(
+    "fs.defaultFS"="hdfs://hadoop-name-node:54310",
+    "hadoop.username"="user"
+);
+```
+
+7. Create a repository named minio_repo to connect to MinIO storage directly via the S3 protocol.
+
+```sql
+CREATE REPOSITORY `minio_repo`
+WITH S3
+ON LOCATION "s3://minio_repo"
+PROPERTIES
+(
+    "AWS_ENDPOINT" = "http://minio.com";,
+    "AWS_ACCESS_KEY" = "MINIO_USER",
+    "AWS_SECRET_KEY"="MINIO_PASSWORD",
+    "AWS_REGION" = "REGION",
+    "use_path_style" = "true"
+);
+```
+
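+Once a repository exists, it can be used as the target of a backup. A minimal sketch (not from this commit), reusing the s3_repo created above; the snapshot label and table name are placeholders:
+
+```sql
+BACKUP SNAPSHOT example_db.snapshot_label1
+TO s3_repo
+ON (example_tbl)
+PROPERTIES ("type" = "full");
+```
+
+Progress can then be checked with `SHOW BACKUP;`.
+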
 ### Keywords
 
 ```text
diff --git a/docs/zh-CN/docs/data-operate/export/export-manual.md b/docs/zh-CN/docs/data-operate/export/export-manual.md
index 267061abd8..5ef1bfc974 100644
--- a/docs/zh-CN/docs/data-operate/export/export-manual.md
+++ b/docs/zh-CN/docs/data-operate/export/export-manual.md
@@ -26,7 +26,7 @@ under the License.
 
 # Data export
 
-Export is a function provided by Doris to export data. This function can export user-specified table or partition data in text format to remote storage through the Broker process, such as HDFS/BOS.
+Export is a function provided by Doris to export data. This function can export user-specified table or partition data, in text format, through the Broker process to remote storage such as HDFS or object storage (S3 protocol supported).
 
 This document mainly introduces the basic principles, usage, best practices and precautions of Export.
 
@@ -126,6 +126,28 @@ WITH BROKER "hdfs"
 * `timeout`: job timeout. Default: 2 hours. Unit: seconds.
 * `tablet_num_per_task`: the maximum number of tablets allocated per query plan. The default is 5.
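+
+The properties above can be combined in one job as needed. Below is a minimal sketch (not part of this commit; the label and paths are placeholders) of a job that raises the timeout and the per-task tablet count:
+
+```sql
+EXPORT TABLE db1.tbl1
+TO "hdfs://host/path/to/export/"
+PROPERTIES
+(
+    "label" = "big_export_label",   -- placeholder label, must be unique
+    "timeout" = "7200",             -- job timeout in seconds
+    "tablet_num_per_task" = "10"    -- more tablets per query plan, fewer tasks
+)
+WITH BROKER "hdfs"
+(
+    "username" = "user",
+    "password" = "passwd"
+);
+```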
 
+### Export to object storage (S3 protocol supported)
+
+Create a repository named s3_repo to connect to cloud storage directly, without going through the broker.
+
+```sql
+CREATE REPOSITORY `s3_repo`
+WITH S3
+ON LOCATION "s3://s3-repo"
+PROPERTIES
+(
+    "AWS_ENDPOINT" = "http://s3-REGION.amazonaws.com";,
+    "AWS_ACCESS_KEY" = "AWS_ACCESS_KEY",
+    "AWS_SECRET_KEY"="AWS_SECRET_KEY",
+    "AWS_REGION" = "REGION"
+);
+```
+
+- `AWS_ACCESS_KEY`/`AWS_SECRET_KEY`: your keys for accessing the object storage API.
+- `AWS_ENDPOINT`: the endpoint, i.e. the access domain name of the object storage service.
+- `AWS_REGION`: the region where the object storage data center is located.
+
+
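+A hedged sketch follows (not from this commit): on versions where EXPORT accepts `WITH S3`, the same `AWS_*` properties can be passed to write export output directly to S3-compatible storage; the bucket, endpoint, keys, and region below are placeholders:
+
+```sql
+EXPORT TABLE db1.tbl1
+TO "s3://s3-repo/export/"         -- placeholder bucket/path
+PROPERTIES
+(
+    "label" = "s3_export_label",  -- placeholder label
+    "column_separator" = ","
+)
+WITH S3
+(
+    "AWS_ENDPOINT" = "http://s3-REGION.amazonaws.com",
+    "AWS_ACCESS_KEY" = "AWS_ACCESS_KEY",
+    "AWS_SECRET_KEY" = "AWS_SECRET_KEY",
+    "AWS_REGION" = "REGION"
+);
+```
+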
 ### View export status
 
 After submitting a job, the job status can be viewed by querying the [SHOW EXPORT](../../sql-manual/sql-reference/Show-Statements/SHOW-EXPORT.md) command. The results are as follows:
@@ -137,7 +159,7 @@ mysql> show EXPORT\G;
      State: FINISHED
   Progress: 100%
   TaskInfo: {"partitions":["*"],"exec mem limit":2147483648,"column separator":",","line delimiter":"\n","tablet num":1,"broker":"hdfs","coord num":1,"db":"default_cluster:db1","tbl":"tbl3"}
-      Path: bos://bj-test-cmy/export/
+      Path: hdfs://host/path/to/export/
 CreateTime: 2019-06-25 17:08:24
  StartTime: 2019-06-25 17:08:28
 FinishTime: 2019-06-25 17:08:34
diff --git a/docs/zh-CN/docs/sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/BACKUP.md b/docs/zh-CN/docs/sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/BACKUP.md
index e5b1778713..de766279dd 100644
--- a/docs/zh-CN/docs/sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/BACKUP.md
+++ b/docs/zh-CN/docs/sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/BACKUP.md
@@ -32,7 +32,11 @@ BACKUP
 
 ### Description
 
-This statement is used to back up the data under the specified database. This command is an asynchronous operation. After the submission is successful, you need to check the progress through the SHOW BACKUP command. Only backing up tables of type OLAP is supported.
+This statement is used to back up the data under the specified database. This command is an asynchronous operation.
+
+Only root or superuser users can create repositories.
+
+After the submission is successful, you need to check the progress through the SHOW BACKUP command. Only backing up tables of type OLAP is supported.
 
 Syntax:
 
@@ -86,6 +90,65 @@ TO example_repo
 EXCLUDE (example_tbl);
 ```
 
+4. Create a repository named hdfs_repo, relying on the Baidu HDFS broker "hdfs_broker". The data root directory is: hdfs://hadoop-name-node:54310/path/to/repo/
+
+```sql
+CREATE REPOSITORY `hdfs_repo`
+WITH BROKER `hdfs_broker`
+ON LOCATION "hdfs://hadoop-name-node:54310/path/to/repo/"
+PROPERTIES
+(
+    "username" = "user",
+    "password" = "password"
+);
+```
+
+5. Create a repository named s3_repo to connect to cloud storage directly, without going through the broker.
+
+```sql
+CREATE REPOSITORY `s3_repo`
+WITH S3
+ON LOCATION "s3://s3-repo"
+PROPERTIES
+(
+    "AWS_ENDPOINT" = "http://s3-REGION.amazonaws.com";,
+    "AWS_ACCESS_KEY" = "AWS_ACCESS_KEY",
+    "AWS_SECRET_KEY"="AWS_SECRET_KEY",
+    "AWS_REGION" = "REGION"
+);
+```
+
+6. Create a repository named hdfs_repo to connect to HDFS directly, without going through the broker.
+
+```sql
+CREATE REPOSITORY `hdfs_repo`
+WITH hdfs
+ON LOCATION "hdfs://hadoop-name-node:54310/path/to/repo/"
+PROPERTIES
+(
+    "fs.defaultFS"="hdfs://hadoop-name-node:54310",
+    "hadoop.username"="user"
+);
+```
+
+7. Create a repository named minio_repo to connect to MinIO storage directly via the S3 protocol.
+
+```sql
+CREATE REPOSITORY `minio_repo`
+WITH S3
+ON LOCATION "s3://minio_repo"
+PROPERTIES
+(
+    "AWS_ENDPOINT" = "http://minio.com";,
+    "AWS_ACCESS_KEY" = "MINIO_USER",
+    "AWS_SECRET_KEY"="MINIO_PASSWORD",
+    "AWS_REGION" = "REGION",
+    "use_path_style" = "true"
+);
+```
+
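+Once a repository exists, it can be used as the target of a backup. A minimal sketch (not from this commit), reusing the minio_repo created above; the snapshot label and table name are placeholders:
+
+```sql
+BACKUP SNAPSHOT example_db.snapshot_label1
+TO minio_repo
+ON (example_tbl)
+PROPERTIES ("type" = "full");
+```
+
+Progress can then be checked with `SHOW BACKUP;`.
+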
 ### Keywords
 
 ```text

