This is an automated email from the ASF dual-hosted git repository.

gavinchou pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git


The following commit(s) were added to refs/heads/master by this push:
     new 0f9bfe12f7a [doc](cloud-deploy) add an example for 
CREATE-STORAGE-VAULT (#1174)
0f9bfe12f7a is described below

commit 0f9bfe12f7a3a94825c7734a5f28dcff816e7fc7
Author: yagagagaga <zhangminkefromflyd...@gmail.com>
AuthorDate: Tue Oct 15 20:13:11 2024 +0800

    [doc](cloud-deploy) add an example for CREATE-STORAGE-VAULT (#1174)
    
    # Versions
    
    - [x] dev
    - [x] 3.0
    - [ ] 2.1
    - [ ] 2.0
    
    # Languages
    
    - [x] Chinese
    - [x] English
---
 .../compute-storage-decoupled/before-deployment.md | 41 ++++++++--
 .../compilation-and-deployment.md                  |  2 +-
 .../managing-storage-vault.md                      | 22 +++---
 .../Create/CREATE-STORAGE-VAULT.md                 | 90 +++++++++------------
 .../compute-storage-decoupled/before-deployment.md | 39 ++++++++--
 .../managing-storage-vault.md                      | 22 +++---
 .../current/compute-storage-decoupled/overview.md  |  2 +-
 .../Create/CREATE-STORAGE-VAULT.md                 | 91 ++++++++++------------
 .../compute-storage-decoupled/before-deployment.md | 39 ++++++++--
 .../managing-storage-vault.md                      | 22 +++---
 .../Create/CREATE-STORAGE-VAULT.md                 | 91 ++++++++++------------
 .../compute-storage-decoupled/before-deployment.md | 45 +++++++++--
 .../compilation-and-deployment.md                  |  2 +-
 .../managing-storage-vault.md                      | 22 +++---
 .../Create/CREATE-STORAGE-VAULT.md                 | 89 +++++++++------------
 15 files changed, 345 insertions(+), 274 deletions(-)

diff --git a/docs/compute-storage-decoupled/before-deployment.md 
b/docs/compute-storage-decoupled/before-deployment.md
index 57eee697749..ed3d778393d 100644
--- a/docs/compute-storage-decoupled/before-deployment.md
+++ b/docs/compute-storage-decoupled/before-deployment.md
@@ -64,11 +64,11 @@ When machine configurations are high, consider mixing FDB, 
FE, and Meta Service,
 
 ## 5. Installation Steps
 
-### 5.1. Install FoundationDB
+### 5.1 Install FoundationDB
 
 This section provides a step-by-step guide to configure, deploy, and start the 
FoundationDB (FDB) service using the provided scripts `fdb_vars.sh` and 
`fdb_ctl.sh`. You can download [doris 
tools](http://apache-doris-releases.oss-accelerate.aliyuncs.com/apache-doris-3.0.2-tools.tar.gz)
 and get `fdb_vars.sh` and `fdb_ctl.sh` from `fdb` directory.
 
-#### 5.1 Machine Requirements
+#### 5.1.1 Machine Requirements
 
 Typically, at least 3 machines equipped with SSDs are required to form a 
FoundationDB cluster with dual data replicas and allow for single machine 
failures.
 
@@ -76,7 +76,7 @@ Typically, at least 3 machines equipped with SSDs are 
required to form a Foundat
 If only for development/testing purposes, a single machine is sufficient.
 :::
 
-#### 5.2 `fdb_vars.sh` Configuration
+#### 5.1.2 `fdb_vars.sh` Configuration
 
 ##### Required Custom Settings
 
@@ -95,7 +95,7 @@ If only for development/testing purposes, a single machine is 
sufficient.
 | `MEMORY_LIMIT_GB` | Define the memory limit for FDB processes in GB | 
Integer | `MEMORY_LIMIT_GB=16` | Adjust this value based on available memory 
resources and FDB process requirements |
 | `CPU_CORES_LIMIT` | Define the CPU core limit for FDB processes | Integer | 
`CPU_CORES_LIMIT=8` | Set this value based on the number of available CPU cores 
and FDB process requirements |
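As a concrete illustration, the two settings above are plain shell variable assignments in `fdb_vars.sh`. A minimal sketch using the example values from the table (the script defines further required variables not shown in this excerpt):

```bash
# Resource limits for the FDB processes on this node
MEMORY_LIMIT_GB=16
CPU_CORES_LIMIT=8
```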
 
-#### 5.3 Deploy FDB Cluster
+#### 5.1.3 Deploy FDB Cluster
 
 After configuring the environment with `fdb_vars.sh`, you can deploy the FDB 
cluster on each node using the `fdb_ctl.sh` script.
 
@@ -105,7 +105,7 @@ After configuring the environment with `fdb_vars.sh`, you 
can deploy the FDB clu
 
 This command initiates the deployment process of the FDB cluster.
 
-### 5.4 Start FDB Service
+#### 5.1.4 Start FDB Service
 
 Once the FDB cluster is deployed, you can start the FDB service using the 
`fdb_ctl.sh` script.
 
@@ -115,11 +115,40 @@ Once the FDB cluster is deployed, you can start the FDB 
service using the `fdb_c
 
 This command starts the FDB service, making the cluster operational and 
obtaining the FDB cluster connection string, which can be used for configuring 
the MetaService.
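For reference, the deploy and start steps described above come down to two invocations of the control script on each FDB node, after `fdb_vars.sh` has been edited. A minimal sketch, assuming `fdb_ctl.sh` accepts `deploy` and `start` subcommands as the surrounding text implies:

```bash
# Deploy the FDB cluster files and configuration to this node
./fdb_ctl.sh deploy

# Start the FDB service; the cluster connection string it reports
# is used later when configuring the Meta Service
./fdb_ctl.sh start
```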
 
-### 5.5 Install OpenJDK 17
+### 5.2 Install OpenJDK 17
 
 1. Download [OpenJDK 
17](https://download.java.net/java/GA/jdk17.0.1/2a2082e5a09d4267845be086888add4f/12/GPL/openjdk-17.0.1_linux-x64_bin.tar.gz)
 2. Extract and set the environment variable JAVA_HOME.
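A minimal sketch of step 2, assuming the archive extracts to a `jdk-17.0.1` directory (the install path is illustrative):

```bash
tar -xzf openjdk-17.0.1_linux-x64_bin.tar.gz -C /opt
# Point JAVA_HOME at the extracted JDK so Doris processes can find it
export JAVA_HOME=/opt/jdk-17.0.1
export PATH="$JAVA_HOME/bin:$PATH"
```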
 
+### 5.3 Install S3 or HDFS (Optional)
+
+Apache Doris in compute-storage decoupled (cloud) mode stores data on S3 or HDFS services. If you already have such a service, you can use it directly. If not, this document provides a simple deployment tutorial for MinIO:
+
+1. Choose the appropriate version and operating system on MinIO's [download page](https://min.io/download?license=agpl&platform=linux), and download the corresponding server and client binaries or installation packages.
+2. Start the MinIO Server
+   ```bash
+   export MINIO_REGION_NAME=us-east-1
+   export MINIO_ROOT_USER=minio # In older versions, the configuration is 
MINIO_ACCESS_KEY=minio
+   export MINIO_ROOT_PASSWORD=minioadmin # In older versions, the 
configuration is MINIO_SECRET_KEY=minioadmin
+   nohup ./minio server /mnt/data 2>&1 &
+   ```
+3. Configure the MinIO Client
+   ```bash
+   # If you installed the client from an installation package, the binary is named mcli; if you downloaded the standalone client binary, it is named mc
+   ./mc config host add myminio http://127.0.0.1:9000 minio minioadmin
+   ```
+4. Create a bucket
+   ```bash
+   ./mc mb myminio/doris
+   ```
+5. Verify that it works properly
+   ```bash
+   # upload a file
+   ./mc mv test_file myminio/doris
+   # list files
+   ./mc ls myminio/doris
+   ```
+
 ## 6. Next Steps
 
 After completing the above preparations, please refer to the following 
documents to continue the deployment:
diff --git a/docs/compute-storage-decoupled/compilation-and-deployment.md 
b/docs/compute-storage-decoupled/compilation-and-deployment.md
index 1d3ec2e87f5..aac93a4b0a1 100644
--- a/docs/compute-storage-decoupled/compilation-and-deployment.md
+++ b/docs/compute-storage-decoupled/compilation-and-deployment.md
@@ -299,4 +299,4 @@ SET <storage_vault_name> AS DEFAULT STORAGE VAULT
 ## 7. Notes
 
 - Only the Meta Service process for metadata operations should be configured 
as the `meta_service_endpoint` target for FE and BE.
-- The data recycling function process should not be configured as the 
`meta_service_endpoint` target.
\ No newline at end of file
+- The data recycling function process should not be configured as the 
`meta_service_endpoint` target.
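To make the note concrete: in the FE and BE configuration files, `meta_service_endpoint` should point only at the Meta Service instance serving metadata, never at the recycler instance. A hypothetical fragment (the address and port are placeholders, not taken from this commit):

```properties
# fe.conf / be.conf (sketch)
meta_service_endpoint = 172.16.0.10:5000
```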
diff --git a/docs/compute-storage-decoupled/managing-storage-vault.md 
b/docs/compute-storage-decoupled/managing-storage-vault.md
index 2eb15f760f5..10c90558424 100644
--- a/docs/compute-storage-decoupled/managing-storage-vault.md
+++ b/docs/compute-storage-decoupled/managing-storage-vault.md
@@ -30,7 +30,7 @@ The Storage Vault is a remote shared storage used by Doris in 
a decoupled storag
 
 **Syntax**
 
-```SQL
+```sql
 CREATE STORAGE VAULT [IF NOT EXISTS] <vault_name>
 PROPERTIES
 ("key" = "value",...)
@@ -44,7 +44,7 @@ PROPERTIES
 
 To create an HDFS-based decoupled storage-compute Doris cluster, ensure that 
all nodes (including FE/BE nodes, Meta Service) have permission to access the 
specified HDFS, including completing Kerberos authorization configuration and 
connectivity checks in advance (which can be tested using Hadoop Client on each 
corresponding node).
 
-```SQL
+```sql
 CREATE STORAGE VAULT IF NOT EXISTS ssb_hdfs_vault
     PROPERTIES (
         "type"="hdfs",                                     -- required
@@ -59,7 +59,7 @@ CREATE STORAGE VAULT IF NOT EXISTS ssb_hdfs_vault
 
 **Creating an S3 Storage Vault**
 
-```SQL
+```sql
 CREATE STORAGE VAULT IF NOT EXISTS ssb_s3_vault
     PROPERTIES (
         "type"="S3",                                   -- required
@@ -73,6 +73,8 @@ CREATE STORAGE VAULT IF NOT EXISTS ssb_s3_vault
     );
 ```
 
+More parameter explanations and examples can be found in 
[CREATE-STORAGE-VAULT](../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-STORAGE-VAULT.md).
+
 ### Viewing Storage Vaults
 
 **Syntax**
@@ -87,7 +89,7 @@ The returned result includes 4 columns: the Storage Vault 
name, Storage Vault ID
 
 **Syntax**
 
-```SQL
+```sql
 SET <vault_name> AS DEFAULT STORAGE VAULT
 ```
 
@@ -97,7 +99,7 @@ When creating a table, specify `storage_vault_name` in 
`PROPERTIES`, and the dat
 
 **Example**
 
-```SQL
+```sql
 CREATE TABLE IF NOT EXISTS supplier (
   s_suppkey int(11) NOT NULL COMMENT "",
   s_name varchar(26) NOT NULL COMMENT "",
@@ -133,7 +135,7 @@ Grant a specified MySQL user the usage permission for a 
certain Storage Vault, a
 
 **Syntax**
 
-```SQL
+```sql
 GRANT
     USAGE_PRIV
     ON STORAGE VAULT <vault_name>
@@ -147,7 +149,7 @@ Only Admin users have the authority to execute the `GRANT` 
statement, which is u
 
 **Example**
 
-```SQL
+```sql
 grant usage_priv on storage vault my_storage_vault to user1
 ```
 
@@ -155,7 +157,7 @@ Revoke the Storage Vault permissions of a specified MySQL 
user.
 
 **Syntax**
 
-```SQL
+```sql
 REVOKE 
     USAGE_PRIV
     ON STORAGE VAULT <vault_name>
@@ -166,6 +168,6 @@ Only Admin users have the authority to execute the `REVOKE` 
statement, which is
 
 **Example**
 
-```SQL
+```sql
 revoke usage_priv on storage vault my_storage_vault from user1
-```
\ No newline at end of file
+```
diff --git 
a/docs/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-STORAGE-VAULT.md
 
b/docs/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-STORAGE-VAULT.md
index c0694eb1c38..99c301dd100 100644
--- 
a/docs/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-STORAGE-VAULT.md
+++ 
b/docs/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-STORAGE-VAULT.md
@@ -40,66 +40,32 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
 
 #### properties
 
-* `type`
-
-   Only two types of vaults are allowed: S3 and HDFS. -- Required
+| Parameter | Required | Description |
+|:----------|:---------|:------------|
+| `type` | required | Only two types of vaults are allowed: `S3` and `HDFS`. |
 
 ##### S3 Vault
 
-* `s3.endpoint`
-
-   The endpoint used for object storage. **Notice**, please don't provide the 
endpoint with any `http://` or `https://`. And for Azure Blob Storage, the 
endpoint should be like `${ak}.blob.core.windows.net/`. -- Required
-
-* `s3.region`
-
-   The region of your bucket.(Not required when you'r using GCP or AZURE). -- 
Required
-
-* `s3.root.path`
-
-   The path where the data would be stored. -- Required
-
-* `s3.bucket`
-
-    The bucket of your object storage account. (StorageAccount if you're using 
Azure). -- Required
-
-* `s3.access_key`
-
-   The access key of your object storage account. (AccountName if you're using 
Azure). -- Required
-
-* `s3.secret_key`
-
-   The secret key of your object storage account. (AccountKey if you're using 
Azure). -- Required
-
-* `provider`
-
-   The cloud vendor which provides the object storage service. -- Required 
+| Parameter | Required | Description |
+|:----------|:---------|:------------|
+| `s3.endpoint` | required | The endpoint used for object storage. <br/>**Notice:** do not include `http://` or `https://` in the endpoint. For Azure Blob Storage, the endpoint should look like `${ak}.blob.core.windows.net/`. |
+| `s3.region` | required | The region of your bucket. (Not required when you're using GCP or AZURE.) |
+| `s3.root.path` | required | The path where the data is stored. |
+| `s3.bucket` | required | The bucket of your object storage account. (StorageAccount if you're using Azure.) |
+| `s3.access_key` | required | The access key of your object storage account. (AccountName if you're using Azure.) |
+| `s3.secret_key` | required | The secret key of your object storage account. (AccountKey if you're using Azure.) |
+| `provider` | required | The cloud vendor that provides the object storage service. Supported values include `COS`, `OSS`, `S3`, `OBS`, `BOS`, `AZURE`, and `GCP`. |
 
 ##### HDFS Vault
 
-* `fs.defaultFS`
-
-   Hadoop configuration property that specifies the default file system to 
use. -- Required
-
-* `path_prefix`
-
-   The path prefix to where the data would be stored. -- optional. It would be 
the root_path of your Hadoop user if you don't provide any prefix.
-
-* `hadoop.username`
-
-   Hadoop configuration property that specifies the user accessing the file 
system. -- optional. It would be the user starting Hadoop process if you don't 
provide any user.
-
-* `hadoop.security.authentication`
-
-   The authentication way used for hadoop. -- optional. If you'd like to use 
kerberos you can provide with `kerboros`.
-
-* `hadoop.kerberos.principal`
-
-   The path to your kerberos principal. -- optional
-
-* `hadoop.kerberos.keytab`
-
-   The path to your kerberos keytab. -- optional
-
+| Parameter | Required | Description |
+|:----------|:---------|:------------|
+| `fs.defaultFS` | required | Hadoop configuration property that specifies the default file system to use. |
+| `path_prefix` | optional | The path prefix under which the data is stored. Defaults to the root_path of your Hadoop user if no prefix is provided. |
+| `hadoop.username` | optional | Hadoop configuration property that specifies the user accessing the file system. Defaults to the user that starts the Hadoop process if no user is provided. |
+| `hadoop.security.authentication` | optional | The authentication method used for Hadoop. To use Kerberos, set this to `kerberos`. |
+| `hadoop.kerberos.principal` | optional | The path to your Kerberos principal. |
+| `hadoop.kerberos.keytab` | optional | The path to your Kerberos keytab. |
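The optional Kerberos properties above combine with the required `fs.defaultFS` as follows. A hypothetical sketch assembled only from the parameters in this table (the URI, principal, and keytab path are placeholders):

```sql
CREATE STORAGE VAULT IF NOT EXISTS kerberos_hdfs_vault
    PROPERTIES (
    "type" = "hdfs",
    "fs.defaultFS" = "hdfs://127.0.0.1:8020",
    "hadoop.security.authentication" = "kerberos",
    "hadoop.kerberos.principal" = "doris/example.host@REALM.COM",
    "hadoop.kerberos.keytab" = "/etc/doris/doris.keytab"
    );
```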
 
 ### Example
 
@@ -185,7 +151,21 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
         "provider" = "S3"
         );
     ```
+7. Create an S3 storage vault using MinIO.
+   ```sql
+    CREATE STORAGE VAULT IF NOT EXISTS s3_vault
+        PROPERTIES (
+        "type"="S3",
+        "s3.endpoint"="127.0.0.1:9000",
+        "s3.access_key" = "ak",
+        "s3.secret_key" = "sk",
+        "s3.region" = "us-east-1",
+        "s3.root.path" = "ssb_sf1_p2_s3",
+        "s3.bucket" = "doris-build-1308700295",
+        "provider" = "S3"
+        );
+   ```
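Once the MinIO-backed vault exists, it is referenced like any other vault. A short usage sketch, based on the `SET ... AS DEFAULT STORAGE VAULT` and `storage_vault_name` mechanisms described in managing-storage-vault.md (the table schema is illustrative):

```sql
SET s3_vault AS DEFAULT STORAGE VAULT;

CREATE TABLE t1 (id INT)
DISTRIBUTED BY HASH(id) BUCKETS 1
PROPERTIES ("storage_vault_name" = "s3_vault");
```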
 
 ### Keywords
 
-    CREATE, STORAGE VAULT
\ No newline at end of file
+    CREATE, STORAGE VAULT
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/compute-storage-decoupled/before-deployment.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/compute-storage-decoupled/before-deployment.md
index b255858ab6f..30fd00d46ea 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/compute-storage-decoupled/before-deployment.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/compute-storage-decoupled/before-deployment.md
@@ -70,7 +70,7 @@ Doris 存算分离架构包含三个主要模块:
 
 本节提供了脚本 `fdb_vars.sh` 和 `fdb_ctl.sh` 配置、部署和启动 FDB(FoundationDB)服务的分步指南。您可以下载 
[doris 
tools](http://apache-doris-releases.oss-accelerate.aliyuncs.com/apache-doris-3.0.2-tools.tar.gz)
 并从 `fdb` 目录获取 `fdb_vars.sh` 和 `fdb_ctl.sh`。
 
-#### 5.1 机器要求
+#### 5.1.1 机器要求
 
 通常,至少需要 3 台配备 SSD 的机器来形成具有双数据副本并允许单机故障的 FoundationDB 集群。
 
@@ -78,7 +78,7 @@ Doris 存算分离架构包含三个主要模块:
 如果仅用于开发/测试目的,单台机器就足够了。
 :::
 
-#### 5.2 `fdb_vars.sh` 配置
+#### 5.1.2 `fdb_vars.sh` 配置
 
 ##### 必需的自定义设置
 
@@ -97,7 +97,7 @@ Doris 存算分离架构包含三个主要模块:
 | `MEMORY_LIMIT_GB` | 定义 FDB 进程的内存限制,单位为 GB | 整数 | `MEMORY_LIMIT_GB=16` | 
根据可用内存资源和 FDB 进程的要求调整此值 |
 | `CPU_CORES_LIMIT` | 定义 FDB 进程的 CPU 核心限制 | 整数 | `CPU_CORES_LIMIT=8` | 根据可用的 
CPU 核心数量和 FDB 进程的要求设置此值 |
 
-#### 5.3 部署 FDB 集群
+#### 5.1.3 部署 FDB 集群
 
 使用 `fdb_vars.sh` 配置环境后,您可以在每个节点上使用 `fdb_ctl.sh` 脚本部署 FDB 集群。
 
@@ -107,7 +107,7 @@ Doris 存算分离架构包含三个主要模块:
 
 此命令启动 FDB 集群的部署过程。
 
-### 5.4 启动 FDB 服务
+#### 5.1.4 启动 FDB 服务
 
 FDB 集群部署完成后,您可以使用 `fdb_ctl.sh` 脚本启动 FDB 服务。
 
@@ -117,11 +117,40 @@ FDB 集群部署完成后,您可以使用 `fdb_ctl.sh` 脚本启动 FDB 服务
 
 此命令启动 FDB 服务,使集群工作并获取 FDB 集群连接字符串,后续可以用于配置 MetaService。
 
-### 5.5 安装 OpenJDK 17
+### 5.2 安装 OpenJDK 17
 
 1. 下载 [OpenJDK 
17](https://download.java.net/java/GA/jdk17.0.1/2a2082e5a09d4267845be086888add4f/12/GPL/openjdk-17.0.1_linux-x64_bin.tar.gz)
 2. 解压并设置环境变量 JAVA_HOME.
 
+### 5.3 安装 S3 或 HDFS 服务(可选)
+
+Apache Doris 存算分离模式会将数据存储在 S3 服务或 HDFS 服务上面,如果您已经有相关服务,直接使用即可。如果没有,本文档提供 MinIO 
的简单部署教程:
+
+1. 在 MinIO 
的[下载页面](https://min.io/download?license=agpl&platform=linux)选择合适的版本以及操作系统,下载对应的 
Server 以及 Client 的二进制包或安装包。
+2. 启动 MinIO Server
+   ```bash
+   export MINIO_REGION_NAME=us-east-1
+   export MINIO_ROOT_USER=minio # 在较老版本中,该配置为 MINIO_ACCESS_KEY=minio
+   export MINIO_ROOT_PASSWORD=minioadmin # 在较老版本中,该配置为 
MINIO_SECRET_KEY=minioadmin
+   nohup ./minio server /mnt/data 2>&1 &
+   ```
+3. 配置 MinIO Client
+   ```bash
+   # 如果你使用的是安装包安装的客户端,那么客户端名为 mcli,直接下载客户端二进制包,则其名为 mc
+   ./mc config host add myminio http://127.0.0.1:9000 minio minioadmin
+   ```
+4. 创建一个桶
+   ```bash
+   ./mc mb myminio/doris
+   ```
+5. 验证是否正常工作
+   ```bash
+   # 上传一个文件
+   ./mc mv test_file myminio/doris
+   # 查看这个文件
+   ./mc ls myminio/doris
+   ```
+
 ## 6. 后续步骤
 
 完成上述准备工作后,请参考以下文档继续部署:
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/compute-storage-decoupled/managing-storage-vault.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/compute-storage-decoupled/managing-storage-vault.md
index afe08c4101f..94b9f55ceee 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/compute-storage-decoupled/managing-storage-vault.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/compute-storage-decoupled/managing-storage-vault.md
@@ -30,7 +30,7 @@ Storage Vault 是 Doris 在存算分离模式中所使用的远程共享存储
 
 **语法**
 
-```SQL
+```sql
 CREATE STORAGE VAULT [IF NOT EXISTS] <vault_name>
 PROPERTIES
 ("key" = "value",...)
@@ -44,7 +44,7 @@ PROPERTIES
 
 创建基于 HDFS 的存算分离模式 Doris 集群,需要确保所有的节点(包括 FE / BE 节点、Meta Service) 均有权限访问所指定的 
HDFS,包括提前完成机器的 Kerberos 授权配置和连通性检查(可在对应的每个节点上使用 Hadoop Client 进行测试)等。
 
-```SQL
+```sql
 CREATE STORAGE VAULT IF NOT EXISTS ssb_hdfs_vault
     PROPERTIES (
         "type"="hdfs",                                     -- required
@@ -73,11 +73,13 @@ CREATE STORAGE VAULT IF NOT EXISTS ssb_s3_vault
     );
 ```
 
+更多参数说明及示例可见 
[CREATE-STORAGE-VAULT](../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-STORAGE-VAULT.md)。
+
 ### 查看 Storage Vault 
 
 **语法**
 
-```Plain
+```sql
 SHOW STORAGE VAULTS
 ```
 
@@ -87,7 +89,7 @@ SHOW STORAGE VAULTS
 
 **语法**
 
-```SQL
+```sql
 SET <vault_name> AS DEFAULT STORAGE VAULT
 ```
 
@@ -97,7 +99,7 @@ SET <vault_name> AS DEFAULT STORAGE VAULT
 
 **示例**
 
-```SQL
+```sql
 CREATE TABLE IF NOT EXISTS supplier (
   s_suppkey int(11) NOT NULL COMMENT "",
   s_name varchar(26) NOT NULL COMMENT "",
@@ -133,7 +135,7 @@ Coming soon
 
 **语法**
 
-```SQL
+```sql
 GRANT
     USAGE_PRIV
     ON STORAGE VAULT <vault_name>
@@ -147,7 +149,7 @@ GRANT
 
 **示例**
 
-```SQL
+```sql
 grant usage_priv on storage vault my_storage_vault to user1
 ```
 
@@ -155,7 +157,7 @@ grant usage_priv on storage vault my_storage_vault to user1
 
 **语法**
 
-```SQL
+```sql
 REVOKE 
     USAGE_PRIV
     ON STORAGE VAULT <vault_name>
@@ -166,6 +168,6 @@ REVOKE
 
 **示例**
 
-```SQL
+```sql
 revoke usage_priv on storage vault my_storage_vault from user1
-```
\ No newline at end of file
+```
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/compute-storage-decoupled/overview.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/compute-storage-decoupled/overview.md
index 3ef130ed36b..11ecb9b888b 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/compute-storage-decoupled/overview.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/compute-storage-decoupled/overview.md
@@ -96,4 +96,4 @@ Meta Service 是 Doris 存算分离元数据服务,主要负责处理导入事
 
 **读写隔离**:Doris 的导入会消耗资源,特别是在大数据量和高频导入场景。为了避免查询和导入之间的资源竞争,可以通过 `use @c1`,`use 
@c2`指定查询请求在 C1 上执行,导入请求在 C2 上执行。同时,`c1`计算组可以访问`c2`计算组中新导入的数据。
 
-**写写隔离**:与读写隔离同理,导入和导入之间同样可以进行隔离。例如,当系统中存在高频小量导入和大批量导入时,批量导入往往耗时长,重试成本高,而高频小量导入单次耗时短,重试成本低,为了避免小量导入对批量导入造成干扰,可以通过`use
 @c1`,`use @c2`,将小量导入指定到 `c1` 上执行,批量导入指定到 `c2` 上执行。
\ No newline at end of file
+**写写隔离**:与读写隔离同理,导入和导入之间同样可以进行隔离。例如,当系统中存在高频小量导入和大批量导入时,批量导入往往耗时长,重试成本高,而高频小量导入单次耗时短,重试成本低,为了避免小量导入对批量导入造成干扰,可以通过`use
 @c1`,`use @c2`,将小量导入指定到 `c1` 上执行,批量导入指定到 `c2` 上执行。
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-STORAGE-VAULT.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-STORAGE-VAULT.md
index 708e6b5f76d..6ece30d2525 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-STORAGE-VAULT.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-STORAGE-VAULT.md
@@ -1,7 +1,7 @@
 ---
 {
     "title": "CREATE-STORAGE-VAULT",
-    "language": "en",
+    "language": "zh-CN",
     "toc_min_heading_level": 2,
     "toc_max_heading_level": 4
 }
@@ -39,57 +39,36 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
 
 #### properties
 
-* `type`
-    只允许两种类型的存储库:S3 和 HDFS。-- 必需
+| 参数     | 是否必需 | 描述                     |
+|:-------|:-----|:-----------------------|
+| `type` | 必需   | 只允许两种类型的存储库:S3 和 HDFS。 |
 
 ##### S3 Vault
 
-`s3.endpoint` 
-    用于对象存储的端点。注意,请不要提供带有 http:// 或 https:// 的端点。对于 Azure Blob 存储,端点应该像 
${ak}.blob.core.windows.net/。-- 必需
-
-`s3.region` 
-    您的存储桶的区域。(如果您使用 GCP 或 AZURE,则不需要)。-- 必需
-
-`s3.root.path` 
-    存储数据的路径。-- 必需
-
-`s3.bucket` 
-    您的对象存储账户的存储桶。(如果您使用 Azure,则为 StorageAccount)。-- 必需
-
-`s3.access_key` 
-    您的对象存储账户的访问密钥。(如果您使用 Azure,则为 AccountName)。-- 必需
-
-`s3.secret_key` 
-    您的对象存储账户的秘密密钥。(如果您使用 Azure,则为 AccountKey)。-- 必需
-
-`provider` 
-    提供对象存储服务的云供应商。-- 必需
-
+| 参数 | 是否必需 | 描述 |
+|:-----|:--------|:-----|
+| `s3.endpoint` | 必需 | 用于对象存储的端点。<br/>注意,请不要提供带有 http:// 或 https:// 开头的链接。对于 Azure Blob 存储,链接应该像 ${ak}.blob.core.windows.net/。 |
+| `s3.region` | 必需 | 您的存储桶的区域。(如果您使用 GCP 或 AZURE,则不需要)。 |
+| `s3.root.path` | 必需 | 存储数据的路径。 |
+| `s3.bucket` | 必需 | 您的对象存储账户的存储桶。(如果您使用 Azure,则为 StorageAccount)。 |
+| `s3.access_key` | 必需 | 您的对象存储账户的访问密钥。(如果您使用 Azure,则为 AccountName)。 |
+| `s3.secret_key` | 必需 | 您的对象存储账户的秘密密钥。(如果您使用 Azure,则为 AccountKey)。 |
+| `provider` | 必需 | 提供对象存储服务的云供应商。支持的值有 `COS`、`OSS`、`S3`、`OBS`、`BOS`、`AZURE`、`GCP`。 |
 
 ##### HDFS vault
 
-`fs.defaultFS` 
-    Hadoop 配置属性,指定要使用的默认文件系统。-- 必需
-
-`path_prefix` 
-    存储数据的路径前缀。-- 可选. 如果没有指定则会使用user账户下的默认路径。
-
-`hadoop.username` 
-    Hadoop 配置属性,指定访问文件系统的用户。-- 可选. 如果没有指定则会使用启动hadoop进程的user.
-
-`hadoop.security.authentication` 
-    用于 hadoop 的认证方式。-- 可选. 如果希望使用kerberos则可以填写`kerberos`.
-
-`hadoop.kerberos.principal` 
-    您的 kerberos 主体的路径。-- 可选
-
-`hadoop.kerberos.keytab` 
-    您的 kerberos keytab 的路径。-- 可选
-
+| 参数 | 是否必需 | 描述 |
+|:-----|:--------|:-----|
+| `fs.defaultFS` | 必需 | Hadoop 配置属性,指定要使用的默认文件系统。 |
+| `path_prefix` | 可选 | 存储数据的路径前缀。如果没有指定则会使用 user 账户下的默认路径。 |
+| `hadoop.username` | 可选 | Hadoop 配置属性,指定访问文件系统的用户。如果没有指定则会使用启动 hadoop 进程的 user。 |
+| `hadoop.security.authentication` | 可选 | 用于 hadoop 的认证方式。如果希望使用 kerberos 则可以填写 `kerberos`。 |
+| `hadoop.kerberos.principal` | 可选 | 您的 kerberos 主体的路径。 |
+| `hadoop.kerberos.keytab` | 可选 | 您的 kerberos keytab 的路径。 |
 
 ### 示例
 
-1. create a HDFS storage vault.
+1. 创建 HDFS storage vault。
     ```sql
     CREATE STORAGE VAULT IF NOT EXISTS hdfs_vault
         PROPERTIES (
@@ -98,7 +77,7 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
         );
     ```
 
-2. create a S3 storage vault using azure.
+2. 创建微软 azure S3 storage vault。
     ```sql
     CREATE STORAGE VAULT IF NOT EXISTS s3_vault
         PROPERTIES (
@@ -112,7 +91,7 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
         );
     ```
 
-3. create a S3 storage vault using OSS.
+3. 创建阿里云 OSS S3 storage vault。
     ```sql
     CREATE STORAGE VAULT IF NOT EXISTS s3_vault
         PROPERTIES (
@@ -127,7 +106,7 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
         );
     ```
 
-4. create a S3 storage vault using COS.
+4. 创建腾讯云 COS S3 storage vault。
     ```sql
     CREATE STORAGE VAULT IF NOT EXISTS s3_vault
         PROPERTIES (
@@ -142,7 +121,7 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
         );
     ```
 
-5. create a S3 storage vault using OBS.
+5. 创建华为云 OBS S3 storage vault。
     ```sql
     CREATE STORAGE VAULT IF NOT EXISTS s3_vault
         PROPERTIES (
@@ -157,7 +136,7 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
         );
     ```
 
-6. create a S3 storage vault using AWS.
+6. 创建亚马逊云 S3 storage vault。
     ```sql
     CREATE STORAGE VAULT IF NOT EXISTS s3_vault
         PROPERTIES (
@@ -171,7 +150,21 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
         "provider" = "S3"
         );
     ```
+7. 创建 MinIO S3 storage vault。
+   ```sql
+    CREATE STORAGE VAULT IF NOT EXISTS s3_vault
+        PROPERTIES (
+        "type"="S3",
+        "s3.endpoint"="127.0.0.1:9000",
+        "s3.access_key" = "ak",
+        "s3.secret_key" = "sk",
+        "s3.region" = "us-east-1",
+        "s3.root.path" = "ssb_sf1_p2_s3",
+        "s3.bucket" = "doris-build-1308700295",
+        "provider" = "S3"
+        );
+   ```
 
 ### 关键词
 
-    CREATE, STORAGE VAULT
\ No newline at end of file
+    CREATE, STORAGE VAULT
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/compute-storage-decoupled/before-deployment.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/compute-storage-decoupled/before-deployment.md
index b255858ab6f..30fd00d46ea 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/compute-storage-decoupled/before-deployment.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/compute-storage-decoupled/before-deployment.md
@@ -70,7 +70,7 @@ Doris 存算分离架构包含三个主要模块:
 
 本节提供了脚本 `fdb_vars.sh` 和 `fdb_ctl.sh` 配置、部署和启动 FDB(FoundationDB)服务的分步指南。您可以下载 
[doris 
tools](http://apache-doris-releases.oss-accelerate.aliyuncs.com/apache-doris-3.0.2-tools.tar.gz)
 并从 `fdb` 目录获取 `fdb_vars.sh` 和 `fdb_ctl.sh`。
 
-#### 5.1 机器要求
+#### 5.1.1 机器要求
 
 通常,至少需要 3 台配备 SSD 的机器来形成具有双数据副本并允许单机故障的 FoundationDB 集群。
 
@@ -78,7 +78,7 @@ Doris 存算分离架构包含三个主要模块:
 如果仅用于开发/测试目的,单台机器就足够了。
 :::
 
-#### 5.2 `fdb_vars.sh` 配置
+#### 5.1.2 `fdb_vars.sh` 配置
 
 ##### 必需的自定义设置
 
@@ -97,7 +97,7 @@ Doris 存算分离架构包含三个主要模块:
 | `MEMORY_LIMIT_GB` | 定义 FDB 进程的内存限制,单位为 GB | 整数 | `MEMORY_LIMIT_GB=16` | 
根据可用内存资源和 FDB 进程的要求调整此值 |
 | `CPU_CORES_LIMIT` | 定义 FDB 进程的 CPU 核心限制 | 整数 | `CPU_CORES_LIMIT=8` | 根据可用的 
CPU 核心数量和 FDB 进程的要求设置此值 |
 
-#### 5.3 部署 FDB 集群
+#### 5.1.3 部署 FDB 集群
 
 使用 `fdb_vars.sh` 配置环境后,您可以在每个节点上使用 `fdb_ctl.sh` 脚本部署 FDB 集群。
 
@@ -107,7 +107,7 @@ Doris 存算分离架构包含三个主要模块:
 
 此命令启动 FDB 集群的部署过程。
 
-### 5.4 启动 FDB 服务
+#### 5.1.4 启动 FDB 服务
 
 FDB 集群部署完成后,您可以使用 `fdb_ctl.sh` 脚本启动 FDB 服务。
 
@@ -117,11 +117,40 @@ FDB 集群部署完成后,您可以使用 `fdb_ctl.sh` 脚本启动 FDB 服务
 
 此命令启动 FDB 服务,使集群工作并获取 FDB 集群连接字符串,后续可以用于配置 MetaService。
 
-### 5.5 安装 OpenJDK 17
+### 5.2 安装 OpenJDK 17
 
 1. 下载 [OpenJDK 
17](https://download.java.net/java/GA/jdk17.0.1/2a2082e5a09d4267845be086888add4f/12/GPL/openjdk-17.0.1_linux-x64_bin.tar.gz)
 2. 解压并设置环境变量 JAVA_HOME.
 
+### 5.3 安装 S3 或 HDFS 服务(可选)
+
+Apache Doris 存算分离模式会将数据存储在 S3 服务或 HDFS 服务上面,如果您已经有相关服务,直接使用即可。如果没有,本文档提供 MinIO 
的简单部署教程:
+
+1. 在 MinIO 
的[下载页面](https://min.io/download?license=agpl&platform=linux)选择合适的版本以及操作系统,下载对应的 
Server 以及 Client 的二进制包或安装包。
+2. 启动 MinIO Server
+   ```bash
+   export MINIO_REGION_NAME=us-east-1
+   export MINIO_ROOT_USER=minio # 在较老版本中,该配置为 MINIO_ACCESS_KEY=minio
+   export MINIO_ROOT_PASSWORD=minioadmin # 在较老版本中,该配置为 
MINIO_SECRET_KEY=minioadmin
+   nohup ./minio server /mnt/data 2>&1 &
+   ```
+3. 配置 MinIO Client
+   ```bash
+   # 如果你使用的是安装包安装的客户端,那么客户端名为 mcli,直接下载客户端二进制包,则其名为 mc
+   ./mc config host add myminio http://127.0.0.1:9000 minio minioadmin
+   ```
+4. 创建一个桶
+   ```bash
+   ./mc mb myminio/doris
+   ```
+5. 验证是否正常工作
+   ```bash
+   # 上传一个文件
+   ./mc mv test_file myminio/doris
+   # 查看这个文件
+   ./mc ls myminio/doris
+   ```
+
 ## 6. 后续步骤
 
 完成上述准备工作后,请参考以下文档继续部署:
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/compute-storage-decoupled/managing-storage-vault.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/compute-storage-decoupled/managing-storage-vault.md
index afe08c4101f..94b9f55ceee 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/compute-storage-decoupled/managing-storage-vault.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/compute-storage-decoupled/managing-storage-vault.md
@@ -30,7 +30,7 @@ Storage Vault 是 Doris 在存算分离模式中所使用的远程共享存储
 
 **语法**
 
-```SQL
+```sql
 CREATE STORAGE VAULT [IF NOT EXISTS] <vault_name>
 PROPERTIES
 ("key" = "value",...)
@@ -44,7 +44,7 @@ PROPERTIES
 
 创建基于 HDFS 的存算分离模式 Doris 集群,需要确保所有的节点(包括 FE / BE 节点、Meta Service) 均有权限访问所指定的 
HDFS,包括提前完成机器的 Kerberos 授权配置和连通性检查(可在对应的每个节点上使用 Hadoop Client 进行测试)等。
 
-```SQL
+```sql
 CREATE STORAGE VAULT IF NOT EXISTS ssb_hdfs_vault
     PROPERTIES (
         "type"="hdfs",                                     -- required
@@ -73,11 +73,13 @@ CREATE STORAGE VAULT IF NOT EXISTS ssb_s3_vault
     );
 ```
 
+更多参数说明及示例可见 
[CREATE-STORAGE-VAULT](../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-STORAGE-VAULT.md)。
+
 ### 查看 Storage Vault 
 
 **语法**
 
-```Plain
+```sql
 SHOW STORAGE VAULTS
 ```
 
@@ -87,7 +89,7 @@ SHOW STORAGE VAULTS
 
 **语法**
 
-```SQL
+```sql
 SET <vault_name> AS DEFAULT STORAGE VAULT
 ```
 
@@ -97,7 +99,7 @@ SET <vault_name> AS DEFAULT STORAGE VAULT
 
 **示例**
 
-```SQL
+```sql
 CREATE TABLE IF NOT EXISTS supplier (
   s_suppkey int(11) NOT NULL COMMENT "",
   s_name varchar(26) NOT NULL COMMENT "",
@@ -133,7 +135,7 @@ Coming soon
 
 **语法**
 
-```SQL
+```sql
 GRANT
     USAGE_PRIV
     ON STORAGE VAULT <vault_name>
@@ -147,7 +149,7 @@ GRANT
 
 **示例**
 
-```SQL
+```sql
 grant usage_priv on storage vault my_storage_vault to user1
 ```
 
@@ -155,7 +157,7 @@ grant usage_priv on storage vault my_storage_vault to user1
 
 **语法**
 
-```SQL
+```sql
 REVOKE 
     USAGE_PRIV
     ON STORAGE VAULT <vault_name>
@@ -166,6 +168,6 @@ REVOKE
 
 **示例**
 
-```SQL
+```sql
 revoke usage_priv on storage vault my_storage_vault from user1
-```
\ No newline at end of file
+```
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-STORAGE-VAULT.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-STORAGE-VAULT.md
index 708e6b5f76d..6ece30d2525 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-STORAGE-VAULT.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-STORAGE-VAULT.md
@@ -1,7 +1,7 @@
 ---
 {
     "title": "CREATE-STORAGE-VAULT",
-    "language": "en",
+    "language": "zh-CN",
     "toc_min_heading_level": 2,
     "toc_max_heading_level": 4
 }
@@ -39,57 +39,36 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
 
 #### properties
 
-* `type`
-    只允许两种类型的存储库:S3 和 HDFS。-- 必需
+| 参数     | 是否必需 | 描述                     |
+|:-------|:-----|:-----------------------|
+| `type` | 必需   | 只允许两种类型的存储库:S3 和 HDFS。 |
 
 ##### S3 Vault
 
-`s3.endpoint` 
-    用于对象存储的端点。注意,请不要提供带有 http:// 或 https:// 的端点。对于 Azure Blob 存储,端点应该像 
${ak}.blob.core.windows.net/。-- 必需
-
-`s3.region` 
-    您的存储桶的区域。(如果您使用 GCP 或 AZURE,则不需要)。-- 必需
-
-`s3.root.path` 
-    存储数据的路径。-- 必需
-
-`s3.bucket` 
-    您的对象存储账户的存储桶。(如果您使用 Azure,则为 StorageAccount)。-- 必需
-
-`s3.access_key` 
-    您的对象存储账户的访问密钥。(如果您使用 Azure,则为 AccountName)。-- 必需
-
-`s3.secret_key` 
-    您的对象存储账户的秘密密钥。(如果您使用 Azure,则为 AccountKey)。-- 必需
-
-`provider` 
-    提供对象存储服务的云供应商。-- 必需
-
+| 参数 | 是否必需 | 描述 |
+|:-----|:--------|:-----|
+| `s3.endpoint` | 必需 | 用于对象存储的端点。<br/>注意,请不要提供带有 http:// 或 https:// 开头的链接。对于 Azure Blob 存储,链接应该像 ${ak}.blob.core.windows.net/。 |
+| `s3.region` | 必需 | 您的存储桶的区域。(如果您使用 GCP 或 AZURE,则不需要)。 |
+| `s3.root.path` | 必需 | 存储数据的路径。 |
+| `s3.bucket` | 必需 | 您的对象存储账户的存储桶。(如果您使用 Azure,则为 StorageAccount)。 |
+| `s3.access_key` | 必需 | 您的对象存储账户的访问密钥。(如果您使用 Azure,则为 AccountName)。 |
+| `s3.secret_key` | 必需 | 您的对象存储账户的秘密密钥。(如果您使用 Azure,则为 AccountKey)。 |
+| `provider` | 必需 | 提供对象存储服务的云供应商。支持的值有 `COS`、`OSS`、`S3`、`OBS`、`BOS`、`AZURE`、`GCP`。 |
 
 ##### HDFS vault
 
-`fs.defaultFS` 
-    Hadoop 配置属性,指定要使用的默认文件系统。-- 必需
-
-`path_prefix` 
-    存储数据的路径前缀。-- 可选. 如果没有指定则会使用user账户下的默认路径。
-
-`hadoop.username` 
-    Hadoop 配置属性,指定访问文件系统的用户。-- 可选. 如果没有指定则会使用启动hadoop进程的user.
-
-`hadoop.security.authentication` 
-    用于 hadoop 的认证方式。-- 可选. 如果希望使用kerberos则可以填写`kerberos`.
-
-`hadoop.kerberos.principal` 
-    您的 kerberos 主体的路径。-- 可选
-
-`hadoop.kerberos.keytab` 
-    您的 kerberos keytab 的路径。-- 可选
-
+| 参数 | 是否必需 | 描述 |
+|:-----|:--------|:-----|
+| `fs.defaultFS` | 必需 | Hadoop 配置属性,指定要使用的默认文件系统。 |
+| `path_prefix` | 可选 | 存储数据的路径前缀。如果没有指定则会使用 user 账户下的默认路径。 |
+| `hadoop.username` | 可选 | Hadoop 配置属性,指定访问文件系统的用户。如果没有指定则会使用启动 hadoop 进程的 user。 |
+| `hadoop.security.authentication` | 可选 | 用于 hadoop 的认证方式。如果希望使用 kerberos 则可以填写 `kerberos`。 |
+| `hadoop.kerberos.principal` | 可选 | 您的 kerberos 主体的路径。 |
+| `hadoop.kerberos.keytab` | 可选 | 您的 kerberos keytab 的路径。 |
 
 ### 示例
 
-1. create a HDFS storage vault.
+1. 创建 HDFS storage vault。
     ```sql
     CREATE STORAGE VAULT IF NOT EXISTS hdfs_vault
         PROPERTIES (
@@ -98,7 +77,7 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
         );
     ```
 
-2. create a S3 storage vault using azure.
+2. 创建微软 azure S3 storage vault。
     ```sql
     CREATE STORAGE VAULT IF NOT EXISTS s3_vault
         PROPERTIES (
@@ -112,7 +91,7 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
         );
     ```
 
-3. create a S3 storage vault using OSS.
+3. 创建阿里云 OSS S3 storage vault。
     ```sql
     CREATE STORAGE VAULT IF NOT EXISTS s3_vault
         PROPERTIES (
@@ -127,7 +106,7 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
         );
     ```
 
-4. create a S3 storage vault using COS.
+4. 创建腾讯云 COS S3 storage vault。
     ```sql
     CREATE STORAGE VAULT IF NOT EXISTS s3_vault
         PROPERTIES (
@@ -142,7 +121,7 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
         );
     ```
 
-5. create a S3 storage vault using OBS.
+5. 创建华为云 OBS S3 storage vault。
     ```sql
     CREATE STORAGE VAULT IF NOT EXISTS s3_vault
         PROPERTIES (
@@ -157,7 +136,7 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
         );
     ```
 
-6. create a S3 storage vault using AWS.
+6. 创建亚马逊云 S3 storage vault。
     ```sql
     CREATE STORAGE VAULT IF NOT EXISTS s3_vault
         PROPERTIES (
@@ -171,7 +150,21 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
         "provider" = "S3"
         );
     ```
+7. 创建 MinIO S3 storage vault。
+   ```sql
+    CREATE STORAGE VAULT IF NOT EXISTS s3_vault
+        PROPERTIES (
+        "type"="S3",
+        "s3.endpoint"="127.0.0.1:9000",
+        "s3.access_key" = "ak",
+        "s3.secret_key" = "sk",
+        "s3.region" = "us-east-1",
+        "s3.root.path" = "ssb_sf1_p2_s3",
+        "s3.bucket" = "doris-build-1308700295",
+        "provider" = "S3"
+        );
+   ```
 
 ### 关键词
 
-    CREATE, STORAGE VAULT
\ No newline at end of file
+    CREATE, STORAGE VAULT
diff --git 
a/versioned_docs/version-3.0/compute-storage-decoupled/before-deployment.md 
b/versioned_docs/version-3.0/compute-storage-decoupled/before-deployment.md
index 57eee697749..b2785f7707c 100644
--- a/versioned_docs/version-3.0/compute-storage-decoupled/before-deployment.md
+++ b/versioned_docs/version-3.0/compute-storage-decoupled/before-deployment.md
@@ -1,7 +1,7 @@
 ---
 {
-    "title": "Doris Compute-Storage Decoupled Deployment Preparation",
-    "language": "en"
+  "title": "Doris Compute-Storage Decoupled Deployment Preparation",
+  "language": "en"
 }
 ---
 
@@ -64,11 +64,11 @@ When machine configurations are high, consider mixing FDB, 
FE, and Meta Service,
 
 ## 5. Installation Steps
 
-### 5.1. Install FoundationDB
+### 5.1 Install FoundationDB
 
 This section provides a step-by-step guide to configure, deploy, and start the 
FoundationDB (FDB) service using the provided scripts `fdb_vars.sh` and 
`fdb_ctl.sh`. You can download [doris 
tools](http://apache-doris-releases.oss-accelerate.aliyuncs.com/apache-doris-3.0.2-tools.tar.gz)
 and get `fdb_vars.sh` and `fdb_ctl.sh` from `fdb` directory.
 
-#### 5.1 Machine Requirements
+#### 5.1.1 Machine Requirements
 
 Typically, at least 3 machines equipped with SSDs are required to form a 
FoundationDB cluster with dual data replicas and allow for single machine 
failures.
 
@@ -76,7 +76,7 @@ Typically, at least 3 machines equipped with SSDs are 
required to form a Foundat
 If only for development/testing purposes, a single machine is sufficient.
 :::
 
-#### 5.2 `fdb_vars.sh` Configuration
+#### 5.1.2 `fdb_vars.sh` Configuration
 
 ##### Required Custom Settings
 
@@ -95,7 +95,7 @@ If only for development/testing purposes, a single machine is 
sufficient.
 | `MEMORY_LIMIT_GB` | Define the memory limit for FDB processes in GB | 
Integer | `MEMORY_LIMIT_GB=16` | Adjust this value based on available memory 
resources and FDB process requirements |
 | `CPU_CORES_LIMIT` | Define the CPU core limit for FDB processes | Integer | 
`CPU_CORES_LIMIT=8` | Set this value based on the number of available CPU cores 
and FDB process requirements |
 
-#### 5.3 Deploy FDB Cluster
+#### 5.1.3 Deploy FDB Cluster
 
 After configuring the environment with `fdb_vars.sh`, you can deploy the FDB 
cluster on each node using the `fdb_ctl.sh` script.
 
@@ -105,7 +105,7 @@ After configuring the environment with `fdb_vars.sh`, you 
can deploy the FDB clu
 
 This command initiates the deployment process of the FDB cluster.
 
-### 5.4 Start FDB Service
+#### 5.1.4 Start FDB Service
 
 Once the FDB cluster is deployed, you can start the FDB service using the 
`fdb_ctl.sh` script.
 
@@ -115,11 +115,40 @@ Once the FDB cluster is deployed, you can start the FDB 
service using the `fdb_c
 
 This command starts the FDB service, making the cluster operational and 
obtaining the FDB cluster connection string, which can be used for configuring 
the MetaService.
 
-### 5.5 Install OpenJDK 17
+### 5.2 Install OpenJDK 17
 
 1. Download [OpenJDK 
17](https://download.java.net/java/GA/jdk17.0.1/2a2082e5a09d4267845be086888add4f/12/GPL/openjdk-17.0.1_linux-x64_bin.tar.gz)
 2. Extract and set the environment variable JAVA_HOME.
 
+### 5.3 Install S3 or HDFS (Optional)
+
+Apache Doris in compute-storage decoupled (cloud) mode stores data on S3 or HDFS services. If you already have such a service, you can use it directly. If not, this document provides a simple deployment tutorial for MinIO:
+
+1. Choose the appropriate version and operating system on MinIO's [download page](https://min.io/download?license=agpl&platform=linux), and download the corresponding server and client binaries or installation packages.
+2. Start the MinIO Server
+   ```bash
+   export MINIO_REGION_NAME=us-east-1
+   export MINIO_ROOT_USER=minio # In older versions, the configuration is 
MINIO_ACCESS_KEY=minio
+   export MINIO_ROOT_PASSWORD=minioadmin # In older versions, the 
configuration is MINIO_SECRET_KEY=minioadmin
+   nohup ./minio server /mnt/data 2>&1 &
+   ```
+3. Configure the MinIO Client
+   ```bash
+   # If you installed the client from an installation package, the binary is named mcli; if you downloaded the standalone client binary, it is named mc
+   ./mc config host add myminio http://127.0.0.1:9000 minio minioadmin
+   ```
+4. Create a bucket
+   ```bash
+   ./mc mb myminio/doris
+   ```
+5. Verify that it works properly
+   ```bash
+   # upload a file
+   ./mc mv test_file myminio/doris
+   # list files
+   ./mc ls myminio/doris
+   ```
+
 ## 6. Next Steps
 
 After completing the above preparations, please refer to the following 
documents to continue the deployment:
diff --git 
a/versioned_docs/version-3.0/compute-storage-decoupled/compilation-and-deployment.md
 
b/versioned_docs/version-3.0/compute-storage-decoupled/compilation-and-deployment.md
index 022cb67b6dd..22a2ba5def9 100644
--- 
a/versioned_docs/version-3.0/compute-storage-decoupled/compilation-and-deployment.md
+++ 
b/versioned_docs/version-3.0/compute-storage-decoupled/compilation-and-deployment.md
@@ -298,4 +298,4 @@ SET <storage_vault_name> AS DEFAULT STORAGE VAULT
 ## 7. Notes
 
 - Only the Meta Service process for metadata operations should be configured 
as the `meta_service_endpoint` target for FE and BE.
-- The data recycling function process should not be configured as the 
`meta_service_endpoint` target.
\ No newline at end of file
+- The data recycling function process should not be configured as the 
`meta_service_endpoint` target.
diff --git 
a/versioned_docs/version-3.0/compute-storage-decoupled/managing-storage-vault.md
 
b/versioned_docs/version-3.0/compute-storage-decoupled/managing-storage-vault.md
index 2eb15f760f5..10c90558424 100644
--- 
a/versioned_docs/version-3.0/compute-storage-decoupled/managing-storage-vault.md
+++ 
b/versioned_docs/version-3.0/compute-storage-decoupled/managing-storage-vault.md
@@ -30,7 +30,7 @@ The Storage Vault is a remote shared storage used by Doris in 
a decoupled storag
 
 **Syntax**
 
-```SQL
+```sql
 CREATE STORAGE VAULT [IF NOT EXISTS] <vault_name>
 PROPERTIES
 ("key" = "value",...)
@@ -44,7 +44,7 @@ PROPERTIES
 
 To create an HDFS-based decoupled storage-compute Doris cluster, ensure that 
all nodes (including FE/BE nodes, Meta Service) have permission to access the 
specified HDFS, including completing Kerberos authorization configuration and 
connectivity checks in advance (which can be tested using Hadoop Client on each 
corresponding node).
 
-```SQL
+```sql
 CREATE STORAGE VAULT IF NOT EXISTS ssb_hdfs_vault
     PROPERTIES (
         "type"="hdfs",                                     -- required
@@ -59,7 +59,7 @@ CREATE STORAGE VAULT IF NOT EXISTS ssb_hdfs_vault
 
 **Creating an S3 Storage Vault**
 
-```SQL
+```sql
 CREATE STORAGE VAULT IF NOT EXISTS ssb_s3_vault
     PROPERTIES (
         "type"="S3",                                   -- required
@@ -73,6 +73,8 @@ CREATE STORAGE VAULT IF NOT EXISTS ssb_s3_vault
     );
 ```
 
+More parameter explanations and examples can be found in 
[CREATE-STORAGE-VAULT](../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-STORAGE-VAULT.md).
+
 ### Viewing Storage Vaults
 
 **Syntax**
@@ -87,7 +89,7 @@ The returned result includes 4 columns: the Storage Vault 
name, Storage Vault ID
 
 **Syntax**
 
-```SQL
+```sql
 SET <vault_name> AS DEFAULT STORAGE VAULT
 ```
 
@@ -97,7 +99,7 @@ When creating a table, specify `storage_vault_name` in 
`PROPERTIES`, and the dat
 
 **Example**
 
-```SQL
+```sql
 CREATE TABLE IF NOT EXISTS supplier (
   s_suppkey int(11) NOT NULL COMMENT "",
   s_name varchar(26) NOT NULL COMMENT "",
@@ -133,7 +135,7 @@ Grant a specified MySQL user the usage permission for a 
certain Storage Vault, a
 
 **Syntax**
 
-```SQL
+```sql
 GRANT
     USAGE_PRIV
     ON STORAGE VAULT <vault_name>
@@ -147,7 +149,7 @@ Only Admin users have the authority to execute the `GRANT` 
statement, which is u
 
 **Example**
 
-```SQL
+```sql
 grant usage_priv on storage vault my_storage_vault to user1
 ```
 
@@ -155,7 +157,7 @@ Revoke the Storage Vault permissions of a specified MySQL 
user.
 
 **Syntax**
 
-```SQL
+```sql
 REVOKE 
     USAGE_PRIV
     ON STORAGE VAULT <vault_name>
@@ -166,6 +168,6 @@ Only Admin users have the authority to execute the `REVOKE` 
statement, which is
 
 **Example**
 
-```SQL
+```sql
 revoke usage_priv on storage vault my_storage_vault from user1
-```
\ No newline at end of file
+```
diff --git 
a/versioned_docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-STORAGE-VAULT.md
 
b/versioned_docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-STORAGE-VAULT.md
index c0694eb1c38..ebe51037ba6 100644
--- 
a/versioned_docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-STORAGE-VAULT.md
+++ 
b/versioned_docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-STORAGE-VAULT.md
@@ -40,65 +40,32 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
 
 #### properties
 
-* `type`
-
-   Only two types of vaults are allowed: S3 and HDFS. -- Required
+| Parameter | Required | Description |
+|:----------|:---------|:------------|
+| `type` | required | Only two types of vaults are allowed: `S3` and `HDFS`. |
 
 ##### S3 Vault
 
-* `s3.endpoint`
-
-   The endpoint used for object storage. **Notice**, please don't provide the 
endpoint with any `http://` or `https://`. And for Azure Blob Storage, the 
endpoint should be like `${ak}.blob.core.windows.net/`. -- Required
-
-* `s3.region`
-
-   The region of your bucket.(Not required when you'r using GCP or AZURE). -- 
Required
-
-* `s3.root.path`
-
-   The path where the data would be stored. -- Required
-
-* `s3.bucket`
-
-    The bucket of your object storage account. (StorageAccount if you're using 
Azure). -- Required
-
-* `s3.access_key`
-
-   The access key of your object storage account. (AccountName if you're using 
Azure). -- Required
-
-* `s3.secret_key`
-
-   The secret key of your object storage account. (AccountKey if you're using 
Azure). -- Required
-
-* `provider`
-
-   The cloud vendor which provides the object storage service. -- Required 
+| Parameter | Required | Description |
+|:----------|:---------|:------------|
+| `s3.endpoint` | required | The endpoint used for object storage. <br/>**Notice:** do not include `http://` or `https://` in the endpoint. For Azure Blob Storage, the endpoint should look like `${ak}.blob.core.windows.net/`. |
+| `s3.region` | required | The region of your bucket. (Not required when you're using GCP or AZURE.) |
+| `s3.root.path` | required | The path where the data is stored. |
+| `s3.bucket` | required | The bucket of your object storage account. (StorageAccount if you're using Azure.) |
+| `s3.access_key` | required | The access key of your object storage account. (AccountName if you're using Azure.) |
+| `s3.secret_key` | required | The secret key of your object storage account. (AccountKey if you're using Azure.) |
+| `provider` | required | The cloud vendor that provides the object storage service. Supported values include `COS`, `OSS`, `S3`, `OBS`, `BOS`, `AZURE`, and `GCP`. |
 
 ##### HDFS Vault
 
-* `fs.defaultFS`
-
-   Hadoop configuration property that specifies the default file system to 
use. -- Required
-
-* `path_prefix`
-
-   The path prefix to where the data would be stored. -- optional. It would be 
the root_path of your Hadoop user if you don't provide any prefix.
-
-* `hadoop.username`
-
-   Hadoop configuration property that specifies the user accessing the file 
system. -- optional. It would be the user starting Hadoop process if you don't 
provide any user.
-
-* `hadoop.security.authentication`
-
-   The authentication way used for hadoop. -- optional. If you'd like to use 
kerberos you can provide with `kerboros`.
-
-* `hadoop.kerberos.principal`
-
-   The path to your kerberos principal. -- optional
-
-* `hadoop.kerberos.keytab`
-
-   The path to your kerberos keytab. -- optional
+| Parameter | Required | Description |
+|:----------|:---------|:------------|
+| `fs.defaultFS` | required | Hadoop configuration property that specifies the default file system to use. |
+| `path_prefix` | optional | The path prefix under which the data is stored. Defaults to the root_path of your Hadoop user if no prefix is provided. |
+| `hadoop.username` | optional | Hadoop configuration property that specifies the user accessing the file system. Defaults to the user that starts the Hadoop process if no user is provided. |
+| `hadoop.security.authentication` | optional | The authentication method used for Hadoop. To use Kerberos, set this to `kerberos`. |
+| `hadoop.kerberos.principal` | optional | The path to your Kerberos principal. |
+| `hadoop.kerberos.keytab` | optional | The path to your Kerberos keytab. |
 
 
 ### Example
@@ -185,7 +152,21 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
         "provider" = "S3"
         );
     ```
+7. Create an S3 storage vault using MinIO.
+   ```sql
+    CREATE STORAGE VAULT IF NOT EXISTS s3_vault
+        PROPERTIES (
+        "type"="S3",
+        "s3.endpoint"="127.0.0.1:9000",
+        "s3.access_key" = "ak",
+        "s3.secret_key" = "sk",
+        "s3.region" = "us-east-1",
+        "s3.root.path" = "ssb_sf1_p2_s3",
+        "s3.bucket" = "doris-build-1308700295",
+        "provider" = "S3"
+        );
+   ```
 
 ### Keywords
 
-    CREATE, STORAGE VAULT
\ No newline at end of file
+    CREATE, STORAGE VAULT


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org
For additional commands, e-mail: commits-h...@doris.apache.org
