This is an automated email from the ASF dual-hosted git repository.

kassiez pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git


The following commit(s) were added to refs/heads/master by this push:
     new e311ae6cc1e Shenxiaohu jira new (#1998)
e311ae6cc1e is described below

commit e311ae6cc1e209c9ac32b84b29e13eaeea920398
Author: cool-joker <jokerh...@outlook.com>
AuthorDate: Fri Apr 11 14:11:29 2025 +0800

    Shenxiaohu jira new (#1998)
    
    ## Versions
    
    - [x] dev
    - [x] 3.0
    - [x] 2.1
    - [ ] 2.0
    
    ## Languages
    
    - [ ] Chinese
    - [x] English
    
    ## Docs Checklist
    
    - [ ] Checked by AI
    - [ ] Test Cases Built
---
 .../storage-management/ALTER-STORAGE-POLICY.md     |  46 +++--
 .../storage-management/CREATE-STORAGE-VAULT.md     | 110 +++++-----
 .../SET-DEFAULT-STORAGE-VAULT.md                   |  24 +--
 .../storage-management/SHOW-CACHE-HOTSPOT.md       |  35 ++--
 .../SHOW-STORAGE-POLICY-USING.md                   |  71 +++++++
 .../storage-management/SHOW-WARM-UP-JOB.md         |  22 +-
 .../storage-management/WARM-UP.md                  |  50 +++--
 .../SHOW-STORAGE-POLICY-USING.md                   |  73 +++++++
 .../storage-management/ALTER-STORAGE-VAULT.md      | 228 +++++----------------
 .../SHOW-STORAGE-POLICY-USING.md                   |  73 +++++++
 .../storage-management/ALTER-STORAGE-POLICY.md     |  46 +++--
 .../SHOW-STORAGE-POLICY-USING.md                   |  74 +++++++
 .../storage-management/ALTER-STORAGE-POLICY.md     |  46 +++--
 .../storage-management/CREATE-STORAGE-VAULT.md     | 108 +++++-----
 .../SET-DEFAULT-STORAGE-VAULT.md                   |  24 +--
 .../storage-management/SHOW-CACHE-HOTSPOT.md       |  33 ++-
 .../SHOW-STORAGE-POLICY-USING.md                   |  71 +++++++
 .../storage-management/SHOW-WARM-UP-JOB.md         |  25 +--
 .../storage-management/WARM-UP.md                  |  50 +++--
 19 files changed, 745 insertions(+), 464 deletions(-)

diff --git 
a/docs/sql-manual/sql-statements/cluster-management/storage-management/ALTER-STORAGE-POLICY.md
 
b/docs/sql-manual/sql-statements/cluster-management/storage-management/ALTER-STORAGE-POLICY.md
index b960ac4f0e8..32fa7cb4126 100644
--- 
a/docs/sql-manual/sql-statements/cluster-management/storage-management/ALTER-STORAGE-POLICY.md
+++ 
b/docs/sql-manual/sql-statements/cluster-management/storage-management/ALTER-STORAGE-POLICY.md
@@ -26,29 +26,51 @@ under the License.
 
 ## Description
 
-This statement is used to modify an existing cold and hot separation migration 
strategy. Only root or admin users can modify resources.
+This statement is used to modify an existing hot-cold tiered migration policy. 
Only root or admin users can modify resources.
 
+## Syntax
 ```sql
-ALTER STORAGE POLICY  'policy_name'
-PROPERTIES ("key"="value", ...);
+ALTER STORAGE POLICY '<policy_name>' PROPERTIES ("<key>"="<value>"[, ... ]);
 ```
 
-## Example
+## Required Parameters
 
-1. Modify the name to coolown_datetime Cold and hot separation data migration 
time point:
+
+1. `<policy_name>`  
+> The name of the storage policy. This is the unique identifier of the storage 
policy you want to modify, and an existing policy name must be specified. 
+
+## Optional Parameters
+`PROPERTIES ("<key>"="<value>"[, ... ])` 
+
+1. `retention_days`  
+> Data retention period. Defines the duration for which the data is kept in 
storage. Data exceeding this period will be automatically deleted. 
+
+2. `redundancy_level`  
+> Redundancy level. Defines the number of data replicas to ensure high 
availability and fault tolerance. For example, a value of 2 means each data 
block has two replicas. 
+
+3. `storage_type`  
+> Storage type. Specifies the storage medium used, such as SSD, HDD, or hybrid 
storage. This affects performance and cost. 
+
+4. `cooloff_time`  
+> Cool-off time. The time interval between when data is marked for deletion 
and when it is actually deleted. This helps prevent data loss due to accidental 
operations. 
+
+5. `location_policy`  
+> Geographical location policy. Defines the geographical placement of data, 
such as cross-region replication for disaster recovery. 
+
+## Examples
+
+1. Modify the cooldown_datetime for hot-cold tiered data migration:
 ```sql
 ALTER STORAGE POLICY has_test_policy_to_alter PROPERTIES("cooldown_datetime" = 
"2023-06-08 00:00:00");
 ```
-2. Modify the name to coolown_countdown of hot and cold separation data 
migration of ttl
+2. Modify the cooldown_ttl for hot-cold tiered data migration:
+
 ```sql
 ALTER STORAGE POLICY has_test_policy_to_alter PROPERTIES ("cooldown_ttl" = 
"10000");
+```
+```sql
 ALTER STORAGE POLICY has_test_policy_to_alter PROPERTIES ("cooldown_ttl" = 
"1h");
-ALTER STORAGE POLICY has_test_policy_to_alter PROPERTIES ("cooldown_ttl" = 
"3d");
 ```
-## Keywords
-
 ```sql
-ALTER, STORAGE, POLICY
+ALTER STORAGE POLICY has_test_policy_to_alter PROPERTIES ("cooldown_ttl" = 
"3d");
 ```
-
-## Best Practice
diff --git 
a/docs/sql-manual/sql-statements/cluster-management/storage-management/CREATE-STORAGE-VAULT.md
 
b/docs/sql-manual/sql-statements/cluster-management/storage-management/CREATE-STORAGE-VAULT.md
index 6e4f5657929..49e84708864 100644
--- 
a/docs/sql-manual/sql-statements/cluster-management/storage-management/CREATE-STORAGE-VAULT.md
+++ 
b/docs/sql-manual/sql-statements/cluster-management/storage-management/CREATE-STORAGE-VAULT.md
@@ -26,57 +26,65 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-## CREATE-STORAGE-VAULT
+## Description
 
-### Description
+This command is used to create a storage vault. This document describes the syntax for creating a self-managed storage vault in Doris.
 
-This command is used to create a storage vault. The subject of this document 
describes the syntax for creating Doris self-maintained storage vault.
+
+## Syntax
 
 ```sql
-CREATE STORAGE VAULT [IF NOT EXISTS] vault
-[properties]
+CREATE STORAGE VAULT [IF NOT EXISTS] <vault_name> [ <properties> ]
 ```
 
 
-#### properties
+## Required Parameters
+
+| Parameter     | Description                     |
+|-------|-----------------------|
+| `<vault_name>` |  The name of the storage vault. This is the unique 
identifier for the new storage vault you are creating. |
 
-| param  | is required | desc                                                  
 |
-|:-------|:------------|:-------------------------------------------------------|
-| `type` | required    | Only two types of vaults are allowed: `S3` and 
`HDFS`. |
+## Optional Parameters
+| Parameter   | Description                                                    
     |
+|-------------------|--------------------------------------------------------------|
+| `[IF NOT EXISTS]` | If the specified storage vault already exists, the 
creation operation will not be executed, and no error will be thrown. This 
prevents duplicate creation of the same storage vault. |
+| `<properties>`    | A set of key-value pairs used to set or update specific properties of the storage vault. Each property consists of a key (`<key>`) and a value (`<value>`), separated by an equals sign (`=`). Multiple key-value pairs are separated by commas (`,`). |
 
-##### S3 Vault
+### S3 Vault
 
-| param           | is required | desc                                         
                                                                                
                                                                                
      |
-|:----------------|:------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `s3.endpoint`    | required    | The endpoint used for object storage. 
<br/>**Notice**, please don't provide the endpoint with any `http://` or 
`https://`. And for Azure Blob Storage, the endpoint should be 
`blob.core.windows.net`. |
-| `s3.region`      | required    | The region of your bucket.(If you are using 
GCP or Azure, you can specify us-east-1).                                       
                                                                                
        |
-| `s3.root.path`   | required    | The path where the data would be stored.    
                                                                                
                                            |
-| `s3.bucket`      | required    | The bucket of your object storage account. 
(StorageAccount if you're using Azure).                                         
                                                                                
       |
-| `s3.access_key`  | required    | The access key of your object storage 
account. (AccountName if you're using Azure).                                   
                                                                                
             |
-| `s3.secret_key`  | required    | The secret key of your object storage 
account. (AccountKey if you're using Azure).                                    
                                                                                
            |
-| `provider`       | required    | The cloud vendor which provides the object 
storage service. The supported values include `COS`, `OSS`, `S3`, `OBS`, `BOS`, 
`AZURE`, `GCP`                                                                  
                                                              |
-| `use_path_style` | optional    | Indicate using `path-style URL`(private 
environment recommended) or `virtual-hosted-style URL`(public cloud 
recommended), default `true` (`path-style`)                                     
                                                                          |
+| Parameter              | Required | Description                              
                                                                        |
+|:----------------|:-----|:--------------------------------------------------------------------------------------------------------|
+| `s3.endpoint`    | Required   | The endpoint for object storage. <br/>Note: Do not provide a link starting with `http://` or `https://`. For Azure Blob Storage, the endpoint is fixed as `blob.core.windows.net`. |
+| `s3.region`      | Required   | The region of your storage bucket. (Not 
required if using GCP or AZURE). |
+| `s3.root.path`   | Required   | The path to store data. |
+| `s3.bucket`      | Required   | The bucket of your object storage account. 
(For Azure, this is the StorageAccount). |
+| `s3.access_key`  | Required   | The access key for your object storage 
account. (For Azure, this is the AccountName). |
+| `s3.secret_key`  | Required   | The secret key for your object storage 
account. (For Azure, this is the AccountKey). |
+| `provider`       | Required   | The cloud provider offering the object 
storage service. Supported values are 
`COS`,`OSS`,`S3`,`OBS`,`BOS`,`AZURE`,`GCP` |
+| `use_path_style` | Optional   | Use `path-style URL` (for private deployment environments) or `virtual-hosted-style URL` (recommended for public cloud environments). Default value is `true` (path-style). |
 
-##### HDFS Vault
+### HDFS Vault
 
-| param                            | is required | desc                        
                                                                                
                                                 |
-|:---------------------------------|:------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `fs.defaultFS`                   | required    | Hadoop configuration 
property that specifies the default file system to use.                         
                                                        |
-| `path_prefix`                    | optional    | The path prefix to where 
the data would be stored. It would be the root_path of your Hadoop user if you 
don't provide any prefix.                            |
-| `hadoop.username`                | optional    | Hadoop configuration 
property that specifies the user accessing the file system. It would be the 
user starting Hadoop process if you don't provide any user. |
-| `hadoop.security.authentication` | optional    | The authentication way used 
for hadoop. If you'd like to use kerberos you can provide with `kerboros`.      
                                                 |
-| `hadoop.kerberos.principal`      | optional    | The path to your kerberos 
principal.                                                       |
-| `hadoop.kerberos.keytab`         | optional    | The path to your kerberos 
keytab.                                                       |
+| Parameter                               | Required | Description             
                                       |
+|:---------------------------------|:-----|:------------------------------------------------------|
+| `fs.defaultFS`                   |Required| Hadoop configuration property 
specifying the default file system to use.                             |
+| `path_prefix`                    |Optional| The prefix path for storing 
data. If not specified, the default path under the user account will be used.   
                |
+| `hadoop.username`                |Optional| Hadoop configuration property 
specifying the user to access the file system. If not specified, the user who 
started the Hadoop process will be used. |
+| `hadoop.security.authentication` |Optional| The authentication method for Hadoop. To use Kerberos, specify `kerberos`.      |
+| `hadoop.kerberos.principal`      |Optional| The path to your Kerberos 
principal.      |
+| `hadoop.kerberos.keytab`         |Optional| The path to your Kerberos 
keytab.     |
 
-### Example
+## Examples
+
+1. Create an HDFS storage vault.
 
-1. create a HDFS storage vault.
     ```sql
     CREATE STORAGE VAULT IF NOT EXISTS hdfs_vault_demo
     PROPERTIES (
         "type" = "hdfs",                                     -- required
         "fs.defaultFS" = "hdfs://127.0.0.1:8020",            -- required
-        "path_prefix" = "big/data",                          -- optional
+        "path_prefix" = "big/data",                          -- optional, usually set according to the business name
         "hadoop.username" = "user"                           -- optional
         "hadoop.security.authentication" = "kerberos"        -- optional
         "hadoop.kerberos.principal" = "hadoop/127.0.0.1@XXX" -- optional
@@ -84,7 +92,7 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
     );
     ```
 
-2. create a S3 storage vault using OSS.
+2. Create an OSS storage vault.
     ```sql
     CREATE STORAGE VAULT IF NOT EXISTS oss_demo_vault
     PROPERTIES (
@@ -96,11 +104,11 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
         "s3.root.path" = "oss_demo_vault_prefix",            -- required
         "s3.bucket" = "xxxxxx",                              -- required,  
Your OSS bucket name
         "provider" = "OSS",                                  -- required
-        "use_path_style" = "false"                           -- optional,  OSS 
suggest setting `false`
+        "use_path_style" = "false"                           -- optional, recommended to be false for OSS
     );
     ```
 
-3. create a S3 storage vault using COS.
+3. Create a COS storage vault.
     ```sql
     CREATE STORAGE VAULT IF NOT EXISTS cos_demo_vault
     PROPERTIES (
@@ -112,11 +120,11 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
         "s3.root.path" = "cos_demo_vault_prefix",            -- required
         "s3.bucket" = "xxxxxx",                              -- required,  
Your COS bucket name
         "provider" = "COS",                                  -- required
-        "use_path_style" = "false"                           -- optional,  COS 
suggest setting `false`
+        "use_path_style" = "false"                           -- optional, recommended to be false for COS
     );
     ```
 
-4. create a S3 storage vault using OBS.
+4. Create an OBS storage vault.
     ```sql
     CREATE STORAGE VAULT IF NOT EXISTS obs_demo_vault
     PROPERTIES (
@@ -128,11 +136,11 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
         "s3.root.path" = "obs_demo_vault_prefix",            -- required
         "s3.bucket" = "xxxxxx",                              -- required,  
Your OBS bucket name
         "provider" = "OBS",                                  -- required
-        "use_path_style" = "false"                           -- optional,  OBS 
suggest setting `false`
+        "use_path_style" = "false"                           -- optional, recommended to be false for OBS
     );
     ```
 
-5. create a S3 storage vault using BOS.
+5. Create a BOS storage vault.
     ```sql
     CREATE STORAGE VAULT IF NOT EXISTS obs_demo_vault
     PROPERTIES (
@@ -144,11 +152,11 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
         "s3.root.path" = "bos_demo_vault_prefix",            -- required
         "s3.bucket" = "xxxxxx",                              -- required,  
Your BOS bucket name
         "provider" = "BOS",                                  -- required
-        "use_path_style" = "false"                           -- optional,  BOS 
suggest setting `false`
+        "use_path_style" = "false"                           -- optional, recommended to be false for BOS
     );
     ```
 
-6. create a S3 storage vault using AWS.
+6. Create an S3 storage vault.
     ```sql
     CREATE STORAGE VAULT IF NOT EXISTS s3_demo_vault
     PROPERTIES (
@@ -160,10 +168,11 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
         "s3.root.path" = "s3_demo_vault_prefix",            -- required
         "s3.bucket" = "xxxxxx",                             -- required,  Your 
S3 bucket name
         "provider" = "S3",                                  -- required
-        "use_path_style" = "false"                          -- optional,  S3 
suggest setting `false`
+        "use_path_style" = "false"                          -- optional, recommended to be false for S3
     );
     ```
-7. create a S3 storage vault using MinIO.
+
+7. Create a MinIO storage vault.
    ```sql
     CREATE STORAGE VAULT IF NOT EXISTS minio_demo_vault
     PROPERTIES (
@@ -175,11 +184,11 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
         "s3.root.path" = "minio_demo_vault_prefix",        -- required
         "s3.bucket" = "xxxxxx",                            -- required,  Your 
minio bucket name
         "provider" = "S3",                                 -- required
-        "use_path_style" = "true"                          -- required,  minio 
suggest setting `true`
+        "use_path_style" = "true"                          -- required, recommended to be true for MinIO
     );
    ```
 
-8. create a S3 storage vault using AZURE.
+8. Create an AZURE storage vault.
     ```sql
     CREATE STORAGE VAULT IF NOT EXISTS azure_demo_vault
     PROPERTIES (
@@ -194,7 +203,7 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
     );
     ```
 
-9. create a S3 storage vault using GCP.
+9. Create a GCP storage vault.
     ```sql
     CREATE STORAGE VAULT IF NOT EXISTS gcp_demo_vault
     PROPERTIES (
@@ -209,12 +218,3 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
     );
     ```
 
-**Note**
-
-[The s3.access_key corresponds to the Access ID of the GCP HMAC 
key](https://cloud.google.com/storage/docs/authentication/hmackeys)
-
-[The s3.secret_key corresponds to the Secret of the GCP HMAC 
key](https://cloud.google.com/storage/docs/authentication/hmackeys)
-
-### Keywords
-
-    CREATE, STORAGE VAULT
diff --git 
a/docs/sql-manual/sql-statements/cluster-management/storage-management/SET-DEFAULT-STORAGE-VAULT.md
 
b/docs/sql-manual/sql-statements/cluster-management/storage-management/SET-DEFAULT-STORAGE-VAULT.md
index 4a087444771..340c9c2595d 100644
--- 
a/docs/sql-manual/sql-statements/cluster-management/storage-management/SET-DEFAULT-STORAGE-VAULT.md
+++ 
b/docs/sql-manual/sql-statements/cluster-management/storage-management/SET-DEFAULT-STORAGE-VAULT.md
@@ -26,28 +26,28 @@ under the License.
 
 ## Description
 
-This statement is used to set the default storage vault in Doris. The default 
storage vault is used to store data for internal or system tables. If the 
default storage vault is not set, Doris will not function properly. Once the 
default storage vault is set, it cannot be removed.
+This statement is used to set the default storage vault in Doris. The default 
storage vault is used to store data for internal or system tables. If the 
default storage vault is not set, Doris will not be able to operate normally. 
Once a default storage vault is set, it cannot be removed.
+
 
 ## Syntax
 
 ```sql
-SET vault_name DEFAULT STORAGE VAULT
+SET <vault_name> AS DEFAULT STORAGE VAULT
 ```
 
-> Note:
->
+## Required Parameters
+
+| Parameter Name          | Description                                        
                 |
+|-------------------|--------------------------------------------------------------|
+| `<vault_name>`    | The name of the storage vault. This is the unique 
identifier of the vault you want to set as the default storage vault.           
|
+
+## Usage Notes
 > 1. Only ADMIN users can set the default storage vault.
 
-## Example
+## Examples
 
-1. Set the storage vault named 's3_vault' as the default storage vault.
+1. Set the storage vault named s3_vault as the default storage vault.
 
    ```sql
    SET s3_vault AS DEFAULT STORAGE VAULT;
    ```
-
-## Related Commands
-
-## Keywords
-
-    SET, DEFAULT, STORAGE, VAULT
\ No newline at end of file
diff --git 
a/docs/sql-manual/sql-statements/cluster-management/storage-management/SHOW-CACHE-HOTSPOT.md
 
b/docs/sql-manual/sql-statements/cluster-management/storage-management/SHOW-CACHE-HOTSPOT.md
index 74f2cc53f9d..c15b973ead2 100644
--- 
a/docs/sql-manual/sql-statements/cluster-management/storage-management/SHOW-CACHE-HOTSPOT.md
+++ 
b/docs/sql-manual/sql-statements/cluster-management/storage-management/SHOW-CACHE-HOTSPOT.md
@@ -27,44 +27,45 @@ under the License.
 
 ## Description
 
+
 This statement is used to display the hotspot information of the file cache.
 
 :::info Note
 
 Before version 3.0.4, you could use the `SHOW CACHE HOTSPOT` statement to 
query cache hotspot information statistics. Starting from version 3.0.4, the 
use of the `SHOW CACHE HOTSPOT` statement for cache hotspot information 
statistics is no longer supported. Please directly access the system table 
`__internal_schema.cloud_cache_hotspot` for queries. For detailed usage, refer 
to [MANAGING FILE CACHE](../../../../compute-storage-decoupled/file-cache). 
 
 :::
 
 ## Syntax
 
 
 ```sql
-   SHOW CACHE HOTSPOT '/[<compute_group_name>/<db.table_name>]';
+   SHOW CACHE HOTSPOT '/[<compute_group_name>/<table_name>]';
 ```
 
 ## Parameters
 
-| Parameter Name         | Description                    |
-| ---------------------- | ------------------------------ |
-| `<compute_group_name>` | The name of the compute group. |
-| `<table_name>`         | The name of the table.         |
 
+| Parameter Name         | Description                    |
+|------------------------|--------------------------------|
+| `<compute_group_name>` | The name of the compute group. |
+| `<table_name>`         | The name of the table.         |
+
 ## Examples
 
-1. Display the cache hotspot information for the entire system.
+1. Display cache hotspot information for the entire system:
 
-    ```sql
-    SHOW CACHE HOTSPOT '/';
-    ```
-
-2. Display the cache hotspot information for a specific compute group named 
`my_compute_group`.
+```sql
+SHOW CACHE HOTSPOT '/';
+```
 
+2. Display cache hotspot information for a specific compute group my_compute_group:
 
-    ```sql
-    SHOW CACHE HOTSPOT '/my_compute_group/';
-    ```
+```sql
+SHOW CACHE HOTSPOT '/my_compute_group/';
+```
 
 ## References
 
-- [MANAGING FILE CACHE](../../../../compute-storage-decoupled/file-cache)
-- [WARMUP CACHE](./WARM-UP)
+- [WARMUP 
CACHE](../Database-Administration-Statements/WARM-UP-COMPUTE-GROUP.md)
+- [MANAGING FILE CACHE](../../../compute-storage-decoupled/file-cache.md)
+
diff --git 
a/docs/sql-manual/sql-statements/cluster-management/storage-management/SHOW-STORAGE-POLICY-USING.md
 
b/docs/sql-manual/sql-statements/cluster-management/storage-management/SHOW-STORAGE-POLICY-USING.md
new file mode 100644
index 00000000000..7c5f8278acb
--- /dev/null
+++ 
b/docs/sql-manual/sql-statements/cluster-management/storage-management/SHOW-STORAGE-POLICY-USING.md
@@ -0,0 +1,71 @@
+---
+{
+    "title": "SHOW STORAGE POLICY USING",
+    "language": "en"
+}
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+## Description
+
+View all tables and partitions associated with a specified storage policy.
+
+## Syntax
+
+```sql
+SHOW STORAGE POLICY USING [FOR <policy_name>]
+```
+## Optional Parameters
+| Parameter Name          | Description                                        
                 |
+|-------------------|--------------------------------------------------------------|
+| `<policy_name>` | Specifies the name of the storage policy to query. If 
provided, only the details of the specified storage policy will be displayed; 
if not provided, information for all storage policies will be shown. |
+
+## Examples
+
+1. View all objects with a storage policy enabled:
+   ```sql
+   SHOW STORAGE POLICY USING;
+   ```
+   ```text
+   
+-----------------------+-----------------------------------------+----------------------------------------+------------+
+   | PolicyName            | Database                                | Table   
                               | Partitions |
+   
+-----------------------+-----------------------------------------+----------------------------------------+------------+
+   | test_storage_policy   | regression_test_cold_heat_separation_p2 | 
table_with_storage_policy_1            | ALL        |
+   | test_storage_policy   | regression_test_cold_heat_separation_p2 | 
partition_with_multiple_storage_policy | p201701    |
+   | test_storage_policy_2 | regression_test_cold_heat_separation_p2 | 
partition_with_multiple_storage_policy | p201702    |
+   | test_storage_policy_2 | regression_test_cold_heat_separation_p2 | 
table_with_storage_policy_2            | ALL        |
+   | test_policy           | db2                                     | 
db2_test_1                             | ALL        |
+   
+-----------------------+-----------------------------------------+----------------------------------------+------------+
+   ```
+2. View objects that use the storage policy test_storage_policy:
+
+    ```sql
+    SHOW STORAGE POLICY USING FOR test_storage_policy;
+    ```
+    ```text
+    
+---------------------+-----------+---------------------------------+------------+
+    | PolicyName          | Database  | Table                           | 
Partitions |
+    
+---------------------+-----------+---------------------------------+------------+
+    | test_storage_policy | db_1      | partition_with_storage_policy_1 | 
p201701    |
+    | test_storage_policy | db_1      | table_with_storage_policy_1     | ALL  
      |
+    
+---------------------+-----------+---------------------------------+------------+
+   ```
+
diff --git 
a/docs/sql-manual/sql-statements/cluster-management/storage-management/SHOW-WARM-UP-JOB.md
 
b/docs/sql-manual/sql-statements/cluster-management/storage-management/SHOW-WARM-UP-JOB.md
index e52bb151365..ed4653de3b0 100644
--- 
a/docs/sql-manual/sql-statements/cluster-management/storage-management/SHOW-WARM-UP-JOB.md
+++ 
b/docs/sql-manual/sql-statements/cluster-management/storage-management/SHOW-WARM-UP-JOB.md
@@ -36,17 +36,19 @@ The commands are used to display warm-up jobs in Doris.
 
 ## Parameters
 
-* id : id of the warm-up job.
 
-## Example
+| Parameter Name | Description                |
+|----------------|----------------------------|
+| `id`           | The ID of the warm-up job. |
+
+## Examples
 
-1. View all warmup job
+1. View all warm-up jobs:
 
-    ```sql
+```sql
     SHOW WARM UP JOB;
-    ```
+```
 
-2. View one warmup job with id = 13418
+2. View the warm-up job with ID 13418:
 
 ```sql
     SHOW WARM UP JOB WHERE id = 13418;
@@ -56,11 +58,3 @@ The commands are used to display warm-up jobs in Doris.
 
  - [WARMUP COMPUTE GROUP](./WARM-UP.md)
 
-## References
-
- - [MANAGING FILE CACHE](../../../../compute-storage-decoupled/file-cache)
-
-## Keywords
-
-    SHOW, CACHE, HOTSPOT, COMPUTE GROUP 
-
diff --git 
a/docs/sql-manual/sql-statements/cluster-management/storage-management/WARM-UP.md
 
b/docs/sql-manual/sql-statements/cluster-management/storage-management/WARM-UP.md
index e5b504540b3..c2f39f6cc55 100644
--- 
a/docs/sql-manual/sql-statements/cluster-management/storage-management/WARM-UP.md
+++ 
b/docs/sql-manual/sql-statements/cluster-management/storage-management/WARM-UP.md
@@ -26,56 +26,52 @@ under the License.
 
 ## Description
 
-The WARM UP COMPUTE GROUP statement is used to warm up data in a compute group 
to improve query performance. The warming operation can either retrieve 
resources from another compute group or specify particular tables and 
partitions for warming. The warming operation returns a job ID that can be used 
to track the status of the warming job.
+The `WARM UP COMPUTE GROUP` statement is used to warm up data in a compute 
group to improve query performance. The warm-up operation can fetch resources 
from another compute group or specify particular tables and partitions for 
warming up. The warm-up operation returns a job ID that can be used to track 
the status of the warm-up job.
+
 
 ## Syntax
 
 ```sql
-WARM UP COMPUTE GROUP <destination_compute_group_name> WITH COMPUTE GROUP 
<source_compute_group_name>;
-
+WARM UP COMPUTE GROUP <destination_compute_group_name> WITH COMPUTE GROUP 
<source_compute_group_name> FORCE;
+```
+```sql
 WARM UP COMPUTE GROUP <destination_compute_group_name> WITH <warm_up_list>;
-
+```
+```sql
 warm_up_list ::= warm_up_item [AND warm_up_item...];
-
+```
+```sql
 warm_up_item ::= TABLE <table_name> [PARTITION <partition_name>];
 
 ```
-
 ## Parameters
 
-* destination_compute_group_name: The name of the destination compute group 
that is to be warmed up.
+| Parameter Name                   | Description                                                                        |
+|----------------------------------|------------------------------------------------------------------------------------|
+| `destination_compute_group_name` | The name of the target compute group to be warmed up.                              |
+| `source_compute_group_name`      | The name of the source compute group from which resources are obtained.            |
+| `warm_up_list`                   | A list of specific items to be warmed up, which can include tables and partitions. |
+| `table_name`                     | The name of the table used for warming up.                                         |
+| `partition_name`                 | The name of the partition used for warming up.                                     |
 
-* source_compute_group_name(Optional) The name of the source cluster from 
which resources will be warmed up.
+## Return Value
 
-* warm_up_list: (Optional) A list of specific items to warm up, which can 
include tables and partitions.
+* JobId: The ID of the warm-up job.
 
-* table_name: The name of the table is used to warmup.
+## Examples
 
-* partition_name: The name of the partition is used to warmup.
-
-## Return Values
-
-* JobId: the id of warm-up job.
-
-## Example
-
-1. Warm up a compute group named destination_group_name with a compute group 
named source_group_name.
+1. Warm up the compute group named destination_group_name using the compute group named source_group_name.
 
 ```sql
    WARM UP COMPUTE GROUP destination_group_name WITH COMPUTE GROUP 
source_group_name;
-
 ```
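+
+The FORCE variant shown in the syntax above is invoked the same way. This is a hypothetical sketch, assuming FORCE re-warms the data even when it is already present in the cache:
+
+```sql
+   WARM UP COMPUTE GROUP destination_group_name WITH COMPUTE GROUP source_group_name FORCE;
+```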
 
-2. Warm up a compute group named destination_group with table sales_data and 
customer_info and partition q1_2024 of table orders .
+2. Warm up the compute group named destination_group with the tables sales_data and customer_info, and the partition q1_2024 of the table orders.
 
-```
+```sql
     WARM UP COMPUTE GROUP destination_group WITH 
         TABLE sales_data 
         AND TABLE customer_info 
         AND TABLE orders PARTITION q1_2024;
 
-```
-
-## Keywords
-
-    WARM UP, COMPUTE GROUP, CACHE
+```
\ No newline at end of file
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-statements/cluster-management/storage-management/SHOW-STORAGE-POLICY-USING.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-statements/cluster-management/storage-management/SHOW-STORAGE-POLICY-USING.md
new file mode 100644
index 00000000000..1eb53d07321
--- /dev/null
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-statements/cluster-management/storage-management/SHOW-STORAGE-POLICY-USING.md
@@ -0,0 +1,73 @@
+---
+{
+   "title": "SHOW STORAGE POLICY USING",
+   "language": "zh-CN"
+}
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+
+## 描述
+
+查看所有/指定存储策略关联的表和分区
+
+## 语法
+
+```sql
+SHOW STORAGE POLICY USING [FOR <policy_name>]
+```
+## 可选参数
+| 参数名称          | 描述                                                         |
+|-------------------|--------------------------------------------------------------|
+| `<policy_name>` | 
指定要查询的存储策略名称。如果提供了此参数,则只显示指定存储策略的详细信息;如果不提供此参数,则显示所有存储策略的信息。 |
+
+## 示例
+
+1. 查看所有启用了存储策略的对象
+   ```sql
+   show storage policy using;
+   ```
+   ```text
+   
+-----------------------+-----------------------------------------+----------------------------------------+------------+
+   | PolicyName            | Database                                | Table   
                               | Partitions |
+   
+-----------------------+-----------------------------------------+----------------------------------------+------------+
+   | test_storage_policy   | regression_test_cold_heat_separation_p2 | 
table_with_storage_policy_1            | ALL        |
+   | test_storage_policy   | regression_test_cold_heat_separation_p2 | 
partition_with_multiple_storage_policy | p201701    |
+   | test_storage_policy_2 | regression_test_cold_heat_separation_p2 | 
partition_with_multiple_storage_policy | p201702    |
+   | test_storage_policy_2 | regression_test_cold_heat_separation_p2 | 
table_with_storage_policy_2            | ALL        |
+   | test_policy           | db2                                     | 
db2_test_1                             | ALL        |
+   
+-----------------------+-----------------------------------------+----------------------------------------+------------+
+   ```
+2. 查看使用存储策略 test_storage_policy 的对象
+
+    ```sql
+    show storage policy using for test_storage_policy;
+    ```
+    ```text
+    
+---------------------+-----------+---------------------------------+------------+
+    | PolicyName          | Database  | Table                           | 
Partitions |
+    
+---------------------+-----------+---------------------------------+------------+
+    | test_storage_policy | db_1      | partition_with_storage_policy_1 | 
p201701    |
+    | test_storage_policy | db_1      | table_with_storage_policy_1     | ALL  
      |
+    
+---------------------+-----------+---------------------------------+------------+
+   ```
+
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/sql-manual/sql-statements/cluster-management/storage-management/ALTER-STORAGE-VAULT.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/sql-manual/sql-statements/cluster-management/storage-management/ALTER-STORAGE-VAULT.md
index de74cb765cd..889bb331743 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/sql-manual/sql-statements/cluster-management/storage-management/ALTER-STORAGE-VAULT.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/sql-manual/sql-statements/cluster-management/storage-management/ALTER-STORAGE-VAULT.md
@@ -26,189 +26,61 @@ under the License.
 
 ## 描述
 
-该命令用于创建存储库。本文档的主题描述了创建 Doris 自管理存储库的语法。
-
+更改 Storage Vault 的可修改属性值
 
 ## 语法
 
 ```sql
-CREATE STORAGE VAULT [IF NOT EXISTS] <`vault_name`> [ <`properties`> ]
+ALTER STORAGE VAULT <storage_vault_name>
+PROPERTIES (<storage_vault_property>)
 ```
 
 ## 必选参数
 
-| 参数     | 描述                     |
-|-------|-----------------------|
-| `vault_name` |  存储库的名称。这是您要创建的新存储库的唯一标识符。 |
-
-## 可选参数
-| 参数   | 描述                                                         |
-|-------------------|--------------------------------------------------------------|
-| `[IF NOT EXISTS]` | 如果指定的存储库已经存在,则不会执行创建操作,并且不会抛出错误。这可以防止重复创建相同的存储库。 |
-| `PROPERTIES`      | 
一组键值对,用来设置或更新存储库的具体属性。每个属性由键(`<key>`)和值(`<value>`)组成,并用等号 (`=`) 分隔。多个键值对之间用逗号 
(`,`) 分隔。 |
-
-### S3 Vault
-
-| 参数              | 是否必需 | 描述                                                  
                                                    |
-|:----------------|:-----|:--------------------------------------------------------------------------------------------------------|
-| `s3.endpoint`    | 必需   | 用于对象存储的端点。<br/>注意,请不要提供带有 http:// 或 https:// 
开头的链接。对于 Azure Blob 存储,endpoint 是固定的 blob.core.windows.net。 |
-| `s3.region`      | 必需   | 您的存储桶的区域。(如果您使用 GCP 或 AZURE,则不需要)。 |
-| `s3.root.path`   | 必需   | 存储数据的路径。 |
-| `s3.bucket`      | 必需   | 您的对象存储账户的存储桶。(如果您使用 Azure,则为 StorageAccount)。 |
-| `s3.access_key`  | 必需   | 您的对象存储账户的访问密钥。(如果您使用 Azure,则为 AccountName)。 |
-| `s3.secret_key`  | 必需   | 您的对象存储账户的秘密密钥。(如果您使用 Azure,则为 AccountKey)。 |
-| `provider`       | 必需   | 
提供对象存储服务的云供应商。支持的值有`COS`,`OSS`,`S3`,`OBS`,`BOS`,`AZURE`,`GCP` |
-| `use_path_style` | 可选   | 使用 `path-style URL`(私有化部署环境) 
或者`virtual-hosted-style URL`(公有云环境建议), 默认值 `true` (path-style)                  
                                                                    |
-
-### HDFS vault
-
-| 参数                               | 是否必需 | 描述                                 
                   |
-|:---------------------------------|:-----|:------------------------------------------------------|
-| `fs.defaultFS`                   |必需 | Hadoop 配置属性,指定要使用的默认文件系统。             
                |
-| `path_prefix`                    |可选 | 存储数据的路径前缀。如果没有指定则会使用 user 账户下的默认路径。   
                |
-| `hadoop.username`                |可选 | Hadoop 配置属性,指定访问文件系统的用户。如果没有指定则会使用启动 
hadoop 进程的 user。 |
-| `hadoop.security.authentication` |可选 | 用于 hadoop 的认证方式。如果希望使用 kerberos 
则可以填写`kerberos`。      |
-| `hadoop.kerberos.principal`      |可选 | 您的 kerberos 主体的路径。      |
-| `hadoop.kerberos.keytab`         |可选 | 您的 kerberos keytab 的路径。      |
-
-## 举例
-
-1. 创建 HDFS storage vault。
-    ```sql
-    CREATE STORAGE VAULT IF NOT EXISTS hdfs_vault_demo
-    PROPERTIES (
-        "type" = "hdfs",                                     -- required
-        "fs.defaultFS" = "hdfs://127.0.0.1:8020",            -- required
-        "path_prefix" = "big/data",                          -- optional,  
一般按照业务名称填写
-        "hadoop.username" = "user"                           -- optional
-        "hadoop.security.authentication" = "kerberos"        -- optional
-        "hadoop.kerberos.principal" = "hadoop/127.0.0.1@XXX" -- optional
-        "hadoop.kerberos.keytab" = "/etc/emr.keytab"         -- optional
-    );
-    ```
-
-2. 创建阿里云 OSS storage vault。
-    ```sql
-    CREATE STORAGE VAULT IF NOT EXISTS oss_demo_vault
-    PROPERTIES (
-        "type" = "S3",                                       -- required
-        "s3.endpoint" = "oss-cn-beijing.aliyuncs.com",       -- required
-        "s3.access_key" = "xxxxxx",                          -- required,  
Your OSS access key
-        "s3.secret_key" = "xxxxxx",                          -- required,  
Your OSS secret key
-        "s3.region" = "cn-beijing",                          -- required
-        "s3.root.path" = "oss_demo_vault_prefix",            -- required
-        "s3.bucket" = "xxxxxx",                              -- required,  
Your OSS bucket name
-        "provider" = "OSS",                                  -- required
-        "use_path_style" = "false"                           -- optional,  OSS 
建议设置 false
-    );
-    ```
-
-3. 创建腾讯云 COS storage vault。
-    ```sql
-    CREATE STORAGE VAULT IF NOT EXISTS cos_demo_vault
-    PROPERTIES (
-        "type" = "S3",
-        "s3.endpoint" = "cos.ap-guangzhou.myqcloud.com",     -- required
-        "s3.access_key" = "xxxxxx",                          -- required,  
Your COS access key
-        "s3.secret_key" = "xxxxxx",                          -- required,  
Your COS secret key
-        "s3.region" = "ap-guangzhou",                        -- required
-        "s3.root.path" = "cos_demo_vault_prefix",            -- required
-        "s3.bucket" = "xxxxxx",                              -- required,  
Your COS bucket name
-        "provider" = "COS",                                  -- required
-        "use_path_style" = "false"                           -- optional,  COS 
建议设置 false
-    );
-    ```
-
-4. 创建华为云 OBS storage vault。
-    ```sql
-    CREATE STORAGE VAULT IF NOT EXISTS obs_demo_vault
-    PROPERTIES (
-        "type" = "S3",                                       -- required
-        "s3.endpoint" = "obs.cn-north-4.myhuaweicloud.com",  -- required
-        "s3.access_key" = "xxxxxx",                          -- required,  
Your OBS access key
-        "s3.secret_key" = "xxxxxx",                          -- required,  
Your OBS secret key
-        "s3.region" = "cn-north-4",                          -- required
-        "s3.root.path" = "obs_demo_vault_prefix",            -- required
-        "s3.bucket" = "xxxxxx",                              -- required,  
Your COS bucket name
-        "provider" = "OBS",                                  -- required
-        "use_path_style" = "false"                           -- optional,  OBS 
建议设置 false
-    );
-    ```
-
-5. 创建百度云 BOS storage vault。
-    ```sql
-    CREATE STORAGE VAULT IF NOT EXISTS obs_demo_vault
-    PROPERTIES (
-        "type" = "S3",                                       -- required
-        "s3.endpoint" = "s3.bj.bcebos.com",                  -- required
-        "s3.access_key" = "xxxxxx",                          -- required,  
Your BOS access key
-        "s3.secret_key" = "xxxxxx",                          -- required,  
Your BOS secret key
-        "s3.region" = "bj",                                  -- required
-        "s3.root.path" = "bos_demo_vault_prefix",            -- required
-        "s3.bucket" = "xxxxxx",                              -- required,  
Your BOS bucket name
-        "provider" = "BOS",                                  -- required
-        "use_path_style" = "false"                           -- optional,  BOS 
建议设置 false
-    );
-    ```
-
-6. 创建亚马逊云 S3 storage vault。
-    ```sql
-    CREATE STORAGE VAULT IF NOT EXISTS s3_demo_vault
-    PROPERTIES (
-        "type" = "S3",                                      -- required
-        "s3.endpoint" = "s3.us-east-1.amazonaws.com",       -- required
-        "s3.access_key" = "xxxxxx",                         -- required,  Your 
S3 access key
-        "s3.secret_key" = "xxxxxx",                         -- required,  Your 
OBS secret key
-        "s3.region" = "us-east-1",                          -- required
-        "s3.root.path" = "s3_demo_vault_prefix",            -- required
-        "s3.bucket" = "xxxxxx",                             -- required,  Your 
s3 bucket name
-        "provider" = "S3",                                  -- required
-        "use_path_style" = "false"                          -- optional,  S3 
建议设置 false
-    );
-    ```
-
-7. 创建 MinIO storage vault。
-   ```sql
-    CREATE STORAGE VAULT IF NOT EXISTS minio_demo_vault
-    PROPERTIES (
-        "type" = "S3",                                     -- required
-        "s3.endpoint" = "127.0.0.1:9000",                  -- required
-        "s3.access_key" = "xxxxxx",                        -- required,  Your 
minio access key
-        "s3.secret_key" = "xxxxxx",                        -- required,  Your 
minio secret key
-        "s3.region" = "us-east-1",                         -- required
-        "s3.root.path" = "minio_demo_vault_prefix",        -- required
-        "s3.bucket" = "xxxxxx",                            -- required,  Your 
minio bucket name
-        "provider" = "S3",                                 -- required
-        "use_path_style" = "true"                          -- required,  minio 
建议设置 true
-    );
-   ```
-
-8. 创建微软 AZURE storage vault。
-    ```sql
-    CREATE STORAGE VAULT IF NOT EXISTS azure_demo_vault
-    PROPERTIES (
-        "type" = "S3",                                       -- required
-        "s3.endpoint" = "blob.core.windows.net",             -- required
-        "s3.access_key" = "xxxxxx",                          -- required,  
Your Azure AccountName
-        "s3.secret_key" = "xxxxxx",                          -- required,  
Your Azure AccountKey
-        "s3.region" = "us-east-1",                           -- required
-        "s3.root.path" = "azure_demo_vault_prefix",          -- required
-        "s3.bucket" = "xxxxxx",                              -- required,  
Your Azure StorageAccount
-        "provider" = "AZURE"                                 -- required
-    );
-    ```
-
-9. 创建谷歌 GCP storage vault。
-    ```sql
-    CREATE STORAGE VAULT IF NOT EXISTS gcp_demo_vault
-    PROPERTIES (
-        "type" = "S3",                                       -- required
-        "s3.endpoint" = "storage.googleapis.com",            -- required
-        "s3.access_key" = "xxxxxx",                          -- required
-        "s3.secret_key" = "xxxxxx",                          -- required
-        "s3.region" = "us-east-1",                           -- required
-        "s3.root.path" = "gcp_demo_vault_prefix",            -- required
-        "s3.bucket" = "xxxxxx",                              -- required
-        "provider" = "GCP"                                   -- required
-    );
-    ```
+**<storage_vault_property>**
+
+> - type:可选值为 s3, hdfs
+>
+> 
+>
+> 当 type 为 s3 时,允许出现的属性字段如下:
+>
+> - s3.access_key:s3 vault 的 ak
+> - s3.secret_key:s3 vault 的 sk
+> - vault_name:vault 的名字。
+> - use_path_style:是否允许 path style url,可选值为 true,false。默认值是 false。
+>
+> 
+>
+> 当 type 为 hdfs 时,禁止出现的字段:
+>
+> - path_prefix:存储路径前缀
+> - fs.defaultFS:hdfs name
+
+## 权限控制
+
+执行此 SQL 命令的用户必须至少具有 ADMIN_PRIV 权限。
+
+## 示例
+
+修改 s3 storage vault ak
+
+```sql
+ALTER STORAGE VAULT old_vault_name
+PROPERTIES (
+  "type"="S3",
+  "VAULT_NAME" = "new_vault_name",
+   "s3.access_key" = "new_ak"
+);
+```
+
+修改 hdfs storage vault 的 hadoop.username
+
+```sql
+ALTER STORAGE VAULT old_vault_name
+PROPERTIES (
+  "type"="hdfs",
+  "VAULT_NAME" = "new_vault_name",
+  "hadoop.username" = "hdfs"
+);
+```
\ No newline at end of file
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/sql-manual/sql-statements/cluster-management/storage-management/SHOW-STORAGE-POLICY-USING.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/sql-manual/sql-statements/cluster-management/storage-management/SHOW-STORAGE-POLICY-USING.md
new file mode 100644
index 00000000000..1eb53d07321
--- /dev/null
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/sql-manual/sql-statements/cluster-management/storage-management/SHOW-STORAGE-POLICY-USING.md
@@ -0,0 +1,73 @@
+---
+{
+   "title": "SHOW STORAGE POLICY USING",
+   "language": "zh-CN"
+}
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+
+## 描述
+
+查看所有/指定存储策略关联的表和分区
+
+## 语法
+
+```sql
+SHOW STORAGE POLICY USING [FOR <policy_name>]
+```
+## 可选参数
+| 参数名称          | 描述                                                         |
+|-------------------|--------------------------------------------------------------|
+| `<policy_name>` | 
指定要查询的存储策略名称。如果提供了此参数,则只显示指定存储策略的详细信息;如果不提供此参数,则显示所有存储策略的信息。 |
+
+## 示例
+
+1. 查看所有启用了存储策略的对象
+   ```sql
+   show storage policy using;
+   ```
+   ```text
+   
+-----------------------+-----------------------------------------+----------------------------------------+------------+
+   | PolicyName            | Database                                | Table   
                               | Partitions |
+   
+-----------------------+-----------------------------------------+----------------------------------------+------------+
+   | test_storage_policy   | regression_test_cold_heat_separation_p2 | 
table_with_storage_policy_1            | ALL        |
+   | test_storage_policy   | regression_test_cold_heat_separation_p2 | 
partition_with_multiple_storage_policy | p201701    |
+   | test_storage_policy_2 | regression_test_cold_heat_separation_p2 | 
partition_with_multiple_storage_policy | p201702    |
+   | test_storage_policy_2 | regression_test_cold_heat_separation_p2 | 
table_with_storage_policy_2            | ALL        |
+   | test_policy           | db2                                     | 
db2_test_1                             | ALL        |
+   
+-----------------------+-----------------------------------------+----------------------------------------+------------+
+   ```
+2. 查看使用存储策略 test_storage_policy 的对象
+
+    ```sql
+    show storage policy using for test_storage_policy;
+    ```
+    ```text
+    
+---------------------+-----------+---------------------------------+------------+
+    | PolicyName          | Database  | Table                           | 
Partitions |
+    
+---------------------+-----------+---------------------------------+------------+
+    | test_storage_policy | db_1      | partition_with_storage_policy_1 | 
p201701    |
+    | test_storage_policy | db_1      | table_with_storage_policy_1     | ALL  
      |
+    
+---------------------+-----------+---------------------------------+------------+
+   ```
+
diff --git 
a/versioned_docs/version-2.1/sql-manual/sql-statements/cluster-management/storage-management/ALTER-STORAGE-POLICY.md
 
b/versioned_docs/version-2.1/sql-manual/sql-statements/cluster-management/storage-management/ALTER-STORAGE-POLICY.md
index dfecc2131c7..1a95f6998a5 100644
--- 
a/versioned_docs/version-2.1/sql-manual/sql-statements/cluster-management/storage-management/ALTER-STORAGE-POLICY.md
+++ 
b/versioned_docs/version-2.1/sql-manual/sql-statements/cluster-management/storage-management/ALTER-STORAGE-POLICY.md
@@ -1,6 +1,6 @@
 ---
 {
-"title": "ALTER-POLICY",
+"title": "ALTER STORAGE POLICY",
 "language": "en"
 }
 ---
@@ -28,29 +28,51 @@ under the License.
 
 ## Description
 
-This statement is used to modify an existing cold and hot separation migration 
strategy. Only root or admin users can modify resources.
+This statement is used to modify an existing hot-cold tiered migration policy. 
Only root or admin users can modify resources.
 
+## Syntax
 ```sql
-ALTER STORAGE POLICY  'policy_name'
-PROPERTIES ("key"="value", ...);
+ALTER STORAGE POLICY '<policy_name>' PROPERTIES ("<key>"="<value>"[, ... ]);
 ```
 
+## Required Parameters
+
+
+1. `<policy_name>`
+> The name of the storage policy. This is the unique identifier of the storage policy to modify; an existing policy name must be specified.
+
+## Optional Parameters
+`PROPERTIES ("<key>"="<value>"[, ... ])`
+
+1. `cooldown_datetime`
+> The point in time at which data cools down and is migrated to cold storage, e.g. "2023-06-08 00:00:00".
+
+2. `cooldown_ttl`
+> The countdown after which data cools down, counted from the creation time of the partition. The value can be given in seconds (e.g. "10000") or with a unit suffix such as "1h" or "3d".
+
 ## Examples
 
-1. Modify the name to coolown_datetime Cold and hot separation data migration 
time point:
+1. Modify the cooldown_datetime for hot-cold tiered data migration:
 ```sql
 ALTER STORAGE POLICY has_test_policy_to_alter PROPERTIES("cooldown_datetime" = 
"2023-06-08 00:00:00");
 ```
-2. Modify the name to coolown_countdown of hot and cold separation data 
migration of ttl
+2. Modify the cooldown_ttl countdown for hot-cold tiered data migration:
+
 ```sql
 ALTER STORAGE POLICY has_test_policy_to_alter PROPERTIES ("cooldown_ttl" = 
"10000");
+```
+```sql
 ALTER STORAGE POLICY has_test_policy_to_alter PROPERTIES ("cooldown_ttl" = 
"1h");
-ALTER STORAGE POLICY has_test_policy_to_alter PROPERTIES ("cooldown_ttl" = 
"3d");
 ```
-## Keywords
-
 ```sql
-ALTER, STORAGE, POLICY
+ALTER STORAGE POLICY has_test_policy_to_alter PROPERTIES ("cooldown_ttl" = 
"3d");
 ```
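+
+To verify the result of an ALTER, the related SHOW STORAGE POLICY USING statement can be used. This is a hypothetical check; it returns rows only when the policy is attached to at least one table or partition:
+
+```sql
+SHOW STORAGE POLICY USING FOR has_test_policy_to_alter;
+```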
-
-## Best Practice
diff --git 
a/versioned_docs/version-2.1/sql-manual/sql-statements/cluster-management/storage-management/SHOW-STORAGE-POLICY-USING.md
 
b/versioned_docs/version-2.1/sql-manual/sql-statements/cluster-management/storage-management/SHOW-STORAGE-POLICY-USING.md
new file mode 100644
index 00000000000..b10edf26562
--- /dev/null
+++ 
b/versioned_docs/version-2.1/sql-manual/sql-statements/cluster-management/storage-management/SHOW-STORAGE-POLICY-USING.md
@@ -0,0 +1,74 @@
+---
+{
+    "title": "SHOW STORAGE POLICY USING",
+    "language": "en"
+}
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+
+
+## Description
+
+View the tables and partitions associated with all storage policies, or with a specified storage policy.
+
+## Syntax
+
+```sql
+SHOW STORAGE POLICY USING [FOR <policy_name>]
+```
+## Optional Parameters
+| Parameter Name          | Description                                        
                 |
+|-------------------|--------------------------------------------------------------|
+| `<policy_name>` | Specifies the name of the storage policy to query. If 
provided, only the details of the specified storage policy will be displayed; 
if not provided, information for all storage policies will be shown. |
+
+## Examples
+
+1. View all objects with a storage policy enabled
+   ```sql
+   show storage policy using;
+   ```
+   ```text
+   
+-----------------------+-----------------------------------------+----------------------------------------+------------+
+   | PolicyName            | Database                                | Table   
                               | Partitions |
+   
+-----------------------+-----------------------------------------+----------------------------------------+------------+
+   | test_storage_policy   | regression_test_cold_heat_separation_p2 | 
table_with_storage_policy_1            | ALL        |
+   | test_storage_policy   | regression_test_cold_heat_separation_p2 | 
partition_with_multiple_storage_policy | p201701    |
+   | test_storage_policy_2 | regression_test_cold_heat_separation_p2 | 
partition_with_multiple_storage_policy | p201702    |
+   | test_storage_policy_2 | regression_test_cold_heat_separation_p2 | 
table_with_storage_policy_2            | ALL        |
+   | test_policy           | db2                                     | 
db2_test_1                             | ALL        |
+   
+-----------------------+-----------------------------------------+----------------------------------------+------------+
+   ```
+2. View objects that use the storage policy test_storage_policy
+
+    ```sql
+    show storage policy using for test_storage_policy;
+    ```
+    ```text
+    
+---------------------+-----------+---------------------------------+------------+
+    | PolicyName          | Database  | Table                           | 
Partitions |
+    
+---------------------+-----------+---------------------------------+------------+
+    | test_storage_policy | db_1      | partition_with_storage_policy_1 | 
p201701    |
+    | test_storage_policy | db_1      | table_with_storage_policy_1     | ALL  
      |
+    
+---------------------+-----------+---------------------------------+------------+
+   ```
+
diff --git 
a/versioned_docs/version-3.0/sql-manual/sql-statements/cluster-management/storage-management/ALTER-STORAGE-POLICY.md
 
b/versioned_docs/version-3.0/sql-manual/sql-statements/cluster-management/storage-management/ALTER-STORAGE-POLICY.md
index b960ac4f0e8..32fa7cb4126 100644
--- 
a/versioned_docs/version-3.0/sql-manual/sql-statements/cluster-management/storage-management/ALTER-STORAGE-POLICY.md
+++ 
b/versioned_docs/version-3.0/sql-manual/sql-statements/cluster-management/storage-management/ALTER-STORAGE-POLICY.md
@@ -26,29 +26,51 @@ under the License.
 
 ## Description
 
-This statement is used to modify an existing cold and hot separation migration 
strategy. Only root or admin users can modify resources.
+This statement is used to modify an existing hot-cold tiered migration policy. 
Only root or admin users can modify resources.
 
+## Syntax
 ```sql
-ALTER STORAGE POLICY  'policy_name'
-PROPERTIES ("key"="value", ...);
+ALTER STORAGE POLICY '<policy_name>' PROPERTIES ("<key>"="<value>"[, ... ]);
 ```
 
-## Example
+## Required Parameters
 
-1. Modify the name to coolown_datetime Cold and hot separation data migration 
time point:
+
+1. `<policy_name>`
+> The name of the storage policy. This is the unique identifier of the storage policy to modify; an existing policy name must be specified.
+
+## Optional Parameters
+`PROPERTIES ("<key>"="<value>"[, ... ])`
+
+1. `cooldown_datetime`
+> The point in time at which data cools down and is migrated to cold storage, e.g. "2023-06-08 00:00:00".
+
+2. `cooldown_ttl`
+> The countdown after which data cools down, counted from the creation time of the partition. The value can be given in seconds (e.g. "10000") or with a unit suffix such as "1h" or "3d".
+
+## Examples
+
+1. Modify the cooldown_datetime for hot-cold tiered data migration:
 ```sql
 ALTER STORAGE POLICY has_test_policy_to_alter PROPERTIES("cooldown_datetime" = 
"2023-06-08 00:00:00");
 ```
-2. Modify the name to coolown_countdown of hot and cold separation data 
migration of ttl
+2. Modify the cooldown_ttl countdown for hot-cold tiered data migration:
+
 ```sql
 ALTER STORAGE POLICY has_test_policy_to_alter PROPERTIES ("cooldown_ttl" = 
"10000");
+```
+```sql
 ALTER STORAGE POLICY has_test_policy_to_alter PROPERTIES ("cooldown_ttl" = 
"1h");
-ALTER STORAGE POLICY has_test_policy_to_alter PROPERTIES ("cooldown_ttl" = 
"3d");
 ```
-## Keywords
-
 ```sql
-ALTER, STORAGE, POLICY
+ALTER STORAGE POLICY has_test_policy_to_alter PROPERTIES ("cooldown_ttl" = 
"3d");
 ```
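+
+To verify the result of an ALTER, the related SHOW STORAGE POLICY USING statement can be used. This is a hypothetical check; it returns rows only when the policy is attached to at least one table or partition:
+
+```sql
+SHOW STORAGE POLICY USING FOR has_test_policy_to_alter;
+```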
-
-## Best Practice
diff --git 
a/versioned_docs/version-3.0/sql-manual/sql-statements/cluster-management/storage-management/CREATE-STORAGE-VAULT.md
 
b/versioned_docs/version-3.0/sql-manual/sql-statements/cluster-management/storage-management/CREATE-STORAGE-VAULT.md
index 6e4f5657929..074810d0ef3 100644
--- 
a/versioned_docs/version-3.0/sql-manual/sql-statements/cluster-management/storage-management/CREATE-STORAGE-VAULT.md
+++ 
b/versioned_docs/version-3.0/sql-manual/sql-statements/cluster-management/storage-management/CREATE-STORAGE-VAULT.md
@@ -26,57 +26,64 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-## CREATE-STORAGE-VAULT
+## Description
 
-### Description
+This command is used to create a storage vault. This document describes the syntax for creating a self-managed storage vault in Doris.
 
-This command is used to create a storage vault. The subject of this document 
describes the syntax for creating Doris self-maintained storage vault.
+
+## Syntax
 
 ```sql
-CREATE STORAGE VAULT [IF NOT EXISTS] vault
-[properties]
+CREATE STORAGE VAULT [IF NOT EXISTS] <`vault_name`> [ <`properties`> ]
 ```
 
+## Required Parameters
+
+| Parameter     | Description                     |
+|-------|-----------------------|
+| `vault_name` |  The name of the storage vault. This is the unique identifier 
for the new storage vault you are creating. |
 
-#### properties
+## Optional Parameters
+| Parameter   | Description                                                    
     |
+|-------------------|--------------------------------------------------------------|
+| `[IF NOT EXISTS]` | If the specified storage vault already exists, the 
creation operation will not be executed, and no error will be thrown. This 
prevents duplicate creation of the same storage vault. |
+| `PROPERTIES`      | A set of key-value pairs used to set or update specific properties of the storage vault. Each property consists of a key (`<key>`) and a value (`<value>`), separated by an equals sign (`=`). Multiple key-value pairs are separated by commas (`,`). |
 
-| param  | is required | desc                                                  
 |
-|:-------|:------------|:-------------------------------------------------------|
-| `type` | required    | Only two types of vaults are allowed: `S3` and 
`HDFS`. |
+### S3 Vault
 
-##### S3 Vault
+| Parameter              | Required | Description                              
                                                                        |
+|:----------------|:-----|:--------------------------------------------------------------------------------------------------------|
+| `s3.endpoint`    | Required   | The endpoint for object storage. Note: Do not provide a link starting with `http://` or `https://`. For Azure Blob Storage, the endpoint is fixed as `blob.core.windows.net`. |
+| `s3.region`      | Required   | The region of your storage bucket. (Not 
required if using GCP or AZURE). |
+| `s3.root.path`   | Required   | The path to store data. |
+| `s3.bucket`      | Required   | The bucket of your object storage account. 
(For Azure, this is the StorageAccount). |
+| `s3.access_key`  | Required   | The access key for your object storage 
account. (For Azure, this is the AccountName). |
+| `s3.secret_key`  | Required   | The secret key for your object storage 
account. (For Azure, this is the AccountKey). |
+| `provider`       | Required   | The cloud provider offering the object 
storage service. Supported values are 
`COS`,`OSS`,`S3`,`OBS`,`BOS`,`AZURE`,`GCP` |
+| `use_path_style` | Optional   | Use `path-style URL` (for private deployment environments) or `virtual-hosted-style URL` (recommended for public cloud environments). Default value is `true` (path-style). |
 
-| param           | is required | desc                                         
                                                                                
                                                                                
      |
-|:----------------|:------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `s3.endpoint`    | required    | The endpoint used for object storage. 
<br/>**Notice**, please don't provide the endpoint with any `http://` or 
`https://`. And for Azure Blob Storage, the endpoint should be 
`blob.core.windows.net`. |
-| `s3.region`      | required    | The region of your bucket.(If you are using 
GCP or Azure, you can specify us-east-1).                                       
                                                                                
        |
-| `s3.root.path`   | required    | The path where the data would be stored.    
                                                                                
                                            |
-| `s3.bucket`      | required    | The bucket of your object storage account. 
(StorageAccount if you're using Azure).                                         
                                                                                
       |
-| `s3.access_key`  | required    | The access key of your object storage 
account. (AccountName if you're using Azure).                                   
                                                                                
             |
-| `s3.secret_key`  | required    | The secret key of your object storage 
account. (AccountKey if you're using Azure).                                    
                                                                                
            |
-| `provider`       | required    | The cloud vendor which provides the object 
storage service. The supported values include `COS`, `OSS`, `S3`, `OBS`, `BOS`, 
`AZURE`, `GCP`                                                                  
                                                              |
-| `use_path_style` | optional    | Indicate using `path-style URL`(private 
environment recommended) or `virtual-hosted-style URL`(public cloud 
recommended), default `true` (`path-style`)                                     
                                                                          |
+### HDFS Vault
 
-##### HDFS Vault
+| Parameter                               | Required | Description             
                                       |
+|:---------------------------------|:-----|:------------------------------------------------------|
+| `fs.defaultFS`                   |Required| Hadoop configuration property 
specifying the default file system to use.                             |
+| `path_prefix`                    |Optional| The prefix path for storing 
data. If not specified, the default path under the user account will be used.   
                |
+| `hadoop.username`                |Optional| Hadoop configuration property 
specifying the user to access the file system. If not specified, the user who 
started the Hadoop process will be used. |
+| `hadoop.security.authentication` |Optional| The authentication method for Hadoop. To use Kerberos, specify `kerberos`.      |
+| `hadoop.kerberos.principal`      |Optional| The path to your Kerberos 
principal.      |
+| `hadoop.kerberos.keytab`         |Optional| The path to your Kerberos 
keytab.     |
 
-| param                            | is required | desc                        
                                                                                
                                                 |
-|:---------------------------------|:------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `fs.defaultFS`                   | required    | Hadoop configuration 
property that specifies the default file system to use.                         
                                                        |
-| `path_prefix`                    | optional    | The path prefix to where 
the data would be stored. It would be the root_path of your Hadoop user if you 
don't provide any prefix.                            |
-| `hadoop.username`                | optional    | Hadoop configuration 
property that specifies the user accessing the file system. It would be the 
user starting Hadoop process if you don't provide any user. |
-| `hadoop.security.authentication` | optional    | The authentication way used 
for hadoop. If you'd like to use kerberos you can provide with `kerboros`.      
                                                 |
-| `hadoop.kerberos.principal`      | optional    | The path to your kerberos 
principal.                                                       |
-| `hadoop.kerberos.keytab`         | optional    | The path to your kerberos 
keytab.                                                       |
+## Examples
 
-### Example
+1. Create an HDFS storage vault.
 
-1. create a HDFS storage vault.
     ```sql
     CREATE STORAGE VAULT IF NOT EXISTS hdfs_vault_demo
     PROPERTIES (
         "type" = "hdfs",                                     -- required
         "fs.defaultFS" = "hdfs://127.0.0.1:8020",            -- required
-        "path_prefix" = "big/data",                          -- optional
+        "path_prefix" = "big/data",                          -- optional, usually set based on the business name
         "hadoop.username" = "user"                           -- optional
         "hadoop.security.authentication" = "kerberos"        -- optional
         "hadoop.kerberos.principal" = "hadoop/127.0.0.1@XXX" -- optional
@@ -84,7 +91,7 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
     );
     ```
 
-2. create a S3 storage vault using OSS.
+2. Create an OSS storage vault.
     ```sql
     CREATE STORAGE VAULT IF NOT EXISTS oss_demo_vault
     PROPERTIES (
@@ -96,11 +103,11 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
         "s3.root.path" = "oss_demo_vault_prefix",            -- required
         "s3.bucket" = "xxxxxx",                              -- required,  
Your OSS bucket name
         "provider" = "OSS",                                  -- required
-        "use_path_style" = "false"                           -- optional,  OSS 
suggest setting `false`
+        "use_path_style" = "false"                           -- optional, recommended to be false for OSS
     );
     ```
 
-3. create a S3 storage vault using COS.
+3. Create a COS storage vault.
     ```sql
     CREATE STORAGE VAULT IF NOT EXISTS cos_demo_vault
     PROPERTIES (
@@ -112,11 +119,11 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
         "s3.root.path" = "cos_demo_vault_prefix",            -- required
         "s3.bucket" = "xxxxxx",                              -- required,  
Your COS bucket name
         "provider" = "COS",                                  -- required
-        "use_path_style" = "false"                           -- optional,  COS 
suggest setting `false`
+        "use_path_style" = "false"                           -- optional, recommended to be false for COS
     );
     ```
 
-4. create a S3 storage vault using OBS.
+4. Create an OBS storage vault.
     ```sql
     CREATE STORAGE VAULT IF NOT EXISTS obs_demo_vault
     PROPERTIES (
@@ -128,11 +135,11 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
         "s3.root.path" = "obs_demo_vault_prefix",            -- required
         "s3.bucket" = "xxxxxx",                              -- required,  
Your OBS bucket name
         "provider" = "OBS",                                  -- required
-        "use_path_style" = "false"                           -- optional,  OBS 
suggest setting `false`
+        "use_path_style" = "false"                           -- optional, recommended to be false for OBS
     );
     ```
 
-5. create a S3 storage vault using BOS.
+5. Create a BOS storage vault.
     ```sql
     CREATE STORAGE VAULT IF NOT EXISTS obs_demo_vault
     PROPERTIES (
@@ -144,11 +151,11 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
         "s3.root.path" = "bos_demo_vault_prefix",            -- required
         "s3.bucket" = "xxxxxx",                              -- required,  
Your BOS bucket name
         "provider" = "BOS",                                  -- required
-        "use_path_style" = "false"                           -- optional,  BOS 
suggest setting `false`
+        "use_path_style" = "false"                           -- optional, recommended to be false for BOS
     );
     ```
 
-6. create a S3 storage vault using AWS.
+6. Create an AWS S3 storage vault.
     ```sql
     CREATE STORAGE VAULT IF NOT EXISTS s3_demo_vault
     PROPERTIES (
@@ -160,10 +167,11 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
         "s3.root.path" = "s3_demo_vault_prefix",            -- required
         "s3.bucket" = "xxxxxx",                             -- required,  Your 
S3 bucket name
         "provider" = "S3",                                  -- required
-        "use_path_style" = "false"                          -- optional,  S3 
suggest setting `false`
+        "use_path_style" = "false"                          -- optional, recommended to be false for S3
     );
     ```
-7. create a S3 storage vault using MinIO.
+
+7. Create a MinIO storage vault.
    ```sql
     CREATE STORAGE VAULT IF NOT EXISTS minio_demo_vault
     PROPERTIES (
@@ -175,11 +183,11 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
         "s3.root.path" = "minio_demo_vault_prefix",        -- required
         "s3.bucket" = "xxxxxx",                            -- required,  Your 
minio bucket name
         "provider" = "S3",                                 -- required
-        "use_path_style" = "true"                          -- required,  minio 
suggest setting `true`
+        "use_path_style" = "true"                          -- required, recommended to be true for MinIO
     );
    ```
 
-8. create a S3 storage vault using AZURE.
+8. Create an Azure storage vault.
     ```sql
     CREATE STORAGE VAULT IF NOT EXISTS azure_demo_vault
     PROPERTIES (
@@ -194,7 +202,7 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
     );
     ```
 
-9. create a S3 storage vault using GCP.
+9. Create a GCP storage vault.
     ```sql
     CREATE STORAGE VAULT IF NOT EXISTS gcp_demo_vault
     PROPERTIES (
@@ -209,12 +217,4 @@ CREATE STORAGE VAULT [IF NOT EXISTS] vault
     );
     ```
 
-**Note**
-
-[The s3.access_key corresponds to the Access ID of the GCP HMAC 
key](https://cloud.google.com/storage/docs/authentication/hmackeys)
-
-[The s3.secret_key corresponds to the Secret of the GCP HMAC 
key](https://cloud.google.com/storage/docs/authentication/hmackeys)
-
-### Keywords
 
-    CREATE, STORAGE VAULT
diff --git 
a/versioned_docs/version-3.0/sql-manual/sql-statements/cluster-management/storage-management/SET-DEFAULT-STORAGE-VAULT.md
 
b/versioned_docs/version-3.0/sql-manual/sql-statements/cluster-management/storage-management/SET-DEFAULT-STORAGE-VAULT.md
index 4a087444771..340c9c2595d 100644
--- 
a/versioned_docs/version-3.0/sql-manual/sql-statements/cluster-management/storage-management/SET-DEFAULT-STORAGE-VAULT.md
+++ 
b/versioned_docs/version-3.0/sql-manual/sql-statements/cluster-management/storage-management/SET-DEFAULT-STORAGE-VAULT.md
@@ -26,28 +26,28 @@ under the License.
 
 ## Description
 
-This statement is used to set the default storage vault in Doris. The default 
storage vault is used to store data for internal or system tables. If the 
default storage vault is not set, Doris will not function properly. Once the 
default storage vault is set, it cannot be removed.
+This statement is used to set the default storage vault in Doris. The default 
storage vault is used to store data for internal or system tables. If the 
default storage vault is not set, Doris will not be able to operate normally. 
Once a default storage vault is set, it cannot be removed.
+
 
 ## Syntax
 
 ```sql
-SET vault_name DEFAULT STORAGE VAULT
+SET <vault_name> AS DEFAULT STORAGE VAULT
 ```
 
-> Note:
->
+## Required Parameters
+
+| Parameter Name          | Description                                        
                 |
+|-------------------|--------------------------------------------------------------|
+| `<vault_name>`    | The name of the storage vault. This is the unique 
identifier of the vault you want to set as the default storage vault.           
|
+
+## Usage Notes
 > 1. Only ADMIN users can set the default storage vault.
 
-## Example
+## Examples
 
-1. Set the storage vault named 's3_vault' as the default storage vault.
+1. Set the storage vault named s3_vault as the default storage vault.
 
    ```sql
    SET s3_vault AS DEFAULT STORAGE VAULT;
    ```
-
-## Related Commands
-
-## Keywords
-
-    SET, DEFAULT, STORAGE, VAULT
\ No newline at end of file
diff --git 
a/versioned_docs/version-3.0/sql-manual/sql-statements/cluster-management/storage-management/SHOW-CACHE-HOTSPOT.md
 
b/versioned_docs/version-3.0/sql-manual/sql-statements/cluster-management/storage-management/SHOW-CACHE-HOTSPOT.md
index 74f2cc53f9d..bc9a846949e 100644
--- 
a/versioned_docs/version-3.0/sql-manual/sql-statements/cluster-management/storage-management/SHOW-CACHE-HOTSPOT.md
+++ 
b/versioned_docs/version-3.0/sql-manual/sql-statements/cluster-management/storage-management/SHOW-CACHE-HOTSPOT.md
@@ -33,7 +33,7 @@ This statement is used to display the hotspot information of 
the file cache.
 
 Before version 3.0.4, you could use the `SHOW CACHE HOTSPOT` statement to 
query cache hotspot information statistics. Starting from version 3.0.4, the 
use of the `SHOW CACHE HOTSPOT` statement for cache hotspot information 
statistics is no longer supported. Please directly access the system table 
`__internal_schema.cloud_cache_hotspot` for queries. For detailed usage, refer 
to [MANAGING FILE CACHE](../../../../compute-storage-decoupled/file-cache). 
 
-:::
+
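+As a minimal sketch of the replacement path (assuming a 3.0.4+ cluster; the exact columns of `__internal_schema.cloud_cache_hotspot` depend on your Doris version, so `SELECT *` avoids assuming a schema):
+
+```sql
+-- Query the cache hotspot system table directly instead of SHOW CACHE HOTSPOT.
+SELECT * FROM __internal_schema.cloud_cache_hotspot LIMIT 10;
+```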
 
 ## Syntax
 
@@ -44,27 +44,26 @@ Before version 3.0.4, you could use the `SHOW CACHE 
HOTSPOT` statement to query
 
 ## Parameters
 
-| Parameter Name         | Description                    |
-| ---------------------- | ------------------------------ |
-| `<compute_group_name>` | The name of the compute group. |
-| `<table_name>`         | The name of the table.         |
-
+| Parameter Name         | Description                    |
+|------------------------|--------------------------------|
+| `<compute_group_name>` | The name of the compute group. |
+| `<table_name>`         | The name of the table.         |
+
 ## Examples
 
-1. Display the cache hotspot information for the entire system.
+1. Display cache hotspot information for the entire system:
 
-    ```sql
-    SHOW CACHE HOTSPOT '/';
-    ```
-
-2. Display the cache hotspot information for a specific compute group named 
`my_compute_group`.
+```sql
+SHOW CACHE HOTSPOT '/';
+```
 
+2. Display cache hotspot information for a specific compute group `my_compute_group`:
 
-    ```sql
-    SHOW CACHE HOTSPOT '/my_compute_group/';
-    ```
+```sql
+SHOW CACHE HOTSPOT '/my_compute_group/';
+```
 
 ## References
 
-- [MANAGING FILE CACHE](../../../../compute-storage-decoupled/file-cache)
-- [WARMUP CACHE](./WARM-UP)
+
+- [WARMUP 
CACHE](../Database-Administration-Statements/WARM-UP-COMPUTE-GROUP.md)
+- [MANAGING FILE CACHE](../../../compute-storage-decoupled/file-cache.md)
diff --git 
a/versioned_docs/version-3.0/sql-manual/sql-statements/cluster-management/storage-management/SHOW-STORAGE-POLICY-USING.md
 
b/versioned_docs/version-3.0/sql-manual/sql-statements/cluster-management/storage-management/SHOW-STORAGE-POLICY-USING.md
new file mode 100644
index 00000000000..7c5f8278acb
--- /dev/null
+++ 
b/versioned_docs/version-3.0/sql-manual/sql-statements/cluster-management/storage-management/SHOW-STORAGE-POLICY-USING.md
@@ -0,0 +1,71 @@
+---
+{
+    "title": "SHOW STORAGE POLICY USING",
+    "language": "en"
+}
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+## Description
+
+View all tables and partitions associated with a specified storage policy.
+
+## Syntax
+
+```sql
+SHOW STORAGE POLICY USING [FOR <policy_name>]
+```
+
+## Optional Parameters
+
+| Parameter Name  | Description |
+|-----------------|-------------|
+| `<policy_name>` | Specifies the name of the storage policy to query. If provided, only the tables and partitions using the specified storage policy are displayed; if not provided, information for all storage policies is shown. |
+
+## Examples
+
+1. View all objects with a storage policy enabled:
+   ```sql
+   show storage policy using;
+   ```
+   ```text
+   
+-----------------------+-----------------------------------------+----------------------------------------+------------+
+   | PolicyName            | Database                                | Table   
                               | Partitions |
+   
+-----------------------+-----------------------------------------+----------------------------------------+------------+
+   | test_storage_policy   | regression_test_cold_heat_separation_p2 | 
table_with_storage_policy_1            | ALL        |
+   | test_storage_policy   | regression_test_cold_heat_separation_p2 | 
partition_with_multiple_storage_policy | p201701    |
+   | test_storage_policy_2 | regression_test_cold_heat_separation_p2 | 
partition_with_multiple_storage_policy | p201702    |
+   | test_storage_policy_2 | regression_test_cold_heat_separation_p2 | 
table_with_storage_policy_2            | ALL        |
+   | test_policy           | db2                                     | 
db2_test_1                             | ALL        |
+   
+-----------------------+-----------------------------------------+----------------------------------------+------------+
+   ```
+2. View objects that use the storage policy `test_storage_policy`:
+
+    ```sql
+    show storage policy using for test_storage_policy;
+    ```
+    ```text
+    
+---------------------+-----------+---------------------------------+------------+
+    | PolicyName          | Database  | Table                           | 
Partitions |
+    
+---------------------+-----------+---------------------------------+------------+
+    | test_storage_policy | db_1      | partition_with_storage_policy_1 | 
p201701    |
+    | test_storage_policy | db_1      | table_with_storage_policy_1     | ALL  
      |
+    
+---------------------+-----------+---------------------------------+------------+
+    ```
+
diff --git 
a/versioned_docs/version-3.0/sql-manual/sql-statements/cluster-management/storage-management/SHOW-WARM-UP-JOB.md
 
b/versioned_docs/version-3.0/sql-manual/sql-statements/cluster-management/storage-management/SHOW-WARM-UP-JOB.md
index e52bb151365..931ac9c482d 100644
--- 
a/versioned_docs/version-3.0/sql-manual/sql-statements/cluster-management/storage-management/SHOW-WARM-UP-JOB.md
+++ 
b/versioned_docs/version-3.0/sql-manual/sql-statements/cluster-management/storage-management/SHOW-WARM-UP-JOB.md
@@ -36,31 +36,26 @@ The commands are used to display warm-up jobs in Doris.
 
 ## Parameters
 
-* id : id of the warm-up job.
 
-## Example
+| Parameter Name | Description                |
+|----------------|----------------------------|
+| `id`           | The ID of the warm-up job. |
+
+## Examples
 
-1. View all warmup job
+1. View all warm-up jobs:
 
-    ```sql
+```sql
     SHOW WARM UP JOB;
-    ```
+```
 
-2. View one warmup job with id = 13418
+2. View the warm-up job with ID 13418:
 
 ```sql
-    SHOW WARM UP JOB WHERE id = 13418;
+SHOW WARM UP JOB WHERE id = 13418;
 ```
 
+
 ## Related Commands
 
  - [WARMUP COMPUTE GROUP](./WARM-UP.md)
 
-## References
-
- - [MANAGING FILE CACHE](../../../../compute-storage-decoupled/file-cache)
-
-## Keywords
-
-    SHOW, CACHE, HOTSPOT, COMPUTE GROUP 
-
diff --git 
a/versioned_docs/version-3.0/sql-manual/sql-statements/cluster-management/storage-management/WARM-UP.md
 
b/versioned_docs/version-3.0/sql-manual/sql-statements/cluster-management/storage-management/WARM-UP.md
index e5b504540b3..c2f39f6cc55 100644
--- 
a/versioned_docs/version-3.0/sql-manual/sql-statements/cluster-management/storage-management/WARM-UP.md
+++ 
b/versioned_docs/version-3.0/sql-manual/sql-statements/cluster-management/storage-management/WARM-UP.md
@@ -26,56 +26,52 @@ under the License.
 
 ## Description
 
-The WARM UP COMPUTE GROUP statement is used to warm up data in a compute group 
to improve query performance. The warming operation can either retrieve 
resources from another compute group or specify particular tables and 
partitions for warming. The warming operation returns a job ID that can be used 
to track the status of the warming job.
+The `WARM UP COMPUTE GROUP` statement is used to warm up data in a compute 
group to improve query performance. The warm-up operation can fetch resources 
from another compute group or specify particular tables and partitions for 
warming up. The warm-up operation returns a job ID that can be used to track 
the status of the warm-up job.
+
 
 ## Syntax
 
 ```sql
-WARM UP COMPUTE GROUP <destination_compute_group_name> WITH COMPUTE GROUP 
<source_compute_group_name>;
-
+WARM UP COMPUTE GROUP <destination_compute_group_name> WITH COMPUTE GROUP 
<source_compute_group_name> FORCE;
+```
+```sql
 WARM UP COMPUTE GROUP <destination_compute_group_name> WITH <warm_up_list>;
-
+```
+```sql
 warm_up_list ::= warm_up_item [AND warm_up_item...];
-
+```
+```sql
 warm_up_item ::= TABLE <table_name> [PARTITION <partition_name>];
 
 ```
-
 ## Parameters
 
-* destination_compute_group_name: The name of the destination compute group 
that is to be warmed up.
+| Parameter Name                   | Description                                                                                   |
+|----------------------------------|-----------------------------------------------------------------------------------------------|
+| `destination_compute_group_name` | The name of the target compute group to be warmed up.                                         |
+| `source_compute_group_name`      | (Optional) The name of the source compute group from which resources are obtained.            |
+| `warm_up_list`                   | (Optional) A list of specific items to be warmed up, which can include tables and partitions. |
+| `table_name`                     | The name of the table to warm up.                                                             |
+| `partition_name`                 | The name of the partition to warm up.                                                         |
 
-* source_compute_group_name(Optional) The name of the source cluster from 
which resources will be warmed up.
+## Return Value
 
-* warm_up_list: (Optional) A list of specific items to warm up, which can 
include tables and partitions.
+* JobId: The ID of the warm-up job.
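+
+The returned JobId can be fed into `SHOW WARM UP JOB` to track the job; a brief sketch (the `<job_id>` placeholder stands for whatever value the first statement actually returned):
+
+```sql
+-- Start a warm-up; the statement returns a JobId.
+WARM UP COMPUTE GROUP destination_group_name WITH COMPUTE GROUP source_group_name;
+-- Poll the job status with the returned JobId (replace <job_id> with that value).
+SHOW WARM UP JOB WHERE id = <job_id>;
+```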
 
-* table_name: The name of the table is used to warmup.
+## Examples
 
-* partition_name: The name of the partition is used to warmup.
-
-## Return Values
-
-* JobId: the id of warm-up job.
-
-## Example
-
-1. Warm up a compute group named destination_group_name with a compute group 
named source_group_name.
+1. Warm up the compute group `destination_group_name` using the compute group `source_group_name`:
 
 ```sql
    WARM UP COMPUTE GROUP destination_group_name WITH COMPUTE GROUP 
source_group_name;
-
 ```
 
-2. Warm up a compute group named destination_group with table sales_data and 
customer_info and partition q1_2024 of table orders .
+2. Warm up the tables sales_data and customer_info, and the partition q1_2024 
of the table orders using the compute group named destination_group.
 
-```
+```sql
     WARM UP COMPUTE GROUP destination_group WITH 
         TABLE sales_data 
         AND TABLE customer_info 
         AND TABLE orders PARTITION q1_2024;
 
-```
-
-## Keywords
-
-    WARM UP, COMPUTE GROUP, CACHE
+```
\ No newline at end of file


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org
For additional commands, e-mail: commits-h...@doris.apache.org

