This is an automated email from the ASF dual-hosted git repository.

dataroaring pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git


The following commit(s) were added to refs/heads/master by this push:
     new 9089e407a6 [doc](tiered) Add dynamic partition migration Pick #874 to 2.1 and 3.0 (#1182)
9089e407a6 is described below

commit 9089e407a603f3ebe589cf8764efa0dfc3284aac
Author: deardeng <565620...@qq.com>
AuthorDate: Tue Oct 29 09:58:19 2024 +0800

    [doc](tiered) Add dynamic partition migration Pick #874 to 2.1 and 3.0 (#1182)
    
    pick
    https://github.com/apache/doris-website/pull/874/files
    
    # Versions
    
    
    - [x] 3.0
    - [x] 2.1
    
    
    # Languages
    
    - [x] Chinese
    - [x] English
---
 .../Alter/ALTER-TABLE-PROPERTY.md                  |  11 +
 .../Create/CREATE-DATABASE.md                      |   4 +
 .../Create/CREATE-TABLE.md                         |  14 +-
 .../version-2.1/table-design/data-partition.md     |  34 +--
 .../tiered-storage/diff-disk-medium-migration.md   |  62 ++++++
 .../table-design/tiered-storage/remote-storage.md  | 237 ++++++++++++++++++++
 .../Alter/ALTER-TABLE-PROPERTY.md                  |  11 +
 .../Create/CREATE-DATABASE.md                      |   4 +
 .../Create/CREATE-TABLE.md                         |  12 +-
 .../version-3.0/table-design/data-partition.md     |  35 +--
 .../tiered-storage/diff-disk-medium-migration.md   |  62 ++++++
 .../table-design/tiered-storage/remote-storage.md  | 237 ++++++++++++++++++++
 .../Alter/ALTER-TABLE-PROPERTY.md                  |  11 +
 .../Create/CREATE-DATABASE.md                      |   4 +
 .../Create/CREATE-TABLE.md                         |  12 +-
 .../version-2.1/table-design/data-partition.md     |  41 +---
 .../tiered-storage/diff-disk-medium-migration.md   |  66 ++++++
 .../table-design/tiered-storage/remote-storage.md  | 238 +++++++++++++++++++++
 .../Alter/ALTER-TABLE-PROPERTY.md                  |  11 +
 .../Create/CREATE-DATABASE.md                      |   4 +
 .../Create/CREATE-TABLE.md                         |  12 +-
 .../version-3.0/table-design/data-partition.md     |  41 +---
 .../tiered-storage/diff-disk-medium-migration.md   |  66 ++++++
 .../table-design/tiered-storage/remote-storage.md  | 238 +++++++++++++++++++++
 versioned_sidebars/version-2.1-sidebars.json       |   8 +
 versioned_sidebars/version-3.0-sidebars.json       |   8 +
 26 files changed, 1314 insertions(+), 169 deletions(-)

diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/sql-manual/sql-statements/Data-Definition-Statements/Alter/ALTER-TABLE-PROPERTY.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/sql-manual/sql-statements/Data-Definition-Statements/Alter/ALTER-TABLE-PROPERTY.md
index c3466b10ea..fcccb1e046 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/sql-manual/sql-statements/Data-Definition-Statements/Alter/ALTER-TABLE-PROPERTY.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/sql-manual/sql-statements/Data-Definition-Statements/Alter/ALTER-TABLE-PROPERTY.md
@@ -30,6 +30,17 @@ under the License.
 
 ALTER TABLE PROPERTY
 
+:::caution
+Differences between partition attributes and table attributes
+- Partition attributes generally focus on the number of buckets (buckets), storage medium (storage_medium), replication count (replication_num), and the hot/cold tiering storage policy (storage_policy).
+  - For existing partitions, you can use ALTER TABLE {tableName} MODIFY PARTITION ({partitionName}) SET ({key}={value}) to modify them, but the number of buckets (buckets) cannot be changed.
+  - For not-yet-created dynamic partitions, you can use ALTER TABLE {tableName} SET (dynamic_partition.{key} = {value}) to modify their attributes.
+  - For not-yet-created auto partitions, you can use ALTER TABLE {tableName} SET ({key} = {value}) to modify their attributes.
+  - To change a partition attribute overall, modify it both on the partitions that already exist and on the partitions that have not been created yet.
+- Aside from the attributes above, all others are table-level attributes.
+- For the specific attributes, see [create table attributes](../../../../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-TABLE.md#properties)
+:::
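+
+A minimal sketch of the three ALTER forms above (table name `t1` and partition `p20240101` are hypothetical):
+
+```sql
+-- Existing partition: change its replica count (buckets cannot be changed)
+ALTER TABLE t1 MODIFY PARTITION (p20240101) SET ("replication_num" = "1");
+-- Not-yet-created dynamic partitions: change a dynamic_partition property
+ALTER TABLE t1 SET ("dynamic_partition.buckets" = "4");
+-- Not-yet-created auto partitions: change a table-level property
+ALTER TABLE t1 SET ("replication_num" = "1");
+```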
+
 ### Description
 
 This statement modifies properties of an existing table. The operation is synchronous; the command returning indicates that execution has completed.
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-DATABASE.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-DATABASE.md
index 32977bdf95..7da33873ff 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-DATABASE.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-DATABASE.md
@@ -68,6 +68,10 @@ CREATE DATABASE [IF NOT EXISTS] db_name
    );
    ```
 
+:::caution
+If the CREATE TABLE statement's property keys include replication_allocation or replication_num, the database's default replica distribution policy will not take effect.
+:::
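+
+A minimal sketch of the override behavior (names `db1` and `t1` are hypothetical):
+
+```sql
+CREATE DATABASE IF NOT EXISTS db1
+PROPERTIES ("replication_allocation" = "tag.location.default: 3");
+
+-- The table-level property below takes precedence over the database default
+CREATE TABLE db1.t1 (k1 BIGINT)
+DISTRIBUTED BY HASH (k1) BUCKETS 1
+PROPERTIES ("replication_num" = "1");
+```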
+
 ### Keywords
 
 ```text
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-TABLE.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-TABLE.md
index b3d2fee2f5..a1db67dc51 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-TABLE.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-TABLE.md
@@ -476,17 +476,9 @@ UNIQUE KEY(k1, k2)
 
 * Dynamic partition related
 
-    The relevant dynamic partition parameters are as follows:
-
-    * `dynamic_partition.enable`: Specifies whether table-level dynamic partitioning is enabled. Defaults to true.
-    * `dynamic_partition.time_unit`: Specifies the time unit for dynamically adding partitions; one of DAY, WEEK, MONTH, YEAR, HOUR.
-    * `dynamic_partition.start`: Specifies how many partitions back to delete. The value must be less than 0. Defaults to Integer.MIN_VALUE.
-    * `dynamic_partition.end`: Specifies the number of partitions created in advance. The value must be greater than 0.
-    * `dynamic_partition.prefix`: Specifies the prefix of created partition names; for example, with prefix p, a partition is automatically named like p20200108.
-    * `dynamic_partition.buckets`: Specifies the number of buckets for automatically created partitions.
-    * `dynamic_partition.create_history_partition`: Whether to create history partitions.
-    * `dynamic_partition.history_partition_num`: Specifies the number of history partitions to create.
-    * `dynamic_partition.reserved_history_periods`: Specifies the time ranges of history partitions to retain.
+
+    For dynamic partition parameters, see [Data Partitioning - Dynamic Partitioning](../../../../table-design/data-partition.md#动态分区)
+
 
 ### Example
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/table-design/data-partition.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/table-design/data-partition.md
index f66caf4063..1fce8fa8c5 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/table-design/data-partition.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/table-design/data-partition.md
@@ -345,7 +345,7 @@ ERROR 5025 (HY000): Insert has filtered data in strict mode, tracking_url=......

 Dynamic partitioning only supports Range partitioning on DATE/DATETIME columns.

-Dynamic partitioning suits cases where the time data in the partition column grows in step with the real world. Data can then be flexibly partitioned along a time dimension synchronized with the real world, and automatically tiered into hot/cold storage or recycled according to the settings.
+Dynamic partitioning suits cases where the time data in the partition column grows in step with the real world. Data can then be flexibly partitioned along a time dimension synchronized with the real world.

 For more flexible data-ingestion partitioning that covers more scenarios, see the [Auto Partitioning](#自动分区) feature.
 
@@ -413,6 +413,10 @@ ERROR 5025 (HY000): Insert has filtered data in strict mode, tracking_url=......

   The starting offset of the dynamic partition, a negative number. Depending on the `time_unit` attribute, partitions whose range falls before this offset, relative to the current day (week/month), are deleted. If not set, it defaults to `-2147483648`, i.e., history partitions are not deleted. Whether history partitions between this offset and the current time are created, if missing, depends on `dynamic_partition.create_history_partition`.

+:::caution
+  Note: if history_partition_num is set (> 0), the starting partition for dynamic partition creation is max(start, -history_partition_num), while history partition deletion still keeps partitions back to start, where start < 0.
+:::
+
 - `dynamic_partition.end`**(required)**

   The end offset of the dynamic partition, a positive number. Partitions covering the corresponding range ahead of the current day (week/month), depending on the `time_unit` attribute, are created in advance.
@@ -437,6 +441,8 @@ ERROR 5025 (HY000): Insert has filtered data in strict mode, tracking_url=......

   When `time_unit` is `MONTH`, this parameter specifies the start date of each month, with valid values 1 to 28 (1 = the 1st of each month, 28 = the 28th). The default is 1, i.e., each month starts from the 1st. The 29th, 30th, and 31st are not supported for now, to avoid ambiguity caused by leap years and leap months.

+- Doris supports SSD and HDD tiered storage; see [tiered storage](./tiered-storage/diff-disk-medium-migration.md)
+
 - `dynamic_partition.create_history_partition`

   Defaults to false. When set to true, Doris automatically creates all partitions; see the creation rules below. Meanwhile, the FE parameter `max_dynamic_partition_num` limits the total number of partitions to avoid creating too many at once. When the number of partitions expected to be created exceeds `max_dynamic_partition_num`, the operation is rejected.
@@ -447,26 +453,6 @@ ERROR 5025 (HY000): Insert has filtered data in strict mode, tracking_url=......

   When `create_history_partition` is `true`, this parameter specifies the number of history partitions to create. The default is -1, i.e., unset. **This variable has the same effect as `dynamic_partition.start`; it is recommended to set only one of them at a time.**
 
--  `dynamic_partition.hot_partition_num`
-
-  Specifies how many of the latest partitions are hot partitions. For hot partitions, the system automatically sets their `storage_medium` to SSD and sets `storage_cooldown_time`.
-
-  Note: if there is no SSD disk path under the storage path, configuring this parameter causes dynamic partition creation to fail.
-
-  `hot_partition_num` covers the previous n days and all future partitions.
-
-  For example, suppose today is 2021-05-20, partitioning by day, with dynamic partition properties hot_partition_num=2, end=3, start=-3. The system then automatically creates the following partitions and sets their `storage_medium` and `storage_cooldown_time`:
-
-  ```Plain
-  p20210517:["2021-05-17", "2021-05-18") storage_medium=HDD storage_cooldown_time=9999-12-31 23:59:59
-  p20210518:["2021-05-18", "2021-05-19") storage_medium=HDD storage_cooldown_time=9999-12-31 23:59:59
-  p20210519:["2021-05-19", "2021-05-20") storage_medium=SSD storage_cooldown_time=2021-05-21 00:00:00
-  p20210520:["2021-05-20", "2021-05-21") storage_medium=SSD storage_cooldown_time=2021-05-22 00:00:00
-  p20210521:["2021-05-21", "2021-05-22") storage_medium=SSD storage_cooldown_time=2021-05-23 00:00:00
-  p20210522:["2021-05-22", "2021-05-23") storage_medium=SSD storage_cooldown_time=2021-05-24 00:00:00
-  p20210523:["2021-05-23", "2021-05-24") storage_medium=SSD storage_cooldown_time=2021-05-25 00:00:00
-  ```
-
 -   `dynamic_partition.reserved_history_periods`

   The time ranges of history partitions to retain. When `dynamic_partition.time_unit` is "DAY/WEEK/MONTH/YEAR", set it in the format `[yyyy-MM-dd,yyyy-MM-dd],[...,...]`. When `dynamic_partition.time_unit` is "HOUR", set it in the format `[yyyy-MM-dd HH:mm:ss,yyyy-MM-dd HH:mm:ss],[...,...]`. If not set, it defaults to `"NULL"`.
@@ -494,12 +480,6 @@ ERROR 5025 (HY000): Insert has filtered data in strict mode, tracking_url=......

   the partitions in these two time ranges. Each `[...,...]` in `reserved_history_periods` is a pair of settings; both must be set together, and the first time must not be later than the second.

--   `dynamic_partition.storage_medium`
-
-  Specifies the default storage medium for created dynamic partitions. The default is HDD; SSD can be chosen.
-
-  Note that when set to SSD, the `hot_partition_num` property no longer takes effect; all partitions default to the SSD storage medium with a cooldown time of 9999-12-31 23:59:59.
-
 ### Rules for creating history partitions

 When create_history_partition is true, i.e., the history partition creation feature is enabled, Doris determines the number of history partitions to create based on dynamic_partition.start and dynamic_partition.history_partition_num.
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/table-design/tiered-storage/diff-disk-medium-migration.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/table-design/tiered-storage/diff-disk-medium-migration.md
new file mode 100644
index 0000000000..78bb012a51
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/table-design/tiered-storage/diff-disk-medium-migration.md
@@ -0,0 +1,62 @@
+---
+{
+    "title": "SSD and HDD Tiered Storage",
+    "language": "zh-CN"
+}
+---
+
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+Doris supports migrating data between disk types (HDD, SSD) according to the data's hot/cold characteristics, accelerating read and write performance. Users can set partition parameters to create dynamic partitions on the corresponding BE disk type.
+
+For the dynamic_partition parameters, see [Data Partitioning - Dynamic Partitioning](../../table-design/data-partition.md#动态分区)
+
+`dynamic_partition.hot_partition_num`
+
+:::caution
+  Note: dynamic_partition.storage_medium must be set to HDD, otherwise hot_partition_num will not take effect.
+:::
+
+  Specifies how many of the latest partitions are hot partitions. For hot partitions, the system automatically sets their `storage_medium` to SSD and sets `storage_cooldown_time`.
+
+  Note: if there is no SSD disk path under the storage path, configuring this parameter causes dynamic partition creation to fail.
+
+  `hot_partition_num` covers the previous n days and all future partitions.
+
+  For example, suppose today is 2021-05-20, partitioning by day, with dynamic partition properties hot_partition_num=2, end=3, start=-3. The system then automatically creates the following partitions and sets their `storage_medium` and `storage_cooldown_time`:
+
+  ```Plain
+  p20210517:["2021-05-17", "2021-05-18") storage_medium=HDD storage_cooldown_time=9999-12-31 23:59:59
+  p20210518:["2021-05-18", "2021-05-19") storage_medium=HDD storage_cooldown_time=9999-12-31 23:59:59
+  p20210519:["2021-05-19", "2021-05-20") storage_medium=SSD storage_cooldown_time=2021-05-21 00:00:00
+  p20210520:["2021-05-20", "2021-05-21") storage_medium=SSD storage_cooldown_time=2021-05-22 00:00:00
+  p20210521:["2021-05-21", "2021-05-22") storage_medium=SSD storage_cooldown_time=2021-05-23 00:00:00
+  p20210522:["2021-05-22", "2021-05-23") storage_medium=SSD storage_cooldown_time=2021-05-24 00:00:00
+  p20210523:["2021-05-23", "2021-05-24") storage_medium=SSD storage_cooldown_time=2021-05-25 00:00:00
+  ```
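+
+A minimal sketch of a dynamic partition table combining these properties (table name `example_tiered` is hypothetical):
+
+```sql
+CREATE TABLE example_tiered (
+    dt DATE,
+    v1 BIGINT
+)
+PARTITION BY RANGE (dt) ()
+DISTRIBUTED BY HASH (v1) BUCKETS 3
+PROPERTIES (
+    "dynamic_partition.enable" = "true",
+    "dynamic_partition.time_unit" = "DAY",
+    "dynamic_partition.start" = "-3",
+    "dynamic_partition.end" = "3",
+    "dynamic_partition.prefix" = "p",
+    "dynamic_partition.buckets" = "3",
+    "dynamic_partition.storage_medium" = "HDD",
+    "dynamic_partition.hot_partition_num" = "2"
+);
+```
+
+Here `storage_medium` stays HDD so that `hot_partition_num` can take effect, keeping the 2 most recent (and all future) partitions on SSD.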
+
+
+-   `dynamic_partition.storage_medium`
+
+Specifies the default storage medium for created dynamic partitions. The default is HDD; SSD can be chosen.
+
+:::caution
+  Note: when set to SSD, the `hot_partition_num` property no longer takes effect; all partitions default to the SSD storage medium with a cooldown time of 9999-12-31 23:59:59.
+:::
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/table-design/tiered-storage/remote-storage.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/table-design/tiered-storage/remote-storage.md
new file mode 100644
index 0000000000..21709eea7c
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/table-design/tiered-storage/remote-storage.md
@@ -0,0 +1,237 @@
+---
+{
+    "title": "Remote Storage",
+    "language": "zh-CN"
+}
+---
+
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+## Use Cases
+
+A major upcoming use case is similar to ES log storage: in log scenarios, data is split by date, much of it is cold data that is rarely queried, and the storage cost of this data needs to be reduced. From the perspective of saving storage costs:
+
+-   Regular cloud disks from all cloud vendors are more expensive than object storage.
+
+-   In real-world online use of Doris clusters, the utilization of regular cloud disks cannot reach 100%.
+
+-   Cloud disks are not pay-as-you-go, while object storage can be.
+
+-   High availability on regular cloud disks requires multiple replicas, and a failed replica requires migration. Placing data on object storage avoids these problems, because object storage is shared.
+
+## Solution
+
+Set a Freeze time at the Partition level, indicating how long until the Partition is frozen, and define the Remote storage location used after freezing. On the BE, a daemon thread periodically checks whether tables need to be frozen; after freezing, the data is uploaded to S3-compatible object storage or HDFS.
+
+Cold-hot tiering supports all Doris features; it only places part of the data on object storage to save cost, without sacrificing functionality. It therefore has the following characteristics:
+
+-   Cold data is placed on object storage; users need not worry about data consistency or data security.
+-   Flexible Freeze policy: the cooldown remote storage Property can be applied at the table and Partition levels.
+
+-   Queries need not care where the data lives; if the data is not local, it is pulled from the object store and cached locally on the BE.
+
+-   Replica clone optimization: if the data is stored on objects, a replica clone does not need to pull the stored data to local disk.
+
+-   Remote object space recycling: if a table or partition is deleted, or space is wasted by abnormal conditions during cold-hot tiering, a recycler thread periodically reclaims the space, saving storage resources.
+
+-   Cache optimization: accessed cold data is cached locally on the BE, reaching the query performance of non-tiered storage.
+
+-   BE thread pool optimization: distinguishes local reads from object storage reads, preventing object-read latency from affecting query performance.
+
+## Using a Storage policy
+
+The storage policy is the entry point for the cold-hot tiering feature: users only need to associate a Storage policy with a table or partition, at table creation time or while using Doris, to use cold-hot tiering.
+
+:::tip
+When an S3 RESOURCE is created, a connection check against the remote S3 is performed to ensure the RESOURCE is created correctly.
+:::
+
+The following shows how to create an S3 RESOURCE:
+
+```Plain
+CREATE RESOURCE "remote_s3"
+PROPERTIES
+(
+    "type" = "s3",
+    "s3.endpoint" = "bj.s3.com",
+    "s3.region" = "bj",
+    "s3.bucket" = "test-bucket",
+    "s3.root.path" = "path/to/root",
+    "s3.access_key" = "bbb",
+    "s3.secret_key" = "aaaa",
+    "s3.connection.maximum" = "50",
+    "s3.connection.request.timeout" = "3000",
+    "s3.connection.timeout" = "1000"
+);
+
+CREATE STORAGE POLICY test_policy
+PROPERTIES(
+    "storage_resource" = "remote_s3",
+    "cooldown_ttl" = "1d"
+);
+
+CREATE TABLE IF NOT EXISTS create_table_use_created_policy 
+(
+    k1 BIGINT,
+    k2 LARGEINT,
+    v1 VARCHAR(2048)
+)
+UNIQUE KEY(k1)
+DISTRIBUTED BY HASH (k1) BUCKETS 3
+PROPERTIES(
+    "storage_policy" = "test_policy"
+);
+```
+
+And how to create an HDFS RESOURCE:
+
+```Plain
+CREATE RESOURCE "remote_hdfs" PROPERTIES (
+    "type"="hdfs",
+    "fs.defaultFS"="fs_host:default_fs_port",
+    "hadoop.username"="hive",
+    "hadoop.password"="hive",
+    "dfs.nameservices" = "my_ha",
+    "dfs.ha.namenodes.my_ha" = "my_namenode1, my_namenode2",
+    "dfs.namenode.rpc-address.my_ha.my_namenode1" = "nn1_host:rpc_port",
+    "dfs.namenode.rpc-address.my_ha.my_namenode2" = "nn2_host:rpc_port",
+    "dfs.client.failover.proxy.provider.my_ha" = "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
+);
+
+CREATE STORAGE POLICY test_policy PROPERTIES (
+    "storage_resource" = "remote_hdfs",
+    "cooldown_ttl" = "300"
+);
+
+CREATE TABLE IF NOT EXISTS create_table_use_created_policy (
+    k1 BIGINT,
+    k2 LARGEINT,
+    v1 VARCHAR(2048)
+)
+UNIQUE KEY(k1)
+DISTRIBUTED BY HASH (k1) BUCKETS 3
+PROPERTIES(
+    "storage_policy" = "test_policy"
+);
+```
+
+Or associate a Storage policy with an existing table:
+
+```Plain
+ALTER TABLE create_table_not_have_policy SET ("storage_policy" = "test_policy");
+```
+
+Or associate a Storage policy with an existing partition:
+
+```Plain
+ALTER TABLE create_table_partition MODIFY PARTITION (*) SET ("storage_policy" = "test_policy");
+```
+
+:::tip
+Note: if different Storage Policies are specified for the whole Table and for some Partitions at table creation time, the Storage policy set on those Partitions is ignored, and all Partitions of the table use the table's Policy. If you need one Partition's Policy to differ from the others, modify it with the method above for associating a Storage policy with an existing Partition.
+
+For details, see the docs [RESOURCE](../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-RESOURCE), [POLICY](../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-POLICY), [CREATE TABLE](../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-TABLE), and [ALTER TABLE](../sql-manual/sql-statements/Data-Definition-Statements/Alter/ALTER-TABLE-COLUMN), which describe them in detail.
+:::
+
+### Limitations
+
+-   A single table or Partition can be associated with only one Storage policy. Once associated, the Storage policy cannot be dropped until the association between the two is removed first.
+
+-   A Storage policy does not support modifying the data storage path information of the associated object, such as the bucket, endpoint, and root_path.
+
+-   Storage policies support creation, modification, and deletion; before deletion, make sure no table references the Storage policy.
+
+-   The Unique model does not support setting a Storage policy when the Merge-on-Write feature is enabled.
+
+## Object Size Occupied by Cold Data
+
+Method 1: `show proc '/backends'` shows the size each BE has uploaded to objects, in the RemoteUsedCapacity field; this method has some latency.
+
+Method 2: `show tablets from tableName` shows the object size occupied by each tablet of the table, in the RemoteDataSize field.
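+
+For example (`example_table` is a hypothetical table name):
+
+```sql
+SHOW PROC '/backends';              -- check the RemoteUsedCapacity field
+SHOW TABLETS FROM example_table;    -- check the RemoteDataSize field
+```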
+
+## Cold Data Cache
+
+As mentioned above, a cache is introduced for cold data to optimize query performance and save object storage resources. On the first hit after cooldown, Doris reloads the cooled data onto the BE's local disk. The cache has the following properties:
+
+-   The cache is actually stored on the BE disk and does not occupy memory.
+
+-   The cache is bounded in size and cleans up data via LRU.
+
+-   The cache shares its implementation with the federated-query Catalog cache; see the documentation [here](../lakehouse/filecache).
+
+## Cold Data Compaction
+
+The moment data becomes cold is the time its rowset file was written to local disk plus the cooldown period. Since data is not written and cooled all at once, Doris also compacts cold data to avoid the small-file problem inside object storage. However, cold data compaction runs at a low frequency and resource priority, and compacting local hot data before cooling is recommended. This can be tuned with the following BE parameters:
+
+-   The BE parameter `cold_data_compaction_thread_num` sets the concurrency of cold data compaction; the default is 2.
+
+-   The BE parameter `cold_data_compaction_interval_sec` sets the interval of cold data compaction; the default is 1800 seconds, i.e., half an hour.
+
+## Cold Data Schema Change
+
+The following Schema Change types are supported after data cooldown:
+
+-   Adding and dropping columns
+
+-   Modifying column types
+
+-   Adjusting column order
+
+-   Adding and modifying indexes
+
+## Cold Data Garbage Collection
+
+Cold data garbage is data not used by any Replica. Garbage data may appear on object storage in the following cases:
+
+1.  A rowset upload fails, but some segments are uploaded successfully.
+
+2.  After the FE re-selects the CooldownReplica, the rowset versions of the old and new CooldownReplica do not match. FollowerReplicas sync the new CooldownReplica's CooldownMeta, so the rowsets with mismatched versions in the old CooldownReplica are used by no Replica and become garbage.
+
+3.  After cold data Compaction, the pre-merge rowsets cannot be deleted immediately because other Replicas may still use them; eventually all FollowerReplicas use the latest merged rowset, and the pre-merge rowsets become garbage.
+
+In addition, garbage data on objects is not cleaned up immediately. The BE parameter `remove_unused_remote_files_interval_sec` sets the garbage collection interval for cold data; the default is 21600 seconds, i.e., 6 hours.
+
+## Remaining Work
+
+-   The updating and retrieval of some remote usage metrics are not yet complete.
+
+## FAQ
+
+1.  `ERROR 1105 (HY000): errCode = 2, detailMessage = Failed to create repository: connect to s3 failed: Unable to marshall request to JSON: host must not be null.`
+
+The S3 SDK uses the virtual-hosted style by default, but some object storage systems (e.g., MinIO) may not enable or support virtual-hosted style access. In that case, add the use_path_style parameter to force path style access:
+
+```Plain
+CREATE RESOURCE "remote_s3"
+PROPERTIES
+(
+    "type" = "s3",
+    "s3.endpoint" = "bj.s3.com",
+    "s3.region" = "bj",
+    "s3.bucket" = "test-bucket",
+    "s3.root.path" = "path/to/root",
+    "s3.access_key" = "bbb",
+    "s3.secret_key" = "aaaa",
+    "s3.connection.maximum" = "50",
+    "s3.connection.request.timeout" = "3000",
+    "s3.connection.timeout" = "1000",
+    "use_path_style" = "true"
+);
+```
+
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Alter/ALTER-TABLE-PROPERTY.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Alter/ALTER-TABLE-PROPERTY.md
index c3466b10ea..fcccb1e046 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Alter/ALTER-TABLE-PROPERTY.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Alter/ALTER-TABLE-PROPERTY.md
@@ -30,6 +30,17 @@ under the License.
 
 ALTER TABLE PROPERTY
 
+:::caution
+Differences between partition attributes and table attributes
+- Partition attributes generally focus on the number of buckets (buckets), storage medium (storage_medium), replication count (replication_num), and the hot/cold tiering storage policy (storage_policy).
+  - For existing partitions, you can use ALTER TABLE {tableName} MODIFY PARTITION ({partitionName}) SET ({key}={value}) to modify them, but the number of buckets (buckets) cannot be changed.
+  - For not-yet-created dynamic partitions, you can use ALTER TABLE {tableName} SET (dynamic_partition.{key} = {value}) to modify their attributes.
+  - For not-yet-created auto partitions, you can use ALTER TABLE {tableName} SET ({key} = {value}) to modify their attributes.
+  - To change a partition attribute overall, modify it both on the partitions that already exist and on the partitions that have not been created yet.
+- Aside from the attributes above, all others are table-level attributes.
+- For the specific attributes, see [create table attributes](../../../../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-TABLE.md#properties)
+:::
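+
+A minimal sketch of the three ALTER forms above (table name `t1` and partition `p20240101` are hypothetical):
+
+```sql
+-- Existing partition: change its replica count (buckets cannot be changed)
+ALTER TABLE t1 MODIFY PARTITION (p20240101) SET ("replication_num" = "1");
+-- Not-yet-created dynamic partitions: change a dynamic_partition property
+ALTER TABLE t1 SET ("dynamic_partition.buckets" = "4");
+-- Not-yet-created auto partitions: change a table-level property
+ALTER TABLE t1 SET ("replication_num" = "1");
+```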
+
 ### Description
 
 This statement modifies properties of an existing table. The operation is synchronous; the command returning indicates that execution has completed.
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-DATABASE.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-DATABASE.md
index 32977bdf95..7da33873ff 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-DATABASE.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-DATABASE.md
@@ -68,6 +68,10 @@ CREATE DATABASE [IF NOT EXISTS] db_name
    );
    ```
 
+:::caution
+If the CREATE TABLE statement's property keys include replication_allocation or replication_num, the database's default replica distribution policy will not take effect.
+:::
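+
+A minimal sketch of the override behavior (names `db1` and `t1` are hypothetical):
+
+```sql
+CREATE DATABASE IF NOT EXISTS db1
+PROPERTIES ("replication_allocation" = "tag.location.default: 3");
+
+-- The table-level property below takes precedence over the database default
+CREATE TABLE db1.t1 (k1 BIGINT)
+DISTRIBUTED BY HASH (k1) BUCKETS 1
+PROPERTIES ("replication_num" = "1");
+```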
+
 ### Keywords
 
 ```text
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-TABLE.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-TABLE.md
index f9282ad3ea..d889d70c8f 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-TABLE.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-TABLE.md
@@ -482,17 +482,7 @@ UNIQUE KEY(k1, k2)
 
 * Dynamic partition related
 
-    The relevant dynamic partition parameters are as follows:
-
-    * `dynamic_partition.enable`: Specifies whether table-level dynamic partitioning is enabled. Defaults to true.
-    * `dynamic_partition.time_unit`: Specifies the time unit for dynamically adding partitions; one of DAY, WEEK, MONTH, YEAR, HOUR.
-    * `dynamic_partition.start`: Specifies how many partitions back to delete. The value must be less than 0. Defaults to Integer.MIN_VALUE.
-    * `dynamic_partition.end`: Specifies the number of partitions created in advance. The value must be greater than 0.
-    * `dynamic_partition.prefix`: Specifies the prefix of created partition names; for example, with prefix p, a partition is automatically named like p20200108.
-    * `dynamic_partition.buckets`: Specifies the number of buckets for automatically created partitions.
-    * `dynamic_partition.create_history_partition`: Whether to create history partitions.
-    * `dynamic_partition.history_partition_num`: Specifies the number of history partitions to create.
-    * `dynamic_partition.reserved_history_periods`: Specifies the time ranges of history partitions to retain.
+    For dynamic partition parameters, see [Data Partitioning - Dynamic Partitioning](../../../../table-design/data-partition.md#动态分区)
 
 * `file_cache_ttl_seconds`:
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/table-design/data-partition.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/table-design/data-partition.md
index 070c5cef3d..fea5c380c3 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/table-design/data-partition.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/table-design/data-partition.md
@@ -384,7 +384,7 @@ ERROR 5025 (HY000): Insert has filtered data in strict mode, tracking_url=......

 Dynamic partitioning only supports Range partitioning on DATE/DATETIME columns.

-Dynamic partitioning suits cases where the time data in the partition column grows in step with the real world. Data can then be flexibly partitioned along a time dimension synchronized with the real world, and automatically tiered into hot/cold storage or recycled according to the settings.
+Dynamic partitioning suits cases where the time data in the partition column grows in step with the real world. Data can then be flexibly partitioned along a time dimension synchronized with the real world.

 For more flexible data-ingestion partitioning that covers more scenarios, see the [Auto Partitioning](#自动分区) feature.
 
@@ -452,6 +452,11 @@ ERROR 5025 (HY000): Insert has filtered data in strict mode, tracking_url=......

   The starting offset of the dynamic partition, a negative number. Depending on the `time_unit` attribute, partitions whose range falls before this offset, relative to the current day (week/month), are deleted. If not set, it defaults to `-2147483648`, i.e., history partitions are not deleted. Whether history partitions between this offset and the current time are created, if missing, depends on `dynamic_partition.create_history_partition`.

+
+:::caution
+  Note: if history_partition_num is set (> 0), the starting partition for dynamic partition creation is max(start, -history_partition_num), while history partition deletion still keeps partitions back to start, where start < 0.
+:::
+
 - `dynamic_partition.end`**(required)**

   The end offset of the dynamic partition, a positive number. Partitions covering the corresponding range ahead of the current day (week/month), depending on the `time_unit` attribute, are created in advance.
@@ -476,6 +481,8 @@ ERROR 5025 (HY000): Insert has filtered data in strict mode, tracking_url=......

   When `time_unit` is `MONTH`, this parameter specifies the start date of each month, with valid values 1 to 28 (1 = the 1st of each month, 28 = the 28th). The default is 1, i.e., each month starts from the 1st. The 29th, 30th, and 31st are not supported for now, to avoid ambiguity caused by leap years and leap months.

+- Doris supports SSD and HDD tiered storage; see [tiered storage](./tiered-storage/diff-disk-medium-migration.md)
+
 - `dynamic_partition.create_history_partition`

   Defaults to false. When set to true, Doris automatically creates all partitions; see the creation rules below. Meanwhile, the FE parameter `max_dynamic_partition_num` limits the total number of partitions to avoid creating too many at once. When the number of partitions expected to be created exceeds `max_dynamic_partition_num`, the operation is rejected.
@@ -486,26 +493,6 @@ ERROR 5025 (HY000): Insert has filtered data in strict mode, tracking_url=......

   When `create_history_partition` is `true`, this parameter specifies the number of history partitions to create. The default is -1, i.e., unset. **This variable has the same effect as `dynamic_partition.start`; it is recommended to set only one of them at a time.**
 
--  `dynamic_partition.hot_partition_num`
-
-  Specifies how many of the latest partitions are hot partitions. For hot partitions, the system automatically sets their `storage_medium` to SSD and sets `storage_cooldown_time`.
-
-  Note: if there is no SSD disk path under the storage path, configuring this parameter causes dynamic partition creation to fail.
-
-  `hot_partition_num` covers the previous n days and all future partitions.
-
-  For example, suppose today is 2021-05-20, partitioning by day, with dynamic partition properties hot_partition_num=2, end=3, start=-3. The system then automatically creates the following partitions and sets their `storage_medium` and `storage_cooldown_time`:
-
-  ```Plain
-  p20210517:["2021-05-17", "2021-05-18") storage_medium=HDD storage_cooldown_time=9999-12-31 23:59:59
-  p20210518:["2021-05-18", "2021-05-19") storage_medium=HDD storage_cooldown_time=9999-12-31 23:59:59
-  p20210519:["2021-05-19", "2021-05-20") storage_medium=SSD storage_cooldown_time=2021-05-21 00:00:00
-  p20210520:["2021-05-20", "2021-05-21") storage_medium=SSD storage_cooldown_time=2021-05-22 00:00:00
-  p20210521:["2021-05-21", "2021-05-22") storage_medium=SSD storage_cooldown_time=2021-05-23 00:00:00
-  p20210522:["2021-05-22", "2021-05-23") storage_medium=SSD storage_cooldown_time=2021-05-24 00:00:00
-  p20210523:["2021-05-23", "2021-05-24") storage_medium=SSD storage_cooldown_time=2021-05-25 00:00:00
-  ```
-
 -   `dynamic_partition.reserved_history_periods`

   The time ranges of history partitions to retain. When `dynamic_partition.time_unit` is "DAY/WEEK/MONTH/YEAR", set it in the format `[yyyy-MM-dd,yyyy-MM-dd],[...,...]`. When `dynamic_partition.time_unit` is "HOUR", set it in the format `[yyyy-MM-dd HH:mm:ss,yyyy-MM-dd HH:mm:ss],[...,...]`. If not set, it defaults to `"NULL"`.
@@ -533,12 +520,6 @@ ERROR 5025 (HY000): Insert has filtered data in strict mode, tracking_url=......

   the partitions in these two time ranges. Each `[...,...]` in `reserved_history_periods` is a pair of settings; both must be set together, and the first time must not be later than the second.

--   `dynamic_partition.storage_medium`
-
-  Specifies the default storage medium for created dynamic partitions. The default is HDD; SSD can be chosen.
-
-  Note that when set to SSD, the `hot_partition_num` property no longer takes effect; all partitions default to the SSD storage medium with a cooldown time of 9999-12-31 23:59:59.
-
 ### Rules for creating history partitions

 When create_history_partition is true, i.e., the history partition creation feature is enabled, Doris determines the number of history partitions to create based on dynamic_partition.start and dynamic_partition.history_partition_num.
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/table-design/tiered-storage/diff-disk-medium-migration.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/table-design/tiered-storage/diff-disk-medium-migration.md
new file mode 100644
index 0000000000..78bb012a51
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/table-design/tiered-storage/diff-disk-medium-migration.md
@@ -0,0 +1,62 @@
+---
+{
+    "title": "SSD and HDD Tiered Storage",
+    "language": "zh-CN"
+}
+---
+
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+Doris supports migrating data between disk types (HDD, SSD) according to the data's hot/cold characteristics, accelerating read and write performance. Users can set partition parameters to create dynamic partitions on the corresponding BE disk type.
+
+For the dynamic_partition parameters, see [Data Partitioning - Dynamic Partitioning](../../table-design/data-partition.md#动态分区)
+
+`dynamic_partition.hot_partition_num`
+
+:::caution
+  Note: dynamic_partition.storage_medium must be set to HDD, otherwise hot_partition_num will not take effect.
+:::
+
+  Specifies how many of the latest partitions are hot partitions. For hot partitions, the system automatically sets their `storage_medium` to SSD and sets `storage_cooldown_time`.
+
+  Note: if there is no SSD disk path under the storage path, configuring this parameter causes dynamic partition creation to fail.
+
+  `hot_partition_num` covers the previous n days and all future partitions.
+
+  For example, suppose today is 2021-05-20, partitioning by day, with dynamic partition properties hot_partition_num=2, end=3, start=-3. The system then automatically creates the following partitions and sets their `storage_medium` and `storage_cooldown_time`:
+
+  ```Plain
+  p20210517:["2021-05-17", "2021-05-18") storage_medium=HDD storage_cooldown_time=9999-12-31 23:59:59
+  p20210518:["2021-05-18", "2021-05-19") storage_medium=HDD storage_cooldown_time=9999-12-31 23:59:59
+  p20210519:["2021-05-19", "2021-05-20") storage_medium=SSD storage_cooldown_time=2021-05-21 00:00:00
+  p20210520:["2021-05-20", "2021-05-21") storage_medium=SSD storage_cooldown_time=2021-05-22 00:00:00
+  p20210521:["2021-05-21", "2021-05-22") storage_medium=SSD storage_cooldown_time=2021-05-23 00:00:00
+  p20210522:["2021-05-22", "2021-05-23") storage_medium=SSD storage_cooldown_time=2021-05-24 00:00:00
+  p20210523:["2021-05-23", "2021-05-24") storage_medium=SSD storage_cooldown_time=2021-05-25 00:00:00
+  ```
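+
+A minimal sketch of a dynamic partition table combining these properties (table name `example_tiered` is hypothetical):
+
+```sql
+CREATE TABLE example_tiered (
+    dt DATE,
+    v1 BIGINT
+)
+PARTITION BY RANGE (dt) ()
+DISTRIBUTED BY HASH (v1) BUCKETS 3
+PROPERTIES (
+    "dynamic_partition.enable" = "true",
+    "dynamic_partition.time_unit" = "DAY",
+    "dynamic_partition.start" = "-3",
+    "dynamic_partition.end" = "3",
+    "dynamic_partition.prefix" = "p",
+    "dynamic_partition.buckets" = "3",
+    "dynamic_partition.storage_medium" = "HDD",
+    "dynamic_partition.hot_partition_num" = "2"
+);
+```
+
+Here `storage_medium` stays HDD so that `hot_partition_num` can take effect, keeping the 2 most recent (and all future) partitions on SSD.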
+
+
+-   `dynamic_partition.storage_medium`
+
+Specifies the default storage medium for created dynamic partitions. The default is HDD; SSD can be chosen.
+
+:::caution
+  Note: when set to SSD, the `hot_partition_num` property no longer takes effect; all partitions default to the SSD storage medium with a cooldown time of 9999-12-31 23:59:59.
+:::
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/table-design/tiered-storage/remote-storage.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/table-design/tiered-storage/remote-storage.md
new file mode 100644
index 0000000000..21709eea7c
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/table-design/tiered-storage/remote-storage.md
@@ -0,0 +1,237 @@
+---
+{
+    "title": "Remote Storage",
+    "language": "zh-CN"
+}
+---
+
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+## Use Cases
+
+A major upcoming use case is similar to ES log storage: in log scenarios, data is split by date, much of it is cold data that is rarely queried, and the storage cost of this data needs to be reduced. From the perspective of saving storage costs:
+
+-   Regular cloud disks from all cloud vendors are more expensive than object storage.
+
+-   In real-world online use of Doris clusters, the utilization of regular cloud disks cannot reach 100%.
+
+-   Cloud disks are not pay-as-you-go, while object storage can be.
+
+-   High availability on regular cloud disks requires multiple replicas, and a failed replica requires migration. Placing data on object storage avoids these problems, because object storage is shared.
+
+## Solution
+
+Set a Freeze time at the Partition level, indicating how long until the Partition is frozen, and define the Remote storage location used after freezing. On the BE, a daemon thread periodically checks whether tables need to be frozen; after freezing, the data is uploaded to S3-compatible object storage or HDFS.
+
+Cold-hot tiering supports all Doris features; it only places part of the data on object storage to save cost, without sacrificing functionality. It therefore has the following characteristics:
+
+-   Cold data is placed on object storage; users need not worry about data consistency or data security.
+-   Flexible Freeze policy: the cooldown remote storage Property can be applied at the table and Partition levels.
+
+-   Queries need not care where the data lives; if the data is not local, it is pulled from the object store and cached locally on the BE.
+
+-   Replica clone optimization: if the data is stored on objects, a replica clone does not need to pull the stored data to local disk.
+
+-   Remote object space recycling: if a table or partition is deleted, or space is wasted by abnormal conditions during cold-hot tiering, a recycler thread periodically reclaims the space, saving storage resources.
+
+-   Cache optimization: accessed cold data is cached locally on the BE, reaching the query performance of non-tiered storage.
+
+-   BE thread pool optimization: distinguishes local reads from object storage reads, preventing object-read latency from affecting query performance.
+
+## Using a Storage policy
+
+The storage policy is the entry point for the cold-hot tiering feature: users only need to associate a Storage policy with a table or partition, at table creation time or while using Doris, to use cold-hot tiering.
+
+:::tip
+When an S3 RESOURCE is created, a connection check against the remote S3 is performed to ensure the RESOURCE is created correctly.
+:::
+
+The following shows how to create an S3 RESOURCE:
+
+```Plain
+CREATE RESOURCE "remote_s3"
+PROPERTIES
+(
+    "type" = "s3",
+    "s3.endpoint" = "bj.s3.com",
+    "s3.region" = "bj",
+    "s3.bucket" = "test-bucket",
+    "s3.root.path" = "path/to/root",
+    "s3.access_key" = "bbb",
+    "s3.secret_key" = "aaaa",
+    "s3.connection.maximum" = "50",
+    "s3.connection.request.timeout" = "3000",
+    "s3.connection.timeout" = "1000"
+);
+
+CREATE STORAGE POLICY test_policy
+PROPERTIES(
+    "storage_resource" = "remote_s3",
+    "cooldown_ttl" = "1d"
+);
+
+CREATE TABLE IF NOT EXISTS create_table_use_created_policy 
+(
+    k1 BIGINT,
+    k2 LARGEINT,
+    v1 VARCHAR(2048)
+)
+UNIQUE KEY(k1)
+DISTRIBUTED BY HASH (k1) BUCKETS 3
+PROPERTIES(
+    "storage_policy" = "test_policy"
+);
+```
+
+And how to create an HDFS RESOURCE:
+
+```Plain
+CREATE RESOURCE "remote_hdfs" PROPERTIES (
+    "type"="hdfs",
+    "fs.defaultFS"="fs_host:default_fs_port",
+    "hadoop.username"="hive",
+    "hadoop.password"="hive",
+    "dfs.nameservices" = "my_ha",
+    "dfs.ha.namenodes.my_ha" = "my_namenode1, my_namenode2",
+    "dfs.namenode.rpc-address.my_ha.my_namenode1" = "nn1_host:rpc_port",
+    "dfs.namenode.rpc-address.my_ha.my_namenode2" = "nn2_host:rpc_port",
+    "dfs.client.failover.proxy.provider.my_ha" = "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
+);
+
+CREATE STORAGE POLICY test_policy PROPERTIES (
+    "storage_resource" = "remote_hdfs",
+    "cooldown_ttl" = "300"
+);
+
+CREATE TABLE IF NOT EXISTS create_table_use_created_policy (
+    k1 BIGINT,
+    k2 LARGEINT,
+    v1 VARCHAR(2048)
+)
+UNIQUE KEY(k1)
+DISTRIBUTED BY HASH (k1) BUCKETS 3
+PROPERTIES(
+    "storage_policy" = "test_policy"
+);
+```
+
+Or associate a Storage policy with an existing table:
+
+```Plain
+ALTER TABLE create_table_not_have_policy SET ("storage_policy" = "test_policy");
+```
+
+Or associate a Storage policy with an existing partition:
+
+```Plain
+ALTER TABLE create_table_partition MODIFY PARTITION (*) SET ("storage_policy" = "test_policy");
+```
+
+:::tip
+Note: if different Storage Policies are specified for the whole Table and for some Partitions at table creation time, the Storage policy set on those Partitions is ignored, and all Partitions of the table use the table's Policy. If you need one Partition's Policy to differ from the others, modify it with the method above for associating a Storage policy with an existing Partition.
+
+For details, see the docs [RESOURCE](../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-RESOURCE), [POLICY](../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-POLICY), [CREATE TABLE](../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-TABLE), and [ALTER TABLE](../sql-manual/sql-statements/Data-Definition-Statements/Alter/ALTER-TABLE-COLUMN), which describe them in detail.
+:::
+
+### Limitations
+
+-   A single table or Partition can be associated with only one Storage policy. Once associated, the Storage policy cannot be dropped until the association between the two is removed first.
+
+-   A Storage policy does not support modifying the data storage path information of the associated object, such as the bucket, endpoint, and root_path.
+
+-   Storage policies support creation, modification, and deletion; before deletion, make sure no table references the Storage policy.
+
+-   The Unique model does not support setting a Storage policy when the Merge-on-Write feature is enabled.
+
+## Object Size Occupied by Cold Data
+
+Method 1: `show proc '/backends'` shows the size each BE has uploaded to objects, in the RemoteUsedCapacity field; this method has some latency.
+
+Method 2: `show tablets from tableName` shows the object size occupied by each tablet of the table, in the RemoteDataSize field.
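+
+For example (`example_table` is a hypothetical table name):
+
+```sql
+SHOW PROC '/backends';              -- check the RemoteUsedCapacity field
+SHOW TABLETS FROM example_table;    -- check the RemoteDataSize field
+```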
+
+## Cold Data Cache
+
+As mentioned above, a cache is introduced for cold data to optimize query performance and save object storage resources. On the first hit after cooldown, Doris reloads the cooled data onto the BE's local disk. The cache has the following properties:
+
+-   The cache is actually stored on the BE disk and does not occupy memory.
+
+-   The cache is bounded in size and cleans up data via LRU.
+
+-   The cache shares its implementation with the federated-query Catalog cache; see the documentation [here](../lakehouse/filecache).
+
+## Cold Data Compaction
+
+The moment data becomes cold is the time its rowset file was written to local disk plus the cooldown period. Since data is not written and cooled all at once, Doris also compacts cold data to avoid the small-file problem inside object storage. However, cold data compaction runs at a low frequency and resource priority, and compacting local hot data before cooling is recommended. This can be tuned with the following BE parameters:
+
+-   The BE parameter `cold_data_compaction_thread_num` sets the concurrency of cold data compaction; the default is 2.
+
+-   The BE parameter `cold_data_compaction_interval_sec` sets the interval of cold data compaction; the default is 1800 seconds, i.e., half an hour.
+
+## Cold Data Schema Change
+
+The following Schema Change types are supported after data cooldown:
+
+-   Adding and dropping columns
+
+-   Modifying column types
+
+-   Adjusting column order
+
+-   Adding and modifying indexes
+
+## Cold Data Garbage Collection
+
+Cold data garbage is data not used by any Replica. Garbage data may appear on object storage in the following cases:
+
+1.  A rowset upload fails, but some segments are uploaded successfully.
+
+2.  After the FE re-selects the CooldownReplica, the rowset versions of the old and new CooldownReplica do not match. FollowerReplicas sync the new CooldownReplica's CooldownMeta, so the rowsets with mismatched versions in the old CooldownReplica are used by no Replica and become garbage.
+
+3.  After cold data Compaction, the pre-merge rowsets cannot be deleted immediately because other Replicas may still use them; eventually all FollowerReplicas use the latest merged rowset, and the pre-merge rowsets become garbage.
+
+In addition, garbage data on objects is not cleaned up immediately. The BE parameter `remove_unused_remote_files_interval_sec` sets the garbage collection interval for cold data; the default is 21600 seconds, i.e., 6 hours.
+
+## Remaining Work
+
+-   The updating and retrieval of some remote usage metrics are not yet complete.
+
+## FAQ
+
+1.  `ERROR 1105 (HY000): errCode = 2, detailMessage = Failed to create repository: connect to s3 failed: Unable to marshall request to JSON: host must not be null.`
+
+The S3 SDK uses the virtual-hosted style by default, but some object storage systems (e.g., MinIO) may not enable or support virtual-hosted style access. In that case, add the use_path_style parameter to force path style access:
+
+```Plain
+CREATE RESOURCE "remote_s3"
+PROPERTIES
+(
+    "type" = "s3",
+    "s3.endpoint" = "bj.s3.com",
+    "s3.region" = "bj",
+    "s3.bucket" = "test-bucket",
+    "s3.root.path" = "path/to/root",
+    "s3.access_key" = "bbb",
+    "s3.secret_key" = "aaaa",
+    "s3.connection.maximum" = "50",
+    "s3.connection.request.timeout" = "3000",
+    "s3.connection.timeout" = "1000",
+    "use_path_style" = "true"
+);
+```
+
diff --git a/versioned_docs/version-2.1/sql-manual/sql-statements/Data-Definition-Statements/Alter/ALTER-TABLE-PROPERTY.md b/versioned_docs/version-2.1/sql-manual/sql-statements/Data-Definition-Statements/Alter/ALTER-TABLE-PROPERTY.md
index 0a0c669286..741754ba58 100644
--- a/versioned_docs/version-2.1/sql-manual/sql-statements/Data-Definition-Statements/Alter/ALTER-TABLE-PROPERTY.md
+++ b/versioned_docs/version-2.1/sql-manual/sql-statements/Data-Definition-Statements/Alter/ALTER-TABLE-PROPERTY.md
@@ -30,6 +30,17 @@ under the License.
 
 ALTER TABLE PROPERTY
 
+:::caution
+Differences between partition attributes and table attributes
+- Partition attributes generally focus on the number of buckets (buckets), storage medium (storage_medium), replication count (replication_num), and the hot/cold tiering storage policy (storage_policy).
+  - For existing partitions, you can use ALTER TABLE {tableName} MODIFY PARTITION ({partitionName}) SET ({key}={value}) to modify them, but the number of buckets (buckets) cannot be changed.
+  - For not-yet-created dynamic partitions, you can use ALTER TABLE {tableName} SET (dynamic_partition.{key} = {value}) to modify their attributes.
+  - For not-yet-created auto partitions, you can use ALTER TABLE {tableName} SET ({key} = {value}) to modify their attributes.
+  - If users want to modify a partition attribute, they need to modify it on the already created partitions as well as on the not-yet-created partitions.
+- Aside from the above attributes, all others are table-level attributes.
+- For the specific attributes, please refer to [create table attributes](../../../../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-TABLE.md#properties)
+:::
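+
+A minimal sketch of the three ALTER forms above (table name `t1` and partition `p20240101` are hypothetical):
+
+```sql
+-- Existing partition: change its replica count (buckets cannot be changed)
+ALTER TABLE t1 MODIFY PARTITION (p20240101) SET ("replication_num" = "1");
+-- Not-yet-created dynamic partitions: change a dynamic_partition property
+ALTER TABLE t1 SET ("dynamic_partition.buckets" = "4");
+-- Not-yet-created auto partitions: change a table-level property
+ALTER TABLE t1 SET ("replication_num" = "1");
+```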
+
 ### Description
 
 This statement is used to modify the properties of an existing table. This operation is synchronous, and the return of the command indicates the completion of the execution.
diff --git a/versioned_docs/version-2.1/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-DATABASE.md b/versioned_docs/version-2.1/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-DATABASE.md
index 56ca94ef7f..2856206d36 100644
--- a/versioned_docs/version-2.1/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-DATABASE.md
+++ b/versioned_docs/version-2.1/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-DATABASE.md
@@ -68,6 +68,10 @@ CREATE DATABASE [IF NOT EXISTS] db_name
    );
    ```
 
+:::caution
+If the create table statement's properties include replication_allocation or replication_num, the default replica distribution policy of the database will not take effect.
+:::
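+
+A minimal sketch of the override behavior (names `db1` and `t1` are hypothetical):
+
+```sql
+CREATE DATABASE IF NOT EXISTS db1
+PROPERTIES ("replication_allocation" = "tag.location.default: 3");
+
+-- The table-level property below takes precedence over the database default
+CREATE TABLE db1.t1 (k1 BIGINT)
+DISTRIBUTED BY HASH (k1) BUCKETS 1
+PROPERTIES ("replication_num" = "1");
+```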
+
 ### Keywords
 
 ```text
diff --git a/versioned_docs/version-2.1/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-TABLE.md b/versioned_docs/version-2.1/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-TABLE.md
index 00faf9ce29..ef7f4c4e88 100644
--- a/versioned_docs/version-2.1/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-TABLE.md
+++ b/versioned_docs/version-2.1/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-TABLE.md
@@ -491,17 +491,7 @@ Set table properties. The following attributes are currently supported:
 
 * Dynamic partition related
 
-   The relevant parameters of dynamic partition are as follows:
-
-* `dynamic_partition.enable`: Used to specify whether the dynamic partition function at the table level is enabled. The default is true.
-* `dynamic_partition.time_unit`: Used to specify the time unit for dynamically adding partitions, which can be selected as DAY (day), WEEK (week), MONTH (month), YEAR (year), HOUR (hour).
-* `dynamic_partition.start`: Used to specify how many partitions to delete forward. The value must be less than 0. The default is Integer.MIN_VALUE.
-* `dynamic_partition.end`: Used to specify the number of partitions created in advance. The value must be greater than 0.
-* `dynamic_partition.prefix`: Used to specify the partition name prefix to be created. For example, if the partition name prefix is p, the partition name will be automatically created as p20200108.
-* `dynamic_partition.buckets`: Used to specify the number of partition buckets that are automatically created.
-* `dynamic_partition.create_history_partition`: Whether to create a history partition.
-* `dynamic_partition.history_partition_num`: Specify the number of historical partitions to be created.
-* `dynamic_partition.reserved_history_periods`: Used to specify the range of reserved history periods.
+For dynamic partition parameters, see [Data Partitioning - Dynamic Partitioning](../../../../table-design/data-partition.md#dynamic-partitioning)
     
 ### Example
 
diff --git a/versioned_docs/version-2.1/table-design/data-partition.md b/versioned_docs/version-2.1/table-design/data-partition.md
index cab5df5052..97d2f40072 100644
--- a/versioned_docs/version-2.1/table-design/data-partition.md
+++ b/versioned_docs/version-2.1/table-design/data-partition.md
@@ -421,6 +421,11 @@ The rules of dynamic partition are prefixed with `dynamic_partition.`:

   The starting offset of the dynamic partition, usually a negative number. Depending on the `time_unit` attribute, based on the current day (week / month), the partitions with a partition range before this offset will be deleted. If not filled, the default is `-2147483648`, that is, the history partition will not be deleted.
 
+
+:::caution
+Note that if the user sets history_partition_num (> 0), the starting partition for creating dynamic partitions will use max(start, -history_partition_num), and when deleting historical partitions, the range up to start will still be retained, where start < 0.
+:::
+
 - `dynamic_partition.end`(required parameter)

   The end offset of the dynamic partition, usually a positive number. According to the difference of the `time_unit` attribute, the partition of the corresponding range is created in advance based on the current day (week / month).
@@ -448,6 +453,7 @@ The rules of dynamic partition are prefixed with `dynamic_partition.`:

   When `time_unit` is `MONTH`, this parameter is used to specify the start date of each month. The value ranges from 1 to 28. 1 means the 1st of every month, and 28 means the 28th of every month. The default is 1, which means that every month starts at the 1st. The 29th, 30th and 31st are not supported at the moment to avoid ambiguity caused by leap years or months.

+- Doris supports multi-level storage with SSD and HDD tiers. For more details, please refer to [tiered storage](./tiered-storage/diff-disk-medium-migration.md)
 
 - `dynamic_partition.create_history_partition`
 
@@ -459,30 +465,6 @@ The rules of dynamic partition are prefixed with `dynamic_partition.`:

   When `create_history_partition` is `true`, this parameter is used to specify the number of history partitions. The default value is -1, which means it is not set.
 
-- `dynamic_partition.hot_partition_num`
-
-  Specify how many of the latest partitions are hot partitions. For hot partitions, the system will automatically set their `storage_medium` parameter to SSD, and set `storage_cooldown_time`.
-
-  :::tip
-
-  If there is no SSD disk path under the storage path, configuring this parameter will cause dynamic partition creation to fail.
-
-  :::
-
-  `hot_partition_num` is all partitions in the previous n days and in the future.
-
-  Let us give an example. Suppose today is 2021-05-20, partition by day, and the properties of dynamic partition are set to: hot_partition_num=2, end=3, start=-3. Then the system will automatically create the following partitions, and set the `storage_medium` and `storage_cooldown_time` properties:
-
-  ```sql
-  p20210517: ["2021-05-17", "2021-05-18") storage_medium=HDD storage_cooldown_time=9999-12-31 23:59:59
-  p20210518: ["2021-05-18", "2021-05-19") storage_medium=HDD storage_cooldown_time=9999-12-31 23:59:59
-  p20210519: ["2021-05-19", "2021-05-20") storage_medium=SSD storage_cooldown_time=2021-05-21 00:00:00
-  p20210520: ["2021-05-20", "2021-05-21") storage_medium=SSD storage_cooldown_time=2021-05-22 00:00:00
-  p20210521: ["2021-05-21", "2021-05-22") storage_medium=SSD storage_cooldown_time=2021-05-23 00:00:00
-  p20210522: ["2021-05-22", "2021-05-23") storage_medium=SSD storage_cooldown_time=2021-05-24 00:00:00
-  p20210523: ["2021-05-23", "2021-05-24") storage_medium=SSD storage_cooldown_time=2021-05-25 00:00:00
-  ```
-
 
 - `dynamic_partition.reserved_history_periods`
 
@@ -512,17 +494,6 @@ The rules of dynamic partition are prefixed with `dynamic_partition.`:
   Otherwise, every `[...,...]` in `reserved_history_periods` is a couple of properties, and they should be set at the same time. And the first date can't be larger than the second one.
 
 
-- `dynamic_partition.storage_medium`
-
-  :::info Note
-  This parameter is supported since Doris version 1.2.3
-  :::
-
-  Specifies the default storage medium for the created dynamic partition. HDD is the default, SSD can be selected.
-
-  Note that when set to SSD, the `hot_partition_num` property will no longer take effect, all partitions will default to SSD storage media and the cooldown time will be 9999-12-31 23:59:59.
-
 ### Create history partition rules
 
 When `create_history_partition` is `true`, i.e. history partition creation is enabled, Doris determines the number of history partitions to be created based on `dynamic_partition.start` and `dynamic_partition.history_partition_num`.
diff --git a/versioned_docs/version-2.1/table-design/tiered-storage/diff-disk-medium-migration.md b/versioned_docs/version-2.1/table-design/tiered-storage/diff-disk-medium-migration.md
new file mode 100644
index 0000000000..1201bba2b0
--- /dev/null
+++ b/versioned_docs/version-2.1/table-design/tiered-storage/diff-disk-medium-migration.md
@@ -0,0 +1,66 @@
+---
+{
+"title": "SSD and HDD tiered storage",
+"language": "en"
+}
+---
+
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+Doris supports data migration between different disk types (HDD, SSD) based on the cold/hot characteristics of the data, which can accelerate the read and write performance of Doris. Users can set partition parameters to create dynamic partitions on the corresponding disk types.
+
+For the `dynamic_partition` parameters, please refer to [data-partition](../../table-design/data-partition.md#dynamic-partitioning).
+
+- `dynamic_partition.hot_partition_num`
+
+  :::tip
+
+  If there is no SSD disk path under the storage path, configuring this parameter will cause dynamic partition creation to fail.
+
+  :::
+
+  `hot_partition_num` is all partitions in the previous n days and in the future.
+
+  Let us give an example. Suppose today is 2021-05-20, partition by day, and the properties of dynamic partition are set to: hot_partition_num=2, end=3, start=-3. Then the system will automatically create the following partitions, and set the `storage_medium` and `storage_cooldown_time` properties:
+
+  ```sql
+  p20210517: ["2021-05-17", "2021-05-18") storage_medium=HDD storage_cooldown_time=9999-12-31 23:59:59
+  p20210518: ["2021-05-18", "2021-05-19") storage_medium=HDD storage_cooldown_time=9999-12-31 23:59:59
+  p20210519: ["2021-05-19", "2021-05-20") storage_medium=SSD storage_cooldown_time=2021-05-21 00:00:00
+  p20210520: ["2021-05-20", "2021-05-21") storage_medium=SSD storage_cooldown_time=2021-05-22 00:00:00
+  p20210521: ["2021-05-21", "2021-05-22") storage_medium=SSD storage_cooldown_time=2021-05-23 00:00:00
+  p20210522: ["2021-05-22", "2021-05-23") storage_medium=SSD storage_cooldown_time=2021-05-24 00:00:00
+  p20210523: ["2021-05-23", "2021-05-24") storage_medium=SSD storage_cooldown_time=2021-05-25 00:00:00
+  ```
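+
+  A minimal sketch of a dynamic partition table combining these properties (table name `example_tiered` is hypothetical):
+
+  ```sql
+  CREATE TABLE example_tiered (
+      dt DATE,
+      v1 BIGINT
+  )
+  PARTITION BY RANGE (dt) ()
+  DISTRIBUTED BY HASH (v1) BUCKETS 3
+  PROPERTIES (
+      "dynamic_partition.enable" = "true",
+      "dynamic_partition.time_unit" = "DAY",
+      "dynamic_partition.start" = "-3",
+      "dynamic_partition.end" = "3",
+      "dynamic_partition.prefix" = "p",
+      "dynamic_partition.buckets" = "3",
+      "dynamic_partition.storage_medium" = "HDD",
+      "dynamic_partition.hot_partition_num" = "2"
+  );
+  ```
+
+  Here `storage_medium` stays HDD so that `hot_partition_num` can take effect, keeping the 2 most recent (and all future) partitions on SSD.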
+
+- `dynamic_partition.storage_medium`
+
+  :::info Note
+  This parameter is supported since Doris version 1.2.3
+  :::
+
+  Specifies the default storage medium for the created dynamic partition. HDD is the default, SSD can be selected.
+
+  Note that when set to SSD, the `hot_partition_num` property will no longer take effect, all partitions will default to SSD storage media and the cooldown time will be 9999-12-31 23:59:59.
\ No newline at end of file
diff --git a/versioned_docs/version-2.1/table-design/tiered-storage/remote-storage.md b/versioned_docs/version-2.1/table-design/tiered-storage/remote-storage.md
new file mode 100644
index 0000000000..5c2645dccc
--- /dev/null
+++ b/versioned_docs/version-2.1/table-design/tiered-storage/remote-storage.md
@@ -0,0 +1,238 @@
+---
+{
+"title": "Remote Storage",
+"language": "en"
+}
+---
+
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+## Use Case
+
+One significant use case in the future is similar to ES log storage, where data in the log scenario is split based on dates. Much of the data is cold data that is queried infrequently, requiring a reduction in storage costs for such data. Considering cost savings:
+
+- The pricing of regular cloud disks from various vendors is more expensive than object storage.
+
+- In actual online usage of the Doris cluster, the utilization of regular cloud disks cannot reach 100%.
+
+- Cloud disks are not billed on demand, while object storage can be billed on demand.
+
+- Using regular cloud disks for high availability requires multiple replicas and replica migration in case of failures. In contrast, storing data on object storage eliminates these issues, as it is shared.
+
+## Solution
+
+Set the freeze time at the partition level, which indicates how long a partition will be frozen, and define the location of remote storage for storing data after freezing. In the BE (Backend) daemon thread, the table's freeze condition is periodically checked. If a freeze condition is met, the data will be uploaded to object storage compatible with the S3 protocol or to HDFS.
+
+Cold-hot tiering supports all Doris functionalities and only moves some data to object storage to save costs without sacrificing functionality. Therefore, it has the following characteristics:
+
+- Cold data is stored on object storage, and users do not need to worry about data consistency and security.
+
+- Flexible freeze strategy, where the cold remote storage property can be applied to both table and partition levels.
+
+- Users can query data without worrying about data distribution. If the data is not local, it will be pulled from the object storage and cached locally in the BE (Backend).
+
+- Replica clone optimization. If the stored data is on object storage, there is no need to fetch the stored data locally during replica cloning.
+
+- Remote object space recycling. If a table or partition is deleted, or if space waste occurs during the cold-hot tiering process due to exceptional situations, a recycler thread will periodically recycle the space, saving storage resources.
+
+- Cache optimization, caching accessed cold data locally in the BE to achieve query performance similar to non-cold-hot tiering.
+
+- BE thread pool optimization, distinguishing between data sources from local and object storage to prevent delays in reading objects from impacting query performance.
+
+## Usage of Storage Policy
+
+The storage policy is the entry point for using the cold-hot tiering feature. Users only need to associate the storage policy with a table or partition during table creation or when using Doris.
+
+:::tip
+When creating an S3 resource, a remote S3 connection validation is performed to ensure the correct creation of the resource.
+:::
+
+Here is an example of creating an S3 resource:
+
+```Plain
+CREATE RESOURCE "remote_s3"
+PROPERTIES
+(
+    "type" = "s3",
+    "s3.endpoint" = "bj.s3.com",
+    "s3.region" = "bj",
+    "s3.bucket" = "test-bucket",
+    "s3.root.path" = "path/to/root",
+    "s3.access_key" = "bbb",
+    "s3.secret_key" = "aaaa",
+    "s3.connection.maximum" = "50",
+    "s3.connection.request.timeout" = "3000",
+    "s3.connection.timeout" = "1000"
+);
+
+CREATE STORAGE POLICY test_policy
+PROPERTIES(
+    "storage_resource" = "remote_s3",
+    "cooldown_ttl" = "1d"
+);
+
+CREATE TABLE IF NOT EXISTS create_table_use_created_policy 
+(
+    k1 BIGINT,
+    k2 LARGEINT,
+    v1 VARCHAR(2048)
+)
+UNIQUE KEY(k1)
+DISTRIBUTED BY HASH (k1) BUCKETS 3
+PROPERTIES(
+    "storage_policy" = "test_policy"
+);
+```
+
+And here is an example of creating an HDFS resource:
+
+```Plain
+CREATE RESOURCE "remote_hdfs" PROPERTIES (
+        "type"="hdfs",
+        "fs.defaultFS"="fs_host:default_fs_port",
+        "hadoop.username"="hive",
+        "hadoop.password"="hive",
+        "dfs.nameservices" = "my_ha",
+        "dfs.ha.namenodes.my_ha" = "my_namenode1, my_namenode2",
+        "dfs.namenode.rpc-address.my_ha.my_namenode1" = "nn1_host:rpc_port",
+        "dfs.namenode.rpc-address.my_ha.my_namenode2" = "nn2_host:rpc_port",
+        "dfs.client.failover.proxy.provider.my_ha" = 
"org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
+    );
+
+CREATE STORAGE POLICY test_policy PROPERTIES (
+    "storage_resource" = "remote_hdfs",
+    "cooldown_ttl" = "300"
+);
+
+CREATE TABLE IF NOT EXISTS create_table_use_created_policy (
+    k1 BIGINT,
+    k2 LARGEINT,
+    v1 VARCHAR(2048)
+)
+UNIQUE KEY(k1)
+DISTRIBUTED BY HASH (k1) BUCKETS 3
+PROPERTIES(
+    "storage_policy" = "test_policy"
+);
+```
+
+Associate a storage policy with an existing table by using the following 
command:
+
+```Plain
+ALTER TABLE create_table_not_have_policy SET ("storage_policy" = 
"test_policy");
+```
+
+Associate a storage policy with an existing partition by using the following 
command:
+
+```Plain
+ALTER TABLE create_table_partition MODIFY PARTITION (*) SET ("storage_policy" 
= "test_policy");
+```
+
+:::tip
+If you specify different storage policies for the entire table and some 
partitions during table creation, the storage policy set for the partitions 
will be ignored, and all partitions of the table will use the table's storage 
policy. If you want a specific partition to have a different storage policy 
than the others, you can use the method mentioned above to modify the 
association for that specific partition.
+
+For more details, please refer to the following documents: [RESOURCE](../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-RESOURCE), [POLICY](../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-POLICY), [CREATE TABLE](../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-TABLE), and [ALTER TABLE](../sql-manual/sql-statements/Data-Definition-Statements/Alter/ALTER-TABLE-COLUMN).
+:::
+
+### Limitations
+
+- A single table or partition can only be associated with one storage policy. 
Once associated, the storage policy cannot be dropped without first removing 
the association between them.
+
+- For the resource associated with a storage policy, the properties that determine the data storage path, such as bucket, endpoint, and root_path, cannot be modified.
+
+- Storage policies support creation, modification, and deletion (see the sketch after this list). Before deleting a storage policy, ensure that no tables reference it.
+
+- Unique model tables with the Merge-on-Write feature enabled do not support setting a storage policy.
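+
+As a sketch of the policy lifecycle mentioned above, assuming `test_policy` is no longer referenced by any table or partition:
+
+```sql
+-- Modify a mutable property of an existing storage policy
+ALTER STORAGE POLICY test_policy PROPERTIES ("cooldown_ttl" = "2d");
+
+-- Drop the policy once nothing references it
+DROP STORAGE POLICY test_policy;
+```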
+
+
+## Occupied Size of Cold Data Objects
+
+Method 1: You can use the `show proc '/backends'` command to view the size of 
each backend's uploaded objects. Look for the `RemoteUsedCapacity` field. 
Please note that this method may have some latency.
+
+Method 2: You can use the `show tablets from tableName` command to view the 
size of each tablet in a table, indicated by the `RemoteDataSize` field.
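+
+For example, using the table created earlier:
+
+```sql
+-- Per-BE remote usage: check the RemoteUsedCapacity field (may lag behind actual usage)
+SHOW PROC '/backends';
+
+-- Per-tablet remote usage: check the RemoteDataSize field
+SHOW TABLETS FROM create_table_use_created_policy;
+```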
+
+## Cache for Cold Data
+
+As mentioned earlier, caching is introduced for cold data to optimize query performance and save object storage resources. When cold data is first accessed after cooling, Doris downloads the cooled data from remote storage onto the local disk of the Backend (BE). The cold data cache has the following characteristics:
+
+- The cache is stored on the BE's disk and does not occupy memory space.
+
+- The cache can be limited in size and uses LRU (Least Recently Used) for data 
eviction.
+
+- The cold data cache shares its implementation with the file cache used for federated query catalogs. Please refer to [Filecache](../lakehouse/filecache) for more details.
+
+## Compaction of Cold Data
+
+The cooldown time of a rowset is computed as the moment the rowset file was written to the local disk plus the configured cooldown duration. Since data is not written and cooled all at once, Doris performs compaction on cold data to avoid the small-file problem on object storage. However, cold data compaction runs with lower frequency and resource priority than regular compaction, so it is recommended to compact local hot data before it cools. You can adjust the following BE parameters (see the config sketch after this list):
+
+- The BE parameter `cold_data_compaction_thread_num` sets the concurrency for 
cold data compaction. The default value is 2.
+
+- The BE parameter `cold_data_compaction_interval_sec` sets the time interval 
for cold data compaction. The default value is 1800 seconds (30 minutes).
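+
+As a configuration sketch, both parameters can be set in `be.conf` (the values below are illustrative, not recommendations):
+
+```Plain
+cold_data_compaction_thread_num = 4
+cold_data_compaction_interval_sec = 3600
+```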
+
+## Schema Change for Cold Data
+
+The following schema change types are supported for cold data:
+
+- Adding or deleting columns
+
+- Modifying column types
+
+- Adjusting column order
+
+- Adding or modifying indexes
+
+## Garbage Collection of Cold Data
+
+Garbage data for cold data refers to data that is not used by any replica. The 
following situations may generate garbage data on object storage:
+
+1. The upload of a rowset fails after some of its segments have already been uploaded successfully.
+
+2. After the FE reselects the CooldownReplica, the rowset versions of the old 
and new CooldownReplica do not match. FollowerReplicas synchronize the 
CooldownMeta of the new CooldownReplica, and the rowsets with inconsistent 
versions in the old CooldownReplica become garbage data.
+
+3. After cold data compaction, the rowsets before merging cannot be 
immediately deleted because they may still be used by other replicas. However, 
eventually, all FollowerReplicas use the latest merged rowset, and the rowsets 
before merging become garbage data.
+
+Garbage data on object storage is not cleaned up immediately. The BE parameter `remove_unused_remote_files_interval_sec` sets the time interval for garbage collection of cold data. The default value is 21600 seconds (6 hours).
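+
+This interval can likewise be adjusted in `be.conf`, for example (illustrative value):
+
+```Plain
+remove_unused_remote_files_interval_sec = 43200
+```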
+
+## TODOs
+
+- Updates to some remote storage usage metrics are not yet retrieved comprehensively.
+
+## FAQs
+
+1. `ERROR 1105 (HY000): errCode = 2, detailMessage = Failed to create 
repository: connect to s3 failed: Unable to marshall request to JSON: host must 
not be null.`
+
+The S3 SDK defaults to using the virtual-hosted style. However, some object 
storage systems (e.g., MinIO) may not have virtual-hosted style access enabled 
or supported. In such cases, you can add the `use_path_style` parameter to 
force the use of path-style access:
+
+```Plain
+CREATE RESOURCE "remote_s3"
+PROPERTIES
+(
+    "type" = "s3",
+    "s3.endpoint" = "bj.s3.com",
+    "s3.region" = "bj",
+    "s3.bucket" = "test-bucket",
+    "s3.root.path" = "path/to/root",
+    "s3.access_key" = "bbb",
+    "s3.secret_key" = "aaaa",
+    "s3.connection.maximum" = "50",
+    "s3.connection.request.timeout" = "3000",
+    "s3.connection.timeout" = "1000",
+    "use_path_style" = "true"
+);
+```
\ No newline at end of file
diff --git 
a/versioned_docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Alter/ALTER-TABLE-PROPERTY.md
 
b/versioned_docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Alter/ALTER-TABLE-PROPERTY.md
index 0a0c669286..741754ba58 100644
--- 
a/versioned_docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Alter/ALTER-TABLE-PROPERTY.md
+++ 
b/versioned_docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Alter/ALTER-TABLE-PROPERTY.md
@@ -30,6 +30,17 @@ under the License.
 
 ALTER TABLE PROPERTY
 
+:::caution
+Differences between partition attributes and table attributes
+- Partition attributes generally cover the number of buckets (buckets), the storage medium (storage_medium), the replication count (replication_num), and the hot/cold tiering policy (storage_policy).
+  - For partitions that already exist, you can modify them with ALTER TABLE {tableName} MODIFY PARTITION ({partitionName}) SET ({key}={value}), but the number of buckets (buckets) cannot be changed.
+  - For dynamic partitions that have not been created yet, you can modify their attributes with ALTER TABLE {tableName} SET (dynamic_partition.{key} = {value}).
+  - For auto partitions that have not been created yet, you can modify their attributes with ALTER TABLE {tableName} SET ({key} = {value}).
+  - To change a partition attribute across the board, modify it both for the partitions that already exist and for those not yet created, as in the example below.
+- Aside from the above attributes, all others are table-level.
+- For the specific attributes, please refer to [create table 
attributes](../../../../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-TABLE.md#properties)
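+
+For instance, a minimal sketch with a hypothetical table `t` and an existing partition `p1`:
+
+```sql
+-- Modify an attribute of an already created partition (buckets cannot be changed)
+ALTER TABLE t MODIFY PARTITION (p1) SET ("replication_num" = "1");
+
+-- Modify an attribute of dynamic partitions that have not been created yet
+ALTER TABLE t SET ("dynamic_partition.replication_num" = "1");
+```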
+:::
+
 ### Description
 
 This statement is used to modify the properties of an existing table. This 
operation is synchronous, and the return of the command indicates the 
completion of the execution.
diff --git 
a/versioned_docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-DATABASE.md
 
b/versioned_docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-DATABASE.md
index 56ca94ef7f..2856206d36 100644
--- 
a/versioned_docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-DATABASE.md
+++ 
b/versioned_docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-DATABASE.md
@@ -68,6 +68,10 @@ CREATE DATABASE [IF NOT EXISTS] db_name
    );
    ```
 
+:::caution
+If the CREATE TABLE statement specifies the replication_allocation or replication_num property, the database's default replica distribution policy will not take effect.
+:::
+
 ### Keywords
 
 ```text
diff --git 
a/versioned_docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-TABLE.md
 
b/versioned_docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-TABLE.md
index ad696781e7..26b9f1ac61 100644
--- 
a/versioned_docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-TABLE.md
+++ 
b/versioned_docs/version-3.0/sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-TABLE.md
@@ -483,17 +483,7 @@ Set table properties. The following attributes are 
currently supported:
 
 * Dynamic partition related
 
-   The relevant parameters of dynamic partition are as follows:
-
-* `dynamic_partition.enable`: Used to specify whether the dynamic partition 
function at the table level is enabled. The default is true.
-* `dynamic_partition.time_unit:` is used to specify the time unit for 
dynamically adding partitions, which can be selected as DAY (day), WEEK (week), 
MONTH (month), YEAR (year), HOUR (hour).
-* `dynamic_partition.start`: Used to specify how many partitions to delete 
forward. The value must be less than 0. The default is Integer.MIN_VALUE.
-* `dynamic_partition.end`: Used to specify the number of partitions created in 
advance. The value must be greater than 0.
-* `dynamic_partition.prefix`: Used to specify the partition name prefix to be 
created. For example, if the partition name prefix is ​​p, the partition name 
will be automatically created as p20200108.
-* `dynamic_partition.buckets`: Used to specify the number of partition buckets 
that are automatically created.
-* `dynamic_partition.create_history_partition`: Whether to create a history 
partition.
-* `dynamic_partition.history_partition_num`: Specify the number of historical 
partitions to be created.
-* `dynamic_partition.reserved_history_periods`: Used to specify the range of 
reserved history periods.
+For parameters related to dynamic partitioning, refer to [Data Partitioning - Dynamic Partitioning](../../../../table-design/data-partition.md#dynamic-partitioning).
 
 * `file_cache_ttl_seconds`: 
 
diff --git a/versioned_docs/version-3.0/table-design/data-partition.md 
b/versioned_docs/version-3.0/table-design/data-partition.md
index bd31e0688a..3a31b36b26 100644
--- a/versioned_docs/version-3.0/table-design/data-partition.md
+++ b/versioned_docs/version-3.0/table-design/data-partition.md
@@ -462,6 +462,10 @@ The rules of dynamic partition are prefixed with 
`dynamic_partition.`:
 
   The starting offset of the dynamic partition, usually a negative number. 
Depending on the `time_unit` attribute, based on the current day (week / 
month), the partitions with a partition range before this offset will be 
deleted. If not filled, the default is `-2147483648`, that is, the history 
partition will not be  deleted.
 
+:::caution
+Note that if history_partition_num (> 0) is set, the starting partition for dynamic partition creation is max(start, -history_partition_num); when deleting historical partitions, the range up to start (start < 0) is still retained.
+:::
+
 - `dynamic_partition.end`(required parameters)
 
   The end offset of the dynamic partition, usually a positive number. 
According to the difference of the `time_unit` attribute, the partition of the 
corresponding range is created in advance based on the current day (week / 
month).
@@ -489,6 +493,7 @@ The rules of dynamic partition are prefixed with 
`dynamic_partition.`:
 
   When `time_unit` is` MONTH`, this parameter is used to specify the start 
date of each month. The value ranges from 1 to 28. 1 means the 1st of every 
month, and 28 means the 28th of every month. The default is 1, which means that 
every month starts at 1st. The 29, 30 and 31 are not supported at the moment to 
avoid ambiguity caused by lunar years or months.
 
+- Doris supports tiered storage across SSD and HDD media. For more details, please refer to [tiered-storage](./tiered-storage/diff-disk-medium-migration.md).
 
 - `dynamic_partition.create_history_partition`
 
@@ -500,31 +505,6 @@ The rules of dynamic partition are prefixed with 
`dynamic_partition.`:
 
   When `create_history_partition` is `true`, this parameter is used to specify 
the number of history partitions. The default value is -1, which means it is 
not set.
 
-- `dynamic_partition.hot_partition_num`
-
-  Specify how many of the latest partitions are hot partitions. For hot 
partition, the system will automatically set its `storage_medium` parameter to 
SSD, and set `storage_cooldown_time`.
-
-  :::tip
-
-  If there is no SSD disk path under the storage path, configuring this 
parameter will cause dynamic partition creation to fail.
-
-  :::
-
-  `hot_partition_num` is all partitions in the previous n days and in the 
future.
-
-  Let us give an example. Suppose today is 2021-05-20, partition by day, and 
the properties of dynamic partition are set to: hot_partition_num=2, end=3, 
start=-3. Then the system will automatically create the following partitions, 
and set the `storage_medium` and `storage_cooldown_time` properties:
-
-  ```sql
-  p20210517: ["2021-05-17", "2021-05-18") storage_medium=HDD 
storage_cooldown_time=9999-12-31 23:59:59
-  p20210518: ["2021-05-18", "2021-05-19") storage_medium=HDD 
storage_cooldown_time=9999-12-31 23:59:59
-  p20210519: ["2021-05-19", "2021-05-20") storage_medium=SSD 
storage_cooldown_time=2021-05-21 00:00:00
-  p20210520: ["2021-05-20", "2021-05-21") storage_medium=SSD 
storage_cooldown_time=2021-05-22 00:00:00
-  p20210521: ["2021-05-21", "2021-05-22") storage_medium=SSD 
storage_cooldown_time=2021-05-23 00:00:00
-  p20210522: ["2021-05-22", "2021-05-23") storage_medium=SSD 
storage_cooldown_time=2021-05-24 00:00:00
-  p20210523: ["2021-05-23", "2021-05-24") storage_medium=SSD 
storage_cooldown_time=2021-05-25 00:00:00
-  ```
-
-
 - `dynamic_partition.reserved_history_periods`
 
   The range of reserved history periods. It should be in the form of 
`[yyyy-MM-dd,yyyy-MM-dd],[...,...]` while the `dynamic_partition.time_unit` is 
"DAY, WEEK, MONTH and YEAR". And it should be in the form of `[yyyy-MM-dd 
HH:mm:ss,yyyy-MM-dd HH:mm:ss],[...,...]` while the dynamic_partition.time_unit` 
is "HOUR". And no more spaces expected. The default value is `"NULL"`, which 
means it is not set.
@@ -553,17 +533,6 @@ The rules of dynamic partition are prefixed with 
`dynamic_partition.`:
   Otherwise, every `[...,...]` in `reserved_history_periods` is a couple of 
properties, and they should be set at the same time. And the first date can't 
be larger than the second one.
 
 
-- `dynamic_partition.storage_medium`
-
-  
-  :::info Note
-  This parameteres is supported since Doris version 1.2.3
-  :::
-
-  Specifies the default storage medium for the created dynamic partition. HDD 
is the default, SSD can be selected.
-
-  Note that when set to SSD, the `hot_partition_num` property will no longer 
take effect, all partitions will default to SSD storage media and the cooldown 
time will be 9999-12-31 23:59:59.
-
 ### Create history partition rules
 
 When `create_history_partition` is `true`, i.e. history partition creation is 
enabled, Doris determines the number of history partitions to be created based 
on `dynamic_partition.start` and `dynamic_partition.history_partition_num`. 
diff --git 
a/versioned_docs/version-3.0/table-design/tiered-storage/diff-disk-medium-migration.md
 
b/versioned_docs/version-3.0/table-design/tiered-storage/diff-disk-medium-migration.md
new file mode 100644
index 0000000000..1201bba2b0
--- /dev/null
+++ 
b/versioned_docs/version-3.0/table-design/tiered-storage/diff-disk-medium-migration.md
@@ -0,0 +1,66 @@
+---
+{
+"title": "SSD and HDD tiered storage",
+"language": "en"
+}
+---
+
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+Doris supports setting parameters so that dynamic partitions are created on a specific disk type, and migrating data between disk types (HDD, SSD) based on the data's hot and cold characteristics, which accelerates read and write performance.
+
+- `dynamic_partition.hot_partition_num`
+
+  Specifies how many of the latest partitions are hot partitions. For hot partitions, the system automatically sets the `storage_medium` parameter to SSD and sets `storage_cooldown_time`.
+
+  For the `dynamic_partition` parameters, please refer to [data-partition](../../table-design/data-partition.md#dynamic-partitioning).
+
+  :::tip
+
+  If there is no SSD disk path under the storage path, configuring this 
parameter will cause dynamic partition creation to fail.
+
+  :::
+
+  `hot_partition_num` covers all partitions within the previous n days as well as all future partitions.
+
+  For example, suppose today is 2021-05-20, the table is partitioned by day, and the dynamic partition properties are set to hot_partition_num=2, end=3, start=-3. The system then automatically creates the following partitions and sets their `storage_medium` and `storage_cooldown_time` properties:
+
+  ```sql
+  p20210517: ["2021-05-17", "2021-05-18") storage_medium=HDD 
storage_cooldown_time=9999-12-31 23:59:59
+  p20210518: ["2021-05-18", "2021-05-19") storage_medium=HDD 
storage_cooldown_time=9999-12-31 23:59:59
+  p20210519: ["2021-05-19", "2021-05-20") storage_medium=SSD 
storage_cooldown_time=2021-05-21 00:00:00
+  p20210520: ["2021-05-20", "2021-05-21") storage_medium=SSD 
storage_cooldown_time=2021-05-22 00:00:00
+  p20210521: ["2021-05-21", "2021-05-22") storage_medium=SSD 
storage_cooldown_time=2021-05-23 00:00:00
+  p20210522: ["2021-05-22", "2021-05-23") storage_medium=SSD 
storage_cooldown_time=2021-05-24 00:00:00
+  p20210523: ["2021-05-23", "2021-05-24") storage_medium=SSD 
storage_cooldown_time=2021-05-25 00:00:00
+  ```
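+
+  As a sketch, a table using these properties could be created as follows (table and column names are illustrative):
+
+  ```sql
+  CREATE TABLE tiered_table (k1 DATE)
+  PARTITION BY RANGE(k1)()
+  DISTRIBUTED BY HASH (k1) BUCKETS 3
+  PROPERTIES
+  (
+      "dynamic_partition.enable" = "true",
+      "dynamic_partition.time_unit" = "DAY",
+      "dynamic_partition.prefix" = "p",
+      "dynamic_partition.buckets" = "3",
+      "dynamic_partition.start" = "-3",
+      "dynamic_partition.end" = "3",
+      "dynamic_partition.hot_partition_num" = "2"
+  );
+  ```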
+
+- `dynamic_partition.storage_medium`
+
+  :::info Note
+  This parameter is supported since Doris version 1.2.3.
+  :::
+
+  Specifies the default storage medium for created dynamic partitions. The default is HDD; SSD can also be selected.
+
+  Note that when this is set to SSD, the `hot_partition_num` property no longer takes effect; all partitions default to SSD storage and their cooldown time is 9999-12-31 23:59:59.
\ No newline at end of file
diff --git 
a/versioned_docs/version-3.0/table-design/tiered-storage/remote-storage.md 
b/versioned_docs/version-3.0/table-design/tiered-storage/remote-storage.md
new file mode 100644
index 0000000000..5c2645dccc
--- /dev/null
+++ b/versioned_docs/version-3.0/table-design/tiered-storage/remote-storage.md
@@ -0,0 +1,238 @@
+---
+{
+"title": "Remote Storage",
+"language": "en"
+}
+---
+
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+## Use Case
+
+A significant future use case is log storage similar to ES (Elasticsearch), where data in the log scenario is split by date. Much of it is cold data with infrequent queries, so the storage cost of such data needs to be reduced. From a cost-saving perspective:
+
+- Regular cloud disks from various vendors are more expensive than object storage.
+
+- In actual online usage of the Doris Cluster, the utilization of regular 
cloud disks cannot reach 100%.
+
+- Cloud disks are not billed on demand, while object storage can be billed on 
demand.
+
+- Using regular cloud disks for high availability requires multiple replicas, plus replica migration in case of failures. In contrast, storing data on object storage eliminates these issues because the storage is shared.
+
+## Solution
+
+Set the freeze time at the partition level to indicate how long a partition stays on local storage before it is frozen, and define the remote storage location where data is stored after freezing. A daemon thread on the BE (Backend) periodically checks each table's freeze condition; when the condition is met, the data is uploaded to S3-compatible object storage or HDFS.
+
+Cold-hot tiering supports all Doris functionality; it only moves some data to object storage to save costs, without sacrificing any features. Therefore, it has the following characteristics:
+
+- Cold data is stored on object storage, and users do not need to worry about 
data consistency and security.
+
+- Flexible freeze strategy, where the cold remote storage property can be 
applied to both table and partition levels.
+
+- Users can query data without worrying about data distribution. If the data 
is not local, it will be pulled from the object storage and cached locally in 
the BE (Backend).
+
+- Replica clone optimization. If the stored data is on object storage, there 
is no need to fetch the stored data locally during replica cloning.
+
+- Remote object space recycling. If a table or partition is deleted or if 
space waste occurs during the cold-hot tiering process due to exceptional 
situations, a recycler thread will periodically recycle the space, saving 
storage resources.
+
+- Cache optimization: accessed cold data is cached locally on the BE, achieving query performance close to that of data without cold-hot tiering.
+
+- BE thread pool optimization: local and remote data sources are distinguished so that latency in reading from object storage does not impact query performance.
+
+## Usage of Storage Policy
+
+The storage policy is the entry point for using the cold-hot tiering feature. Users only need to associate a storage policy with a table or partition, either at table creation time or later.
+
+:::tip
+When creating an S3 resource, a remote S3 connection validation is performed 
to ensure the correct creation of the resource.
+:::
+
+Here is an example of creating an S3 resource:
+
+```Plain
+CREATE RESOURCE "remote_s3"
+PROPERTIES
+(
+    "type" = "s3",
+    "s3.endpoint" = "bj.s3.com",
+    "s3.region" = "bj",
+    "s3.bucket" = "test-bucket",
+    "s3.root.path" = "path/to/root",
+    "s3.access_key" = "bbb",
+    "s3.secret_key" = "aaaa",
+    "s3.connection.maximum" = "50",
+    "s3.connection.request.timeout" = "3000",
+    "s3.connection.timeout" = "1000"
+);
+
+CREATE STORAGE POLICY test_policy
+PROPERTIES(
+    "storage_resource" = "remote_s3",
+    "cooldown_ttl" = "1d"
+);
+
+CREATE TABLE IF NOT EXISTS create_table_use_created_policy 
+(
+    k1 BIGINT,
+    k2 LARGEINT,
+    v1 VARCHAR(2048)
+)
+UNIQUE KEY(k1)
+DISTRIBUTED BY HASH (k1) BUCKETS 3
+PROPERTIES(
+    "storage_policy" = "test_policy"
+);
+```
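+
+You can quickly verify that the resource and policy were created; a minimal check (output columns vary by version):
+
+```sql
+SHOW RESOURCES WHERE NAME = "remote_s3";
+SHOW STORAGE POLICY USING FOR test_policy;
+```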
+
+And here is an example of creating an HDFS resource:
+
+```Plain
+CREATE RESOURCE "remote_hdfs" PROPERTIES (
+        "type"="hdfs",
+        "fs.defaultFS"="fs_host:default_fs_port",
+        "hadoop.username"="hive",
+        "hadoop.password"="hive",
+        "dfs.nameservices" = "my_ha",
+        "dfs.ha.namenodes.my_ha" = "my_namenode1, my_namenode2",
+        "dfs.namenode.rpc-address.my_ha.my_namenode1" = "nn1_host:rpc_port",
+        "dfs.namenode.rpc-address.my_ha.my_namenode2" = "nn2_host:rpc_port",
+        "dfs.client.failover.proxy.provider.my_ha" = 
"org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
+    );
+
+CREATE STORAGE POLICY test_policy PROPERTIES (
+    "storage_resource" = "remote_hdfs",
+    "cooldown_ttl" = "300"
+);
+
+CREATE TABLE IF NOT EXISTS create_table_use_created_policy (
+    k1 BIGINT,
+    k2 LARGEINT,
+    v1 VARCHAR(2048)
+)
+UNIQUE KEY(k1)
+DISTRIBUTED BY HASH (k1) BUCKETS 3
+PROPERTIES(
+    "storage_policy" = "test_policy"
+);
+```
+
+Associate a storage policy with an existing table by using the following 
command:
+
+```Plain
+ALTER TABLE create_table_not_have_policy SET ("storage_policy" = 
"test_policy");
+```
+
+Associate a storage policy with an existing partition by using the following 
command:
+
+```Plain
+ALTER TABLE create_table_partition MODIFY PARTITION (*) SET ("storage_policy" 
= "test_policy");
+```
+
+:::tip
+If you specify different storage policies for the entire table and some 
partitions during table creation, the storage policy set for the partitions 
will be ignored, and all partitions of the table will use the table's storage 
policy. If you want a specific partition to have a different storage policy 
than the others, you can use the method mentioned above to modify the 
association for that specific partition.
+
+For more details, please refer to the following documents: [RESOURCE](../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-RESOURCE), [POLICY](../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-POLICY), [CREATE TABLE](../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-TABLE), and [ALTER TABLE](../sql-manual/sql-statements/Data-Definition-Statements/Alter/ALTER-TABLE-COLUMN).
+:::
+
+### Limitations
+
+- A single table or partition can only be associated with one storage policy. 
Once associated, the storage policy cannot be dropped without first removing 
the association between them.
+
+- For the resource associated with a storage policy, the properties that determine the data storage path, such as bucket, endpoint, and root_path, cannot be modified.
+
+- Storage policies support creation, modification, and deletion (see the sketch after this list). Before deleting a storage policy, ensure that no tables reference it.
+
+- Unique model tables with the Merge-on-Write feature enabled do not support setting a storage policy.
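+
+As a sketch of the policy lifecycle mentioned above, assuming `test_policy` is no longer referenced by any table or partition:
+
+```sql
+-- Modify a mutable property of an existing storage policy
+ALTER STORAGE POLICY test_policy PROPERTIES ("cooldown_ttl" = "2d");
+
+-- Drop the policy once nothing references it
+DROP STORAGE POLICY test_policy;
+```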
+
+
+## Occupied Size of Cold Data Objects
+
+Method 1: You can use the `show proc '/backends'` command to view the size of 
each backend's uploaded objects. Look for the `RemoteUsedCapacity` field. 
Please note that this method may have some latency.
+
+Method 2: You can use the `show tablets from tableName` command to view the 
size of each tablet in a table, indicated by the `RemoteDataSize` field.
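+
+For example, using the table created earlier:
+
+```sql
+-- Per-BE remote usage: check the RemoteUsedCapacity field (may lag behind actual usage)
+SHOW PROC '/backends';
+
+-- Per-tablet remote usage: check the RemoteDataSize field
+SHOW TABLETS FROM create_table_use_created_policy;
+```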
+
+## Cache for Cold Data
+
+As mentioned earlier, caching is introduced for cold data to optimize query performance and save object storage resources. When cold data is first accessed after cooling, Doris downloads the cooled data from remote storage onto the local disk of the Backend (BE). The cold data cache has the following characteristics:
+
+- The cache is stored on the BE's disk and does not occupy memory space.
+
+- The cache can be limited in size and uses LRU (Least Recently Used) for data 
eviction.
+
+- The cold data cache shares its implementation with the file cache used for federated query catalogs. Please refer to [Filecache](../lakehouse/filecache) for more details.
+
+## Compaction of Cold Data
+
+The cooldown time of a rowset is computed as the moment the rowset file was written to the local disk plus the configured cooldown duration. Since data is not written and cooled all at once, Doris performs compaction on cold data to avoid the small-file problem on object storage. However, cold data compaction runs with lower frequency and resource priority than regular compaction, so it is recommended to compact local hot data before it cools. You can adjust the following BE parameters (see the config sketch after this list):
+
+- The BE parameter `cold_data_compaction_thread_num` sets the concurrency for 
cold data compaction. The default value is 2.
+
+- The BE parameter `cold_data_compaction_interval_sec` sets the time interval 
for cold data compaction. The default value is 1800 seconds (30 minutes).
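+
+As a configuration sketch, both parameters can be set in `be.conf` (the values below are illustrative, not recommendations):
+
+```Plain
+cold_data_compaction_thread_num = 4
+cold_data_compaction_interval_sec = 3600
+```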
+
+## Schema Change for Cold Data
+
+The following schema change types are supported for cold data:
+
+- Adding or deleting columns
+
+- Modifying column types
+
+- Adjusting column order
+
+- Adding or modifying indexes
+
+## Garbage Collection of Cold Data
+
+Garbage data for cold data refers to data that is not used by any replica. The 
following situations may generate garbage data on object storage:
+
+1. The upload of a rowset fails after some of its segments have already been uploaded successfully.
+
+2. After the FE reselects the CooldownReplica, the rowset versions of the old 
and new CooldownReplica do not match. FollowerReplicas synchronize the 
CooldownMeta of the new CooldownReplica, and the rowsets with inconsistent 
versions in the old CooldownReplica become garbage data.
+
+3. After cold data compaction, the rowsets before merging cannot be 
immediately deleted because they may still be used by other replicas. However, 
eventually, all FollowerReplicas use the latest merged rowset, and the rowsets 
before merging become garbage data.
+
+Garbage data on object storage is not cleaned up immediately. The BE parameter `remove_unused_remote_files_interval_sec` sets the time interval for garbage collection of cold data. The default value is 21600 seconds (6 hours).
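+
+This interval can likewise be adjusted in `be.conf`, for example (illustrative value):
+
+```Plain
+remove_unused_remote_files_interval_sec = 43200
+```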
+
+## TODOs
+
+- Updates to some remote storage usage metrics are not yet retrieved comprehensively.
+
+## FAQs
+
+1. `ERROR 1105 (HY000): errCode = 2, detailMessage = Failed to create 
repository: connect to s3 failed: Unable to marshall request to JSON: host must 
not be null.`
+
+The S3 SDK defaults to using the virtual-hosted style. However, some object 
storage systems (e.g., MinIO) may not have virtual-hosted style access enabled 
or supported. In such cases, you can add the `use_path_style` parameter to 
force the use of path-style access:
+
+```Plain
+CREATE RESOURCE "remote_s3"
+PROPERTIES
+(
+    "type" = "s3",
+    "s3.endpoint" = "bj.s3.com",
+    "s3.region" = "bj",
+    "s3.bucket" = "test-bucket",
+    "s3.root.path" = "path/to/root",
+    "s3.access_key" = "bbb",
+    "s3.secret_key" = "aaaa",
+    "s3.connection.maximum" = "50",
+    "s3.connection.request.timeout" = "3000",
+    "s3.connection.timeout" = "1000",
+    "use_path_style" = "true"
+);
+```
\ No newline at end of file
diff --git a/versioned_sidebars/version-2.1-sidebars.json 
b/versioned_sidebars/version-2.1-sidebars.json
index 81750abf89..30d4d1c210 100644
--- a/versioned_sidebars/version-2.1-sidebars.json
+++ b/versioned_sidebars/version-2.1-sidebars.json
@@ -68,6 +68,14 @@
                         "table-design/index/ngram-bloomfilter-index"
                     ]
                 },
+                {
+                    "type": "category",
+                    "label": "Tiered Storage",
+                    "items": [
+                        
"table-design/tiered-storage/diff-disk-medium-migration",
+                        "table-design/tiered-storage/remote-storage"
+                    ]
+                },
                 {
                     "type": "category",
                     "label": "Data Models",
diff --git a/versioned_sidebars/version-3.0-sidebars.json 
b/versioned_sidebars/version-3.0-sidebars.json
index c69c543c2c..8042c850e1 100644
--- a/versioned_sidebars/version-3.0-sidebars.json
+++ b/versioned_sidebars/version-3.0-sidebars.json
@@ -99,6 +99,14 @@
                         "table-design/index/ngram-bloomfilter-index"
                     ]
                 },
+                {
+                    "type": "category",
+                    "label": "Tiered Storage",
+                    "items": [
+                        
"table-design/tiered-storage/diff-disk-medium-migration",
+                        "table-design/tiered-storage/remote-storage"
+                    ]
+                },
                 {
                     "type": "category",
                     "label": "Data Models",


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org
For additional commands, e-mail: commits-h...@doris.apache.org
