This is an automated email from the ASF dual-hosted git repository.

kassiez pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git


The following commit(s) were added to refs/heads/master by this push:
     new a9961b8af86 revise dead link for eng (#2130)
a9961b8af86 is described below

commit a9961b8af8601497e72667177bb07ac33fe1a8c0
Author: wyx123654 <[email protected]>
AuthorDate: Thu Feb 27 11:34:18 2025 +0800

    revise dead link for eng (#2130)
    
    ## Versions
    
    - [✅] dev
    - [✅] 3.0
    - [✅] 2.1
    - [ ] 2.0
    
    ## Languages
    
    - [ ] Chinese
    - [✅] English
    
    ## Docs Checklist
    
    - [ ] Checked by AI
    - [ ] Test Cases Built
---
 docs/admin-manual/auth/authorization/data.md                 |  4 ++--
 docs/admin-manual/auth/user-privilege.md                     |  6 +++---
 docs/admin-manual/config/fe-config-template.md               |  4 ++--
 docs/admin-manual/workload-management/compute-group.md       |  4 ++--
 docs/data-operate/import/file-format/csv.md                  |  4 ++--
 docs/data-operate/import/file-format/json.md                 |  4 ++--
 docs/data-operate/import/import-way/insert-into-manual.md    |  2 +-
 docs/ecosystem/hive-bitmap-udf.md                            |  2 +-
 docs/ecosystem/hive-hll-udf.md                               |  2 +-
 .../materialized-view/async-materialized-view/faq.md         |  2 +-
 .../materialized-view/async-materialized-view/use-guide.md   |  4 ++--
 docs/query-data/udf/java-user-defined-function.md            |  2 +-
 docs/table-design/overview.md                                |  2 +-
 docs/table-design/tiered-storage/remote-storage.md           |  2 +-
 .../version-2.1/admin-manual/auth/user-privilege.md          |  6 +++---
 versioned_docs/version-2.1/admin-manual/config/fe-config.md  |  4 ++--
 .../import/data-source/migrate-data-from-other-oltp.md       |  2 +-
 .../version-2.1/data-operate/import/file-format/csv.md       |  2 +-
 .../version-2.1/data-operate/import/file-format/json.md      |  2 +-
 .../data-operate/import/import-way/broker-load-manual.md     |  2 +-
 .../import/import-way/insert-into-values-manual.md           |  2 +-
 .../install/deploy-on-kubernetes/install-config-cluster.md   |  8 ++++----
 .../version-2.1/lakehouse/datalake-analytics/hudi.md         |  2 +-
 versioned_docs/version-2.1/lakehouse/external-statistics.md  |  2 +-
 versioned_docs/version-2.1/lakehouse/lakehouse-overview.md   |  2 +-
 .../materialized-view/async-materialized-view/faq.md         |  2 +-
 .../async-materialized-view/functions-and-demands.md         |  2 +-
 .../materialized-view/async-materialized-view/use-guide.md   |  4 ++--
 .../version-2.1/query-data/udf/java-user-defined-function.md |  2 +-
 .../table-design/data-partitioning/dynamic-partitioning.md   |  2 +-
 versioned_docs/version-2.1/table-design/data-type.md         | 12 ++++++------
 versioned_docs/version-2.1/table-design/overview.md          |  2 +-
 .../version-3.0/admin-manual/auth/user-privilege.md          |  6 +++---
 versioned_docs/version-3.0/admin-manual/config/be-config.md  |  2 +-
 versioned_docs/version-3.0/admin-manual/config/fe-config.md  |  4 ++--
 .../admin-manual/workload-management/compute-group.md        |  4 ++--
 .../compute-storage-decoupled/managing-compute-cluster.md    |  4 ++--
 .../import/data-source/migrate-data-from-other-oltp.md       |  2 +-
 .../version-3.0/data-operate/import/file-format/csv.md       |  2 +-
 .../version-3.0/data-operate/import/file-format/json.md      |  2 +-
 .../data-operate/import/import-way/broker-load-manual.md     |  2 +-
 .../import/import-way/insert-into-values-manual.md           |  2 +-
 .../integrated-storage-compute/install-config-cluster.md     |  6 +++---
 .../separating-storage-compute/config-cg.md                  |  6 +++---
 .../separating-storage-compute/config-fe.md                  |  6 +++---
 .../separating-storage-compute/config-ms.md                  |  4 ++--
 .../version-3.0/lakehouse/datalake-analytics/hudi.md         |  2 +-
 versioned_docs/version-3.0/lakehouse/external-statistics.md  |  2 +-
 versioned_docs/version-3.0/lakehouse/lakehouse-overview.md   |  2 +-
 .../materialized-view/async-materialized-view/faq.md         |  2 +-
 .../async-materialized-view/functions-and-demands.md         |  2 +-
 .../materialized-view/async-materialized-view/use-guide.md   |  4 ++--
 .../version-3.0/query-data/udf/java-user-defined-function.md |  2 +-
 .../table-design/data-partitioning/dynamic-partitioning.md   |  2 +-
 versioned_docs/version-3.0/table-design/data-type.md         | 12 ++++++------
 versioned_docs/version-3.0/table-design/overview.md          |  2 +-
 56 files changed, 94 insertions(+), 94 deletions(-)

diff --git a/docs/admin-manual/auth/authorization/data.md 
b/docs/admin-manual/auth/authorization/data.md
index 1056fc45741..cbfc49d29f4 100644
--- a/docs/admin-manual/auth/authorization/data.md
+++ b/docs/admin-manual/auth/authorization/data.md
@@ -37,8 +37,8 @@ Equivalent to automatically adding the predicate set in the 
Row Policy for users
 Row Policy cannot be set for default users root and admin.
 
 ### Related Commands
-- View Row Permission Policies [SHOW ROW 
POLICY](../../../sql-manual/sql-statements/Show-Statements/SHOW-POLICY.md)
-- Create Row Permission Policy [CREATE ROW 
POLICY](../../../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-POLICY.md)
+- View Row Permission Policies [SHOW ROW 
POLICY](../../../sql-manual/sql-statements/data-governance/SHOW-ROW-POLICY)
+- Create Row Permission Policy [CREATE ROW 
POLICY](../../../sql-manual/sql-statements/data-governance/CREATE-ROW-POLICY)
 
 ### Row Permission Example
 1. Restrict the test user to only query data in table1 where c1='a'
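As a quick illustration of the two commands linked above (a minimal sketch; `db1.table1` and the `test` user are hypothetical):

```sql
-- Restrict user test to rows of db1.table1 where c1 = 'a'
CREATE ROW POLICY test_row_policy ON db1.table1
AS RESTRICTIVE TO test USING (c1 = 'a');

-- List the row policies that are in effect
SHOW ROW POLICY;
```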
diff --git a/docs/admin-manual/auth/user-privilege.md 
b/docs/admin-manual/auth/user-privilege.md
index 2fa9d7f4db7..e38e0770017 100644
--- a/docs/admin-manual/auth/user-privilege.md
+++ b/docs/admin-manual/auth/user-privilege.md
@@ -96,8 +96,8 @@ The default role cannot be deleted or assigned to others. 
When a user is deleted
 1. Create users: [CREATE 
USER](../../sql-manual/sql-statements/account-management/CREATE-USER)
 2. Alter users: [ALTER 
USER](../../sql-manual/sql-statements/account-management/ALTER-USER)
 3. Delete users: [DROP 
USER](../../sql-manual/sql-statements/account-management/DROP-USER)
-4. Authorization/Assign roles: 
[GRANT](../../sql-manual/sql-reference/Account-Management-Statements/GRANT.md)
-5. Withdrawal/REVOKE roles: 
[REVOKE](../../sql-manual/sql-reference/Account-Management-Statements/REVOKE.md)
+4. Authorization/Assign roles: 
[GRANT](../../sql-manual/sql-statements/account-management/GRANT-TO)
+5. Withdraw/Revoke roles: [REVOKE](../../sql-manual/sql-statements/account-management/REVOKE-FROM)
 6. Create role: [CREATE 
ROLE](../../sql-manual/sql-statements/account-management/CREATE-ROLE)
 7. Delete roles: [DROP 
ROLE](../../sql-manual/sql-statements/account-management/DROP-ROLE)
 8. View current user privileges: [SHOW 
GRANTS](../../sql-manual/sql-statements/account-management/SHOW-GRANTS)
@@ -293,4 +293,4 @@ Here are some usage scenarios of Doris privilege system.
 
 ## More help
 
-For more detailed syntax and best practices for permission management use, 
please refer to the 
[GRANTS](../../sql-manual/sql-reference/Account-Management-Statements/GRANT.md) 
command manual. Enter `HELP GRANTS` at the command line of the MySql client for 
more help information.
+For more detailed syntax and best practices for permission management, please refer to the [GRANTS](../../sql-manual/sql-statements/account-management/GRANT-TO) command manual. Enter `HELP GRANTS` at the command line of the MySQL client for more help information.
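For reference, a minimal sketch of the GRANT/REVOKE statements documented on the pages linked above (account and database names are hypothetical):

```sql
-- Grant SELECT on every table in example_db to user jack
GRANT SELECT_PRIV ON example_db.* TO 'jack'@'%';

-- Withdraw the same privilege
REVOKE SELECT_PRIV ON example_db.* FROM 'jack'@'%';
```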
diff --git a/docs/admin-manual/config/fe-config-template.md 
b/docs/admin-manual/config/fe-config-template.md
index 5e1cdd3b5e7..ca943c2500d 100644
--- a/docs/admin-manual/config/fe-config-template.md
+++ b/docs/admin-manual/config/fe-config-template.md
@@ -48,7 +48,7 @@ There are two ways to view the configuration items of FE:
 
 2. View by command
 
-    After the FE is started, you can view the configuration items of the FE in 
the MySQL client with the following command,Concrete language law reference 
[ADMIN-SHOW-CONFIG](../../sql-manual/sql-reference/Database-Administration-Statements/ADMIN-SHOW-CONFIG.md):
+    After the FE is started, you can view the configuration items of the FE in the MySQL client with the following command. For the specific syntax, see [ADMIN-SHOW-CONFIG](../../sql-manual/sql-statements/cluster-management/instance-management/SHOW-FRONTEND-CONFIG):
 
     `ADMIN SHOW FRONTEND CONFIG;`
 
@@ -1519,7 +1519,7 @@ For some high-frequency load work, such as: INSERT, 
STREAMING LOAD, ROUTINE_LOAD
 
 #### `label_clean_interval_second`
 
-Default:1 * 3600  (1 hour)
+Default: 1 * 3600  (1 hour)
 
 Load label cleaner will run every *label_clean_interval_second* to clean the 
outdated jobs.
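For example, the item can be inspected from the MySQL client and, assuming it is mutable at runtime, adjusted without a restart (the value below is illustrative):

```sql
ADMIN SHOW FRONTEND CONFIG LIKE 'label_clean_interval_second';
-- Assuming the config is runtime-mutable:
ADMIN SET FRONTEND CONFIG ("label_clean_interval_second" = "7200");
```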
 
diff --git a/docs/admin-manual/workload-management/compute-group.md 
b/docs/admin-manual/workload-management/compute-group.md
index 0e0d0247cdd..e31257384c8 100644
--- a/docs/admin-manual/workload-management/compute-group.md
+++ b/docs/admin-manual/workload-management/compute-group.md
@@ -62,8 +62,8 @@ SHOW COMPUTE GROUPS;
 
 ## Adding Compute Groups
 
-Managing compute groups requires `OPERATOR` privilege, which controls node 
management permissions. For more details, please refer to [Privilege 
Management](../sql-manual/sql-statements/Account-Management-Statements/GRANT.md).
 By default, only the root account has the `OPERATOR` privilege, but it can be 
granted to other accounts using the `GRANT` command.
-To add a BE and assign it to a compute group, use the [Add 
BE](../sql-manual/sql-statements/Cluster-Management-Statements/ALTER-SYSTEM-ADD-BACKEND.md)
 command. For example:
+Managing compute groups requires `OPERATOR` privilege, which controls node 
management permissions. For more details, please refer to [Privilege 
Management](../../sql-manual/sql-statements/account-management/GRANT-TO). By 
default, only the root account has the `OPERATOR` privilege, but it can be 
granted to other accounts using the `GRANT` command.
+To add a BE and assign it to a compute group, use the [Add 
BE](../../sql-manual/sql-statements/cluster-management/instance-management/ADD-BACKEND)
 command. For example:
 
 ```sql
 ALTER SYSTEM ADD BACKEND 'host:9050' PROPERTIES ("tag.compute_group_name" = 
"new_group");
diff --git a/docs/data-operate/import/file-format/csv.md 
b/docs/data-operate/import/file-format/csv.md
index 39d3b97dd7c..30f044579a5 100644
--- a/docs/data-operate/import/file-format/csv.md
+++ b/docs/data-operate/import/file-format/csv.md
@@ -34,8 +34,8 @@ Doris supports the following methods to load CSV format data:
 - [Broker Load](../import-way/broker-load-manual)
 - [Routine Load](../import-way/routine-load-manual)
 - [MySQL Load](../import-way/mysql-load-manual)
-- [INSERT INTO FROM S3 
TVF](../../sql-manual/sql-functions/table-valued-functions/s3)
-- [INSERT INTO FROM HDFS 
TVF](../../sql-manual/sql-functions/table-valued-functions/hdfs)
+- [INSERT INTO FROM S3 
TVF](../../../sql-manual/sql-functions/table-valued-functions/s3)
+- [INSERT INTO FROM HDFS 
TVF](../../../sql-manual/sql-functions/table-valued-functions/hdfs)
 
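A minimal sketch of loading a CSV file through the S3 TVF listed above; the bucket, path, endpoint, and credentials are placeholders:

```sql
INSERT INTO target_tbl
SELECT * FROM S3(
    "uri" = "s3://your-bucket/path/example.csv",
    "format" = "csv",
    "s3.endpoint" = "s3.us-east-1.amazonaws.com",
    "s3.region" = "us-east-1",
    "s3.access_key" = "<your-ak>",
    "s3.secret_key" = "<your-sk>"
);
```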
 ## Parameter Configuration
 
diff --git a/docs/data-operate/import/file-format/json.md 
b/docs/data-operate/import/file-format/json.md
index 3a05262a7d0..ec75d52b4f3 100644
--- a/docs/data-operate/import/file-format/json.md
+++ b/docs/data-operate/import/file-format/json.md
@@ -33,8 +33,8 @@ The following loading methods support JSON format data:
 - [Stream Load](../import-way/stream-load-manual.md)
 - [Broker Load](../import-way/broker-load-manual.md)
 - [Routine Load](../import-way/routine-load-manual.md)
-- [INSERT INTO FROM S3 
TVF](../../sql-manual/sql-functions/table-valued-functions/s3)
-- [INSERT INTO FROM HDFS 
TVF](../../sql-manual/sql-functions/table-valued-functions/hdfs)
+- [INSERT INTO FROM S3 
TVF](../../../sql-manual/sql-functions/table-valued-functions/s3)
+- [INSERT INTO FROM HDFS 
TVF](../../../sql-manual/sql-functions/table-valued-functions/hdfs)
 
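Similarly, a sketch of loading line-delimited JSON through the HDFS TVF listed above (cluster address, user, and path are placeholders):

```sql
INSERT INTO target_tbl
SELECT * FROM HDFS(
    "uri" = "hdfs://namenode:8020/path/example.json",
    "fs.defaultFS" = "hdfs://namenode:8020",
    "hadoop.username" = "hadoop",
    "format" = "json",
    "read_json_by_line" = "true"
);
```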
 ## Supported JSON Formats
 
diff --git a/docs/data-operate/import/import-way/insert-into-manual.md 
b/docs/data-operate/import/import-way/insert-into-manual.md
index 40adf4c5ff5..7d8ed40c397 100644
--- a/docs/data-operate/import/import-way/insert-into-manual.md
+++ b/docs/data-operate/import/import-way/insert-into-manual.md
@@ -314,7 +314,7 @@ PROPERTIES (
 );
 ```
 
-2. For detailed instructions on creating Doris tables, please refer to [CREATE 
TABLE](../../../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-TABLE/).
+2. For detailed instructions on creating Doris tables, please refer to [CREATE 
TABLE](../../../sql-manual/sql-statements/table-and-view/table/CREATE-TABLE).
 
 3. Importing data (from the `hive.db1.source_tbl` table into the `target_tbl` 
table).
 
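The import in step 3 comes down to a single statement (sketch; assumes the `hive` catalog from the surrounding example):

```sql
INSERT INTO target_tbl
SELECT * FROM hive.db1.source_tbl;
```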
diff --git a/docs/ecosystem/hive-bitmap-udf.md 
b/docs/ecosystem/hive-bitmap-udf.md
index cdeef5d92a2..59b09c4dc05 100644
--- a/docs/ecosystem/hive-bitmap-udf.md
+++ b/docs/ecosystem/hive-bitmap-udf.md
@@ -125,7 +125,7 @@ CREATE TABLE IF NOT EXISTS `test`.`hive_bitmap_table`(
 ) stored as textfile 
 ```
 
-2. [Creating a Catalog in Doris](../lakehouse/datalake-analytics/hive.md)
+2. [Creating a Catalog in Doris](../lakehouse/catalogs/hive-catalog)
 
 ```sql
 CREATE CATALOG hive PROPERTIES (
diff --git a/docs/ecosystem/hive-hll-udf.md b/docs/ecosystem/hive-hll-udf.md
index 9599a6a9b58..568cee2b14e 100644
--- a/docs/ecosystem/hive-hll-udf.md
+++ b/docs/ecosystem/hive-hll-udf.md
@@ -181,7 +181,7 @@ CREATE TABLE IF NOT EXISTS `hive_hll_table`(
 -- then reuse the previous steps to insert data from a normal table into it 
using the to_hll function
 ```
 
-2. [Create a Doris catalog](../lakehouse/datalake-analytics/hive.md)
+2. [Create a Doris catalog](../lakehouse/catalogs/hive-catalog)
 
 ```sql
 CREATE CATALOG hive PROPERTIES (
diff --git 
a/docs/query-acceleration/materialized-view/async-materialized-view/faq.md 
b/docs/query-acceleration/materialized-view/async-materialized-view/faq.md
index 80381a2c09e..9c6daf62386 100644
--- a/docs/query-acceleration/materialized-view/async-materialized-view/faq.md
+++ b/docs/query-acceleration/materialized-view/async-materialized-view/faq.md
@@ -88,7 +88,7 @@ Unable to find a suitable base table for partitioning
 
 This error typically indicates that the SQL definition of the materialized 
view and the choice of partitioning fields do not allow incremental partition 
updates, resulting in an error during the creation of the partitioned 
materialized view.
 
-- For incremental partition updates, the materialized view's SQL definition 
and partitioning field selection must meet specific requirements. See 
[Materialized View Refresh 
Modes](../../../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-ASYNC-MATERIALIZED-VIEW#refreshmethod)
 for details.
+- For incremental partition updates, the materialized view's SQL definition 
and partitioning field selection must meet specific requirements. See 
[Materialized View Refresh 
Modes](../../../sql-manual/sql-statements/table-and-view/async-materialized-view/CREATE-ASYNC-MATERIALIZED-VIEW#optional-parameters)
 for details.
 
 - The latest code can indicate the reason for partition build failure, with 
error summaries and descriptions provided in Appendix 2.
 
diff --git 
a/docs/query-acceleration/materialized-view/async-materialized-view/use-guide.md
 
b/docs/query-acceleration/materialized-view/async-materialized-view/use-guide.md
index 8596daa615b..ea186edc788 100644
--- 
a/docs/query-acceleration/materialized-view/async-materialized-view/use-guide.md
+++ 
b/docs/query-acceleration/materialized-view/async-materialized-view/use-guide.md
@@ -52,7 +52,7 @@ When the following conditions are met, it is recommended to 
create partitioned m
 
 - The tables used by the materialized view, except for the partitioned table, 
do not change frequently.
 
-- The definition SQL of the materialized view and the partition field meet the 
requirements of partition derivation, that is, meet the requirements of 
partition incremental update. Detailed requirements can be found in 
[CREATE-ASYNC-MATERIALIZED-VIEW](../../../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-ASYNC-MATERIALIZED-VIEW/#refreshmethod)
+- The definition SQL of the materialized view and the partition field meet the 
requirements of partition derivation, that is, meet the requirements of 
partition incremental update. Detailed requirements can be found in 
[CREATE-ASYNC-MATERIALIZED-VIEW](../../../sql-manual/sql-statements/table-and-view/async-materialized-view/CREATE-ASYNC-MATERIALIZED-VIEW#optional-parameters)
 
 - The number of partitions in the materialized view is not large, as too many 
partitions will lead to excessively long partition materialized view 
construction time.
 
@@ -62,7 +62,7 @@ If partitioned materialized views cannot be constructed, you 
can consider choosi
 
 ## Common Usage of Partitioned Materialized Views
 
-When the materialized view's base table data volume is large and the base 
table is a partitioned table, if the materialized view's definition SQL and 
partition fields meet the requirements of partition derivation, this scenario 
is suitable for building partitioned materialized views. For detailed 
requirements of partition derivation, refer to 
[CREATE-ASYNC-MATERIALIZED-VIEW](../../../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-ASYNC-MATERIALIZED-VIEW/#refreshmethod
 [...]
+When the materialized view's base table data volume is large and the base 
table is a partitioned table, if the materialized view's definition SQL and 
partition fields meet the requirements of partition derivation, this scenario 
is suitable for building partitioned materialized views. For detailed 
requirements of partition derivation, refer to 
[CREATE-ASYNC-MATERIALIZED-VIEW](../../../sql-manual/sql-statements/table-and-view/async-materialized-view/CREATE-ASYNC-MATERIALIZED-VIEW#optional-
 [...]
 
 The materialized view's partitions are created following the base table's 
partition mapping, generally having a 1:1 or 1:n relationship with the base 
table's partitions.
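A sketch of such a partitioned materialized view, assuming a hypothetical `lineitem` base table partitioned by `l_shipdate`:

```sql
CREATE MATERIALIZED VIEW mv_daily_qty
BUILD IMMEDIATE
REFRESH AUTO ON MANUAL
PARTITION BY (l_shipdate)
DISTRIBUTED BY HASH(l_shipdate) BUCKETS 2
AS
SELECT l_shipdate, SUM(l_quantity) AS total_qty
FROM lineitem
GROUP BY l_shipdate;
```

With a definition like this, new data landing in one base-table partition only requires refreshing the corresponding materialized-view partition.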
 
diff --git a/docs/query-data/udf/java-user-defined-function.md 
b/docs/query-data/udf/java-user-defined-function.md
index 96931bb2621..7250e717835 100644
--- a/docs/query-data/udf/java-user-defined-function.md
+++ b/docs/query-data/udf/java-user-defined-function.md
@@ -377,7 +377,7 @@ UDTF is supported starting from Doris version 3.0.
     }
     ```
 
-2. Register and create the Java-UDTF function in Doris. Two UDTF functions 
will be registered. Table functions in Doris may exhibit different behaviors 
due to the `_outer` suffix. For more details, refer to [OUTER 
combinator](../../sql-manual/sql-functions/table-functions/explode-numbers-outer.md).
+2. Register and create the Java-UDTF function in Doris. Two UDTF functions 
will be registered. Table functions in Doris may exhibit different behaviors 
due to the `_outer` suffix. For more details, refer to [OUTER 
combinator](../../sql-manual/sql-functions/table-functions/explode-numbers).
 For more syntax details, please refer to [CREATE 
FUNCTION](../../sql-manual/sql-statements/function/CREATE-FUNCTION).
 
     ```sql
diff --git a/docs/table-design/overview.md b/docs/table-design/overview.md
index 6e5fa917c66..798bd42cd05 100644
--- a/docs/table-design/overview.md
+++ b/docs/table-design/overview.md
@@ -34,7 +34,7 @@ In Doris, table names are case-sensitive by default. You can 
configure [lower_ca
 
 ## Table property
 
-In Doris, the CREATE TABLE statement can specify [table 
properties](../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-TABLE.md#properties),
 including:
+In Doris, the CREATE TABLE statement can specify [table 
properties](../sql-manual/sql-statements/table-and-view/table/CREATE-TABLE#properties),
 including:
 
 - **buckets**: Determines the distribution of data within the table.
 
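For reference, a minimal sketch of specifying such properties at table creation (names and values are illustrative):

```sql
CREATE TABLE example_tbl (
    id   BIGINT,
    name VARCHAR(50)
)
DUPLICATE KEY(id)
DISTRIBUTED BY HASH(id) BUCKETS 10
PROPERTIES (
    "replication_num" = "1"
);
```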
diff --git a/docs/table-design/tiered-storage/remote-storage.md 
b/docs/table-design/tiered-storage/remote-storage.md
index 0e7bc9f65ae..4b3e2d238e5 100644
--- a/docs/table-design/tiered-storage/remote-storage.md
+++ b/docs/table-design/tiered-storage/remote-storage.md
@@ -158,7 +158,7 @@ ALTER TABLE create_table_partition MODIFY PARTITION (*) 
SET("storage_policy"="te
 :::tip
 Note that if the user specifies different Storage Policies for the entire 
Table and some Partitions when creating the table, the Storage Policy set for 
the Partition will be ignored, and all Partitions of the table will use the 
table's Policy. If you need a Partition's Policy to differ from others, you can 
modify it using the method described above for associating a Storage Policy 
with an existing Partition.
 
-For more details, please refer to the Docs directory under 
[RESOURCE](../../sql-manual/sql-statements/cluster-management/compute-management/CREATE-RESOURCE),
 
[POLICY](../../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-POLICY),
 [CREATE 
TABLE](../../sql-manual/sql-statements/table-and-view/table/CREATE-TABLE), 
[ALTER 
TABLE](../../sql-manual/sql-statements/table-and-view/table/ALTER-TABLE-COLUMN),
 etc.
+For more details, please refer to the Docs directory under 
[RESOURCE](../../sql-manual/sql-statements/cluster-management/compute-management/CREATE-RESOURCE),
 
[POLICY](../../sql-manual/sql-statements/cluster-management/storage-management/CREATE-STORAGE-POLICY),
 [CREATE 
TABLE](../../sql-manual/sql-statements/table-and-view/table/CREATE-TABLE), 
[ALTER 
TABLE](../../sql-manual/sql-statements/table-and-view/table/ALTER-TABLE-COLUMN),
 etc.
 :::
 
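A sketch of how the RESOURCE and STORAGE POLICY pages linked above fit together; endpoint, bucket, and credentials are placeholders:

```sql
CREATE RESOURCE "remote_s3" PROPERTIES (
    "type" = "s3",
    "s3.endpoint" = "s3.us-east-1.amazonaws.com",
    "s3.region" = "us-east-1",
    "s3.bucket" = "your-bucket",
    "s3.root.path" = "path/to/root",
    "s3.access_key" = "<your-ak>",
    "s3.secret_key" = "<your-sk>"
);

CREATE STORAGE POLICY test_policy PROPERTIES (
    "storage_resource" = "remote_s3",
    "cooldown_ttl" = "1d"
);
```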
 ### Configuring Compaction
diff --git a/versioned_docs/version-2.1/admin-manual/auth/user-privilege.md 
b/versioned_docs/version-2.1/admin-manual/auth/user-privilege.md
index 2fa9d7f4db7..e38e0770017 100644
--- a/versioned_docs/version-2.1/admin-manual/auth/user-privilege.md
+++ b/versioned_docs/version-2.1/admin-manual/auth/user-privilege.md
@@ -96,8 +96,8 @@ The default role cannot be deleted or assigned to others. 
When a user is deleted
 1. Create users: [CREATE 
USER](../../sql-manual/sql-statements/account-management/CREATE-USER)
 2. Alter users: [ALTER 
USER](../../sql-manual/sql-statements/account-management/ALTER-USER)
 3. Delete users: [DROP 
USER](../../sql-manual/sql-statements/account-management/DROP-USER)
-4. Authorization/Assign roles: 
[GRANT](../../sql-manual/sql-reference/Account-Management-Statements/GRANT.md)
-5. Withdrawal/REVOKE roles: 
[REVOKE](../../sql-manual/sql-reference/Account-Management-Statements/REVOKE.md)
+4. Authorization/Assign roles: 
[GRANT](../../sql-manual/sql-statements/account-management/GRANT-TO)
+5. Withdraw/Revoke roles: [REVOKE](../../sql-manual/sql-statements/account-management/REVOKE-FROM)
 6. Create role: [CREATE 
ROLE](../../sql-manual/sql-statements/account-management/CREATE-ROLE)
 7. Delete roles: [DROP 
ROLE](../../sql-manual/sql-statements/account-management/DROP-ROLE)
 8. View current user privileges: [SHOW 
GRANTS](../../sql-manual/sql-statements/account-management/SHOW-GRANTS)
@@ -293,4 +293,4 @@ Here are some usage scenarios of Doris privilege system.
 
 ## More help
 
-For more detailed syntax and best practices for permission management use, 
please refer to the 
[GRANTS](../../sql-manual/sql-reference/Account-Management-Statements/GRANT.md) 
command manual. Enter `HELP GRANTS` at the command line of the MySql client for 
more help information.
+For more detailed syntax and best practices for permission management, please refer to the [GRANTS](../../sql-manual/sql-statements/account-management/GRANT-TO) command manual. Enter `HELP GRANTS` at the command line of the MySQL client for more help information.
diff --git a/versioned_docs/version-2.1/admin-manual/config/fe-config.md 
b/versioned_docs/version-2.1/admin-manual/config/fe-config.md
index 6c94ffd0249..641a3714200 100644
--- a/versioned_docs/version-2.1/admin-manual/config/fe-config.md
+++ b/versioned_docs/version-2.1/admin-manual/config/fe-config.md
@@ -48,7 +48,7 @@ There are two ways to view the configuration items of FE:
 
 2. View by command
 
-    After the FE is started, you can view the configuration items of the FE in 
the MySQL client with the following command,Concrete language law reference 
[SHOW-CONFIG](../../sql-manual/sql-statements/cluster-management/instance-management/SHOW-CONFIG.md):
+    After the FE is started, you can view the configuration items of the FE in the MySQL client with the following command. For the specific syntax, see [SHOW-CONFIG](../../sql-manual/sql-statements/cluster-management/instance-management/SHOW-FRONTEND-CONFIG):
 
     `SHOW FRONTEND CONFIG;`
 
@@ -1512,7 +1512,7 @@ For some high-frequency load work, such as: INSERT, 
STREAMING LOAD, ROUTINE_LOAD
 
 #### `label_clean_interval_second`
 
-Default:1 * 3600  (1 hour)
+Default: 1 * 3600  (1 hour)
 
 Load label cleaner will run every *label_clean_interval_second* to clean the 
outdated jobs.
 
diff --git 
a/versioned_docs/version-2.1/data-operate/import/data-source/migrate-data-from-other-oltp.md
 
b/versioned_docs/version-2.1/data-operate/import/data-source/migrate-data-from-other-oltp.md
index 1c728c31f27..4450603cdc9 100644
--- 
a/versioned_docs/version-2.1/data-operate/import/data-source/migrate-data-from-other-oltp.md
+++ 
b/versioned_docs/version-2.1/data-operate/import/data-source/migrate-data-from-other-oltp.md
@@ -53,7 +53,7 @@ AS
 SELECT * FROM iceberg_catalog.iceberg_db.table1;
 ```
 
-For more details, refer to [Catalog Data 
Load](../../../lakehouse/catalog-overview.md#data-import)。
+For more details, refer to [Catalog Data Load](../../../data-operate/import/import-way/insert-into-manual).
 
 ## Flink Doris Connector
 
diff --git a/versioned_docs/version-2.1/data-operate/import/file-format/csv.md 
b/versioned_docs/version-2.1/data-operate/import/file-format/csv.md
index 39d3b97dd7c..4ae430ab778 100644
--- a/versioned_docs/version-2.1/data-operate/import/file-format/csv.md
+++ b/versioned_docs/version-2.1/data-operate/import/file-format/csv.md
@@ -35,7 +35,7 @@ Doris supports the following methods to load CSV format data:
 - [Routine Load](../import-way/routine-load-manual)
 - [MySQL Load](../import-way/mysql-load-manual)
 - [INSERT INTO FROM S3 
TVF](../../sql-manual/sql-functions/table-valued-functions/s3)
-- [INSERT INTO FROM HDFS 
TVF](../../sql-manual/sql-functions/table-valued-functions/hdfs)
+- [INSERT INTO FROM HDFS 
TVF](../../../sql-manual/sql-functions/table-valued-functions/hdfs)
 
 ## Parameter Configuration
 
diff --git a/versioned_docs/version-2.1/data-operate/import/file-format/json.md 
b/versioned_docs/version-2.1/data-operate/import/file-format/json.md
index 3a05262a7d0..a184d2f1222 100644
--- a/versioned_docs/version-2.1/data-operate/import/file-format/json.md
+++ b/versioned_docs/version-2.1/data-operate/import/file-format/json.md
@@ -34,7 +34,7 @@ The following loading methods support JSON format data:
 - [Broker Load](../import-way/broker-load-manual.md)
 - [Routine Load](../import-way/routine-load-manual.md)
 - [INSERT INTO FROM S3 
TVF](../../sql-manual/sql-functions/table-valued-functions/s3)
-- [INSERT INTO FROM HDFS 
TVF](../../sql-manual/sql-functions/table-valued-functions/hdfs)
+- [INSERT INTO FROM HDFS 
TVF](../../../sql-manual/sql-functions/table-valued-functions/hdfs)
 
 ## Supported JSON Formats
 
diff --git 
a/versioned_docs/version-2.1/data-operate/import/import-way/broker-load-manual.md
 
b/versioned_docs/version-2.1/data-operate/import/import-way/broker-load-manual.md
index 42943e010ca..03f2dc7b929 100644
--- 
a/versioned_docs/version-2.1/data-operate/import/import-way/broker-load-manual.md
+++ 
b/versioned_docs/version-2.1/data-operate/import/import-way/broker-load-manual.md
@@ -28,7 +28,7 @@ Broker Load is initiated from the MySQL API. Doris will 
actively pull the data f
 
 Broker Load is suitable for scenarios where the source data is stored in 
remote storage systems, such as HDFS, and the data volume is relatively large.
 
-Direct reads from HDFS or S3 can also be imported through HDFS TVF or S3 TVF 
in the [Lakehouse/TVF](../../../lakehouse/file-analysis). The current "Insert 
Into" based on TVF is a synchronous import, while Broker Load is an 
asynchronous import method.
+Direct reads from HDFS or S3 can also be imported through the HDFS TVF or S3 TVF, as described in [TVF Load](../../../data-operate/import/file-format/csv#tvf-load). The current "Insert Into" based on TVF is a synchronous import, while Broker Load is an asynchronous import method.
 
 In early versions of Doris, both S3 Load and HDFS Load were implemented by 
connecting to specific Broker processes using `WITH BROKER`.
 In newer versions, S3 Load and HDFS Load have been optimized as the most 
commonly used import methods, and they no longer depend on an additional Broker 
process, though they still use syntax similar to Broker Load.
diff --git 
a/versioned_docs/version-2.1/data-operate/import/import-way/insert-into-values-manual.md
 
b/versioned_docs/version-2.1/data-operate/import/import-way/insert-into-values-manual.md
index efbd3cfa017..8abddb769e1 100644
--- 
a/versioned_docs/version-2.1/data-operate/import/import-way/insert-into-values-manual.md
+++ 
b/versioned_docs/version-2.1/data-operate/import/import-way/insert-into-values-manual.md
@@ -221,7 +221,7 @@ Query OK, 5 rows affected (0.04 sec)
 
 The invisible state of data is temporary, and the data will eventually become 
visible. 
 
-You can check the visibility status of a batch of data using the [SHOW 
TRANSACTION](../../../sql-manual/sql-statements/transaction/SHOW-TRANSACTION.md)
 statement.
+You can check the visibility status of a batch of data using the [SHOW 
TRANSACTION](../../../sql-manual/sql-statements/transaction/SHOW-TRANSACTION) 
statement.
 
 If the `TransactionStatus` column in the result is `visible`, it indicates 
that the data is visible.
 
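For example (the transaction id is hypothetical):

```sql
SHOW TRANSACTION WHERE ID = 4005;
```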
diff --git 
a/versioned_docs/version-2.1/install/deploy-on-kubernetes/install-config-cluster.md
 
b/versioned_docs/version-2.1/install/deploy-on-kubernetes/install-config-cluster.md
index c2a5a26d80f..2e57254eb3d 100644
--- 
a/versioned_docs/version-2.1/install/deploy-on-kubernetes/install-config-cluster.md
+++ 
b/versioned_docs/version-2.1/install/deploy-on-kubernetes/install-config-cluster.md
@@ -517,7 +517,7 @@ mysql -h 
ac4828493dgrftb884g67wg4tb68gyut-1137856348.us-east-1.elb.amazonaws.com
 ```
 
 ## Configuring the username and password for the management cluster
-Managing Doris nodes requires connecting to the live FE nodes via the MySQL 
protocol using a username and password for administrative operations. Doris 
implements [a permission management mechanism similar to 
RBAC](../../admin-manual/auth/authentication-and-authorization?_highlight=rbac),
 where the user must have the 
[Node_priv](../../admin-manual/auth/authentication-and-authorization.md#types-of-permissions)
 permission to perform node management. By default, the Doris Operator deploys 
t [...]
+Managing Doris nodes requires connecting to the live FE nodes via the MySQL protocol using a username and password for administrative operations. Doris implements [a permission management mechanism similar to RBAC](../../admin-manual/auth/authentication-and-authorization), where the user must have the [Node_priv](../../admin-manual/auth/authentication-and-authorization#types-of-permissions) permission to perform node management. By default, the Doris Operator deploys the cluster with th [...]
 
 The process of configuring the username and password can be divided into three 
scenarios:  
 - initializing the root user password during cluster deployment;
@@ -529,7 +529,7 @@ To secure access, you must configure a username and 
password with Node_Priv perm
 - Using a Kubernetes Secret
 
 ### Configuring the root user password during cluster deployment
-To set the root user's password securely, Doris supports encrypting it in 
[`fe.conf`](../../admin-manual/config/fe-config?_highlight=initial_#initial_root_password)
 using a two-stage SHA-1 encryption process. Here's how to set up the password.
+To set the root user's password securely, Doris supports encrypting it in 
[`fe.conf`](../../admin-manual/config/fe-config#initial_root_password) using a 
two-stage SHA-1 encryption process. Here's how to set up the password.
 
 #### Step 1: Generate the root encrypted password
 
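A minimal sketch of this step, assuming a MySQL-compatible client where `PASSWORD()` returns the two-stage SHA-1 digest (the plaintext is a placeholder):

```sql
SELECT PASSWORD('your_plaintext_password');
```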
@@ -664,7 +664,7 @@ After deployment, please set the root password. Doris 
Operator will switch to us
 :::
 
 ### Setting the root user password after cluster deployment
-After deploying the Doris cluster and setting the root user's password, it's 
essential to create a management user with the necessary 
[Node_priv](../../admin-manual/auth/authentication-and-authorization.md#types-of-permissions)
 permission to allow Doris Operator to automatically manage the cluster nodes. 
Using the root user for this purpose is not recommended. Instead, please refer 
to [the User Creation and Permission Assignment 
Section](../../sql-manual/sql-statements/account-management [...]
+After deploying the Doris cluster and setting the root user's password, it's essential to create a management user with the necessary [Node_priv](../../admin-manual/auth/authentication-and-authorization#types-of-permissions) permission to allow Doris Operator to automatically manage the cluster nodes. Using the root user for this purpose is not recommended. Instead, please refer to [the User Creation and Permission Assignment Section](../../sql-manual/sql-statements/account-management/CR [...]
 
 #### Step 1: Create a user with Node_priv permission
 First, connect to the Doris database using the MySQL protocol, then create a 
new user with the required permissions:
@@ -686,7 +686,7 @@ For more details on creating users, setting passwords, and 
granting permissions,
 #### step 3: Configure DorisCluster  
 - Using environment variables
 
-  Directly configure the new user’s name and password in the DorisCluster 
resource:
+  Directly configure the new user's name and password in the DorisCluster 
resource:
   ```yaml
   spec:
     adminUser:
diff --git a/versioned_docs/version-2.1/lakehouse/datalake-analytics/hudi.md 
b/versioned_docs/version-2.1/lakehouse/datalake-analytics/hudi.md
index d3f91cb0418..3a9ec7c12c7 100644
--- a/versioned_docs/version-2.1/lakehouse/datalake-analytics/hudi.md
+++ b/versioned_docs/version-2.1/lakehouse/datalake-analytics/hudi.md
@@ -68,7 +68,7 @@ Same as that in Hive Catalogs. See the relevant section in 
[Hive](./hive.md).
 Spark will create the read optimize table with `_ro` suffix when generating 
hudi mor table. Doris will skip the log files when reading optimize table. 
Doris does not determine whether a table is read optimize by the `_ro` suffix 
instead of the hive inputformat. Users can observe whether the inputformat of 
the 'cow/mor/read optimize' table is the same through the `SHOW CREATE TABLE` 
command. In addition, Doris supports adding hoodie related configurations to 
catalog properties, which are  [...]
 
 ## Query Optimization
-Doris uses the parquet native reader to read the data files of the COW table, 
and uses the Java SDK (By calling hudi-bundle through JNI) to read the data 
files of the MOR table. In `upsert` scenario, there may still remains base 
files that have not been updated in the MOR table, which can be read through 
the parquet native reader. Users can view the execution plan of hudi scan 
through the [explain](../../query/query-analysis/query-analysis.md) command, 
where `hudiNativeReadSplits` indica [...]
+Doris uses the parquet native reader to read the data files of the COW table, and uses the Java SDK (by calling hudi-bundle through JNI) to read the data files of the MOR table. In the `upsert` scenario, there may still remain base files that have not been updated in the MOR table, which can be read through the parquet native reader. Users can view the execution plan of hudi scan through the [explain](../../sql-manual/sql-statements/data-query/EXPLAIN) command, where `hudiNativeReadSplits`  [...]
 ```
 |0:VHUDI_SCAN_NODE                                                             
|
 |      table: minbatch_mor_rt                                                  
|
diff --git a/versioned_docs/version-2.1/lakehouse/external-statistics.md 
b/versioned_docs/version-2.1/lakehouse/external-statistics.md
index e4732114dd6..8c5754afe37 100644
--- a/versioned_docs/version-2.1/lakehouse/external-statistics.md
+++ b/versioned_docs/version-2.1/lakehouse/external-statistics.md
@@ -26,7 +26,7 @@ under the License.
 
 # External Table Statistics
 
-The collection method and content of external table statistical information 
are basically the same as those of internal tables. For detailed information, 
please refer to the [statistical information](../query/nereids/statistics.md). 
After version 2.0.3, Hive tables support automatic and sampling collection.
+The collection method and content of external table statistical information 
are basically the same as those of internal tables. For detailed information, 
please refer to the [statistical 
information](../admin-manual/system-tables/information_schema/statistics). 
After version 2.0.3, Hive tables support automatic and sampling collection.
 
 # Note
 
diff --git a/versioned_docs/version-2.1/lakehouse/lakehouse-overview.md 
b/versioned_docs/version-2.1/lakehouse/lakehouse-overview.md
index c2939265a20..4b51b278aae 100644
--- a/versioned_docs/version-2.1/lakehouse/lakehouse-overview.md
+++ b/versioned_docs/version-2.1/lakehouse/lakehouse-overview.md
@@ -236,7 +236,7 @@ mysql> SHOW DATABASES;
 +-----------+
 ```
 
-Syntax help: [SWITCH](../sql-manual/sql-statements/session/context/SWITCH/)
+Syntax help: 
[SWITCH](../sql-manual/sql-statements/session/context/SWITCH-CATALOG)
 
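For example, switching to the catalog created earlier (the name `hive` is assumed):

```sql
SWITCH hive;
```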
 5. Use the Catalog
 
diff --git 
a/versioned_docs/version-2.1/query-acceleration/materialized-view/async-materialized-view/faq.md
 
b/versioned_docs/version-2.1/query-acceleration/materialized-view/async-materialized-view/faq.md
index 80381a2c09e..cb622bb7ed6 100644
--- 
a/versioned_docs/version-2.1/query-acceleration/materialized-view/async-materialized-view/faq.md
+++ 
b/versioned_docs/version-2.1/query-acceleration/materialized-view/async-materialized-view/faq.md
@@ -88,7 +88,7 @@ Unable to find a suitable base table for partitioning
 
 This error typically indicates that the SQL definition of the materialized 
view and the choice of partitioning fields do not allow incremental partition 
updates, resulting in an error during the creation of the partitioned 
materialized view.
 
-- For incremental partition updates, the materialized view's SQL definition 
and partitioning field selection must meet specific requirements. See 
[Materialized View Refresh 
Modes](../../../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-ASYNC-MATERIALIZED-VIEW#refreshmethod)
 for details.
+- For incremental partition updates, the materialized view's SQL definition and partitioning field selection must meet specific requirements. See [Materialized View Refresh Modes](../../../sql-manual/sql-statements/table-and-view/async-materialized-view/CREATE-ASYNC-MATERIALIZED-VIEW#optional-parameters) for details.
 
 - The latest code can indicate the reason for partition build failure, with 
error summaries and descriptions provided in Appendix 2.
 
diff --git 
a/versioned_docs/version-2.1/query-acceleration/materialized-view/async-materialized-view/functions-and-demands.md
 
b/versioned_docs/version-2.1/query-acceleration/materialized-view/async-materialized-view/functions-and-demands.md
index 153410d1730..716b6ed25c5 100644
--- 
a/versioned_docs/version-2.1/query-acceleration/materialized-view/async-materialized-view/functions-and-demands.md
+++ 
b/versioned_docs/version-2.1/query-acceleration/materialized-view/async-materialized-view/functions-and-demands.md
@@ -1241,7 +1241,7 @@ NeedRefreshPartitions: 
["p_20231023_20231024","p_20231019_20231020","p_20231020_
 - If set to 10, allows up to 10 seconds of delay between materialized view and 
base table data. The materialized view can be used for transparent rewriting 
during this 10-second window.
 :::
 
-For more details, see 
[TASKS](../../../sql-manual/sql-functions/table-valued-functions/tasks?_highlight=task)
+For more details, see 
[TASKS](../../../sql-manual/sql-functions/table-valued-functions/tasks)
 
 ### Querying Materialized View Jobs
 
diff --git 
a/versioned_docs/version-2.1/query-acceleration/materialized-view/async-materialized-view/use-guide.md
 
b/versioned_docs/version-2.1/query-acceleration/materialized-view/async-materialized-view/use-guide.md
index 8596daa615b..077f4df37ee 100644
--- 
a/versioned_docs/version-2.1/query-acceleration/materialized-view/async-materialized-view/use-guide.md
+++ 
b/versioned_docs/version-2.1/query-acceleration/materialized-view/async-materialized-view/use-guide.md
@@ -52,7 +52,7 @@ When the following conditions are met, it is recommended to 
create partitioned m
 
 - The tables used by the materialized view, except for the partitioned table, 
do not change frequently.
 
-- The definition SQL of the materialized view and the partition field meet the 
requirements of partition derivation, that is, meet the requirements of 
partition incremental update. Detailed requirements can be found in 
[CREATE-ASYNC-MATERIALIZED-VIEW](../../../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-ASYNC-MATERIALIZED-VIEW/#refreshmethod)
+- The definition SQL of the materialized view and the partition field meet the requirements of partition derivation, that is, meet the requirements of partition incremental update. Detailed requirements can be found in [CREATE-ASYNC-MATERIALIZED-VIEW](../../../sql-manual/sql-statements/table-and-view/async-materialized-view/CREATE-ASYNC-MATERIALIZED-VIEW#optional-parameters)
 
 - The number of partitions in the materialized view is not large, as too many 
partitions will lead to excessively long partition materialized view 
construction time.
 
@@ -62,7 +62,7 @@ If partitioned materialized views cannot be constructed, you 
can consider choosi
 
 ## Common Usage of Partitioned Materialized Views
 
-When the materialized view's base table data volume is large and the base 
table is a partitioned table, if the materialized view's definition SQL and 
partition fields meet the requirements of partition derivation, this scenario 
is suitable for building partitioned materialized views. For detailed 
requirements of partition derivation, refer to 
[CREATE-ASYNC-MATERIALIZED-VIEW](../../../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-ASYNC-MATERIALIZED-VIEW/#refreshmethod
 [...]
+When the materialized view's base table data volume is large and the base 
table is a partitioned table, if the materialized view's definition SQL and 
partition fields meet the requirements of partition derivation, this scenario 
is suitable for building partitioned materialized views. For detailed 
requirements of partition derivation, refer to 
[CREATE-ASYNC-MATERIALIZED-VIEW](../../../sql-manual/sql-statements/table-and-view/async-materialized-view/CREATE-ASYNC-MATERIALIZED-VIEW#Optional
  [...]
 
 The materialized view's partitions are created following the base table's 
partition mapping, generally having a 1:1 or 1:n relationship with the base 
table's partitions.
 
diff --git 
a/versioned_docs/version-2.1/query-data/udf/java-user-defined-function.md 
b/versioned_docs/version-2.1/query-data/udf/java-user-defined-function.md
index 96931bb2621..7250e717835 100644
--- a/versioned_docs/version-2.1/query-data/udf/java-user-defined-function.md
+++ b/versioned_docs/version-2.1/query-data/udf/java-user-defined-function.md
@@ -377,7 +377,7 @@ UDTF is supported starting from Doris version 3.0.
     }
     ```
 
-2. Register and create the Java-UDTF function in Doris. Two UDTF functions 
will be registered. Table functions in Doris may exhibit different behaviors 
due to the `_outer` suffix. For more details, refer to [OUTER 
combinator](../../sql-manual/sql-functions/table-functions/explode-numbers-outer.md).
+2. Register and create the Java-UDTF function in Doris. Two UDTF functions 
will be registered. Table functions in Doris may exhibit different behaviors 
due to the `_outer` suffix. For more details, refer to [OUTER 
combinator](../../sql-manual/sql-functions/table-functions/explode-numbers).
 For more syntax details, please refer to [CREATE 
FUNCTION](../../sql-manual/sql-statements/function/CREATE-FUNCTION).
 
     ```sql
diff --git 
a/versioned_docs/version-2.1/table-design/data-partitioning/dynamic-partitioning.md
 
b/versioned_docs/version-2.1/table-design/data-partitioning/dynamic-partitioning.md
index 307d713e92b..f9a6e9e6e08 100644
--- 
a/versioned_docs/version-2.1/table-design/data-partitioning/dynamic-partitioning.md
+++ 
b/versioned_docs/version-2.1/table-design/data-partitioning/dynamic-partitioning.md
@@ -107,7 +107,7 @@ ALTER TABLE test_partition SET (
 
 ### Viewing Dynamic Partition Scheduling Status
 
-Use [SHOW-DYNAMIC-PARTITION](../../sql-manual/sql-statements/Show-Statements/SHOW-DYNAMIC-PARTITION) to view the scheduling status of all dynamic partition tables in the current database:
+Use [SHOW-DYNAMIC-PARTITION](../../sql-manual/sql-statements/table-and-view/table/SHOW-DYNAMIC-PARTITION-TABLES) to view the scheduling status of all dynamic partition tables in the current database:
 
 ```sql
 SHOW DYNAMIC PARTITION TABLES;
diff --git a/versioned_docs/version-2.1/table-design/data-type.md 
b/versioned_docs/version-2.1/table-design/data-type.md
index 308641c5531..1cb699b4556 100644
--- a/versioned_docs/version-2.1/table-design/data-type.md
+++ b/versioned_docs/version-2.1/table-design/data-type.md
@@ -68,12 +68,12 @@ The list of data types supported by Doris is as follows:
 
 ## [Aggregation data 
type](../sql-manual/sql-data-types/data-type-overview#aggregation-types)
 
-| Type name      | Storeage (bytes)| Description                               
                   |
-| -------------- | --------------- | 
------------------------------------------------------------ |
-| [HLL](../sql-manual/sql-data-types/aggregate/HLL)            | Variable 
Length | HLL stands for HyperLogLog, is a fuzzy deduplication. It performs 
better than Count Distinct when dealing with large datasets.   The error rate 
of HLL is typically around 1%, and sometimes it can reach 2%. HLL cannot be 
used as a key column, and the aggregation type is HLL_UNION when creating a 
table.  Users do not need to specify the length or default value as it is 
internally controlled based on the aggr [...]
-| [BITMAP](../sql-manual/sql-data-types/aggregate/BITMAP)         | Variable 
Length | BITMAP type can be used in Aggregate tables, Unique tables or 
Duplicate tables.  - When used in a Unique table or a Duplicate table, BITMAP 
must be employed as non-key columns.  - When used in an Aggregate table, BITMAP 
must also serve as non-key columns, and the aggregation type must be set to 
BITMAP_UNION during table creation.  Users do not need to specify the length or 
default value as it is interna [...]
-| [QUANTILE_STATE](../sql-manual/sql-data-types/aggregate/QUANTILE_STATE) | 
Variable Length | A type used to calculate approximate quantile values.  When 
loading, it performs pre-aggregation for the same keys with different values. 
When the number of values does not exceed 2048, it records all data in detail. 
When the number of values is greater than 2048, it employs the TDigest 
algorithm to aggregate (cluster) the data and store the centroid points after 
clustering.   QUANTILE_STATE can [...]
-| [AGG_STATE](../sql-manual/sql-data-types/aggregate/AGG_STATE)       | 
Variable Length | Aggregate function can only be used with state/merge/union 
function combiners.   AGG_STATE cannot be used as a key column. When creating a 
table, the signature of the aggregate function needs to be declared alongside.  
 Users do not need to specify the length or default value. The actual data 
storage size depends on the function's implementation. |
+| Type name                                                               | Storage (bytes) | Description                                                  |
+|-------------------------------------------------------------------------| --------------- | ------------------------------------------------------------ |
+| [HLL](../sql-manual/sql-data-types/aggregate/HLL)                       | Variable Length | HLL stands for HyperLogLog; it is a fuzzy deduplication algorithm that performs better than Count Distinct when dealing with large datasets. The error rate of HLL is typically around 1%, and sometimes it can reach 2%. HLL cannot be used as a key column, and the aggregation type is HLL_UNION when creating a table. Users do not need to specify the length or default value as it is internally controlled based  [...]
+| [BITMAP](../sql-manual/sql-data-types/aggregate/BITMAP)                 | 
Variable Length | BITMAP type can be used in Aggregate tables, Unique tables or 
Duplicate tables.  - When used in a Unique table or a Duplicate table, BITMAP 
must be employed as non-key columns.  - When used in an Aggregate table, BITMAP 
must also serve as non-key columns, and the aggregation type must be set to 
BITMAP_UNION during table creation.  Users do not need to specify the length or 
default value as it is [...]
+| [QUANTILE_STATE](../sql-manual/sql-data-types/aggregate/QUANTILE-STATE) | 
Variable Length | A type used to calculate approximate quantile values.  When 
loading, it performs pre-aggregation for the same keys with different values. 
When the number of values does not exceed 2048, it records all data in detail. 
When the number of values is greater than 2048, it employs the TDigest 
algorithm to aggregate (cluster) the data and store the centroid points after 
clustering.   QUANTILE_STATE can [...]
+| [AGG_STATE](../sql-manual/sql-data-types/aggregate/AGG-STATE)           | 
Variable Length | Aggregate function can only be used with state/merge/union 
function combiners.   AGG_STATE cannot be used as a key column. When creating a 
table, the signature of the aggregate function needs to be declared alongside.  
 Users do not need to specify the length or default value. The actual data 
storage size depends on the function's implementation. |
 
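A sketch of declaring these aggregation types in an Aggregate table (all names are illustrative); BITMAP and HLL columns must be non-key and carry BITMAP_UNION / HLL_UNION:

```sql
CREATE TABLE uv_agg (
    dt     DATE,
    uv     BITMAP BITMAP_UNION,
    pv_hll HLL    HLL_UNION
)
AGGREGATE KEY(dt)
DISTRIBUTED BY HASH(dt) BUCKETS 2
PROPERTIES ("replication_num" = "1");
```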
 ## [IP types](../sql-manual/sql-data-types/data-type-overview#ip-types)
 
diff --git a/versioned_docs/version-2.1/table-design/overview.md 
b/versioned_docs/version-2.1/table-design/overview.md
index 6e5fa917c66..798bd42cd05 100644
--- a/versioned_docs/version-2.1/table-design/overview.md
+++ b/versioned_docs/version-2.1/table-design/overview.md
@@ -34,7 +34,7 @@ In Doris, table names are case-sensitive by default. You can 
configure [lower_ca
 
 ## Table property
 
-In Doris, the CREATE TABLE statement can specify [table 
properties](../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-TABLE.md#properties),
 including:
+In Doris, the CREATE TABLE statement can specify [table 
properties](../sql-manual/sql-statements/table-and-view/table/CREATE-TABLE#properties),
 including:
 
 - **buckets**: Determines the distribution of data within the table.
 
diff --git a/versioned_docs/version-3.0/admin-manual/auth/user-privilege.md 
b/versioned_docs/version-3.0/admin-manual/auth/user-privilege.md
index 2fa9d7f4db7..e38e0770017 100644
--- a/versioned_docs/version-3.0/admin-manual/auth/user-privilege.md
+++ b/versioned_docs/version-3.0/admin-manual/auth/user-privilege.md
@@ -96,8 +96,8 @@ The default role cannot be deleted or assigned to others. 
When a user is deleted
 1. Create users: [CREATE 
USER](../../sql-manual/sql-statements/account-management/CREATE-USER)
 2. Alter users: [ALTER 
USER](../../sql-manual/sql-statements/account-management/ALTER-USER)
 3. Delete users: [DROP 
USER](../../sql-manual/sql-statements/account-management/DROP-USER)
-4. Authorization/Assign roles: 
[GRANT](../../sql-manual/sql-reference/Account-Management-Statements/GRANT.md)
-5. Withdrawal/REVOKE roles: 
[REVOKE](../../sql-manual/sql-reference/Account-Management-Statements/REVOKE.md)
+4. Authorization/Assign roles: 
[GRANT](../../sql-manual/sql-statements/account-management/GRANT-TO)
+5. Withdraw/Revoke roles: [REVOKE](../../sql-manual/sql-statements/account-management/REVOKE-FROM)
 6. Create role: [CREATE 
ROLE](../../sql-manual/sql-statements/account-management/CREATE-ROLE)
 7. Delete roles: [DROP 
ROLE](../../sql-manual/sql-statements/account-management/DROP-ROLE)
 8. View current user privileges: [SHOW 
GRANTS](../../sql-manual/sql-statements/account-management/SHOW-GRANTS)
@@ -293,4 +293,4 @@ Here are some usage scenarios of Doris privilege system.
 
 ## More help
 
-For more detailed syntax and best practices for permission management use, 
please refer to the 
[GRANTS](../../sql-manual/sql-reference/Account-Management-Statements/GRANT.md) 
command manual. Enter `HELP GRANTS` at the command line of the MySql client for 
more help information.
+For more detailed syntax and best practices for permission management, please refer to the [GRANTS](../../sql-manual/sql-statements/account-management/GRANT-TO) command manual. Enter `HELP GRANTS` at the command line of the MySQL client for more help information.
diff --git a/versioned_docs/version-3.0/admin-manual/config/be-config.md 
b/versioned_docs/version-3.0/admin-manual/config/be-config.md
index 3e896f05b3c..2a6ae232239 100644
--- a/versioned_docs/version-3.0/admin-manual/config/be-config.md
+++ b/versioned_docs/version-3.0/admin-manual/config/be-config.md
@@ -50,7 +50,7 @@ There are two ways to view the configuration items of BE:
 
 2. View by command
 
-   You can view the configuration items of the BE in the MySQL client with the 
following command,Concrete language law reference 
[SHOW-CONFIG](../../sql-manual/sql-statements/cluster-management/instance-management/SHOW-CONFIG.md):
+   You can view the configuration items of the BE in the MySQL client with the following command. For the specific syntax, see [SHOW-CONFIG](../../sql-manual/sql-statements/cluster-management/instance-management/SHOW-FRONTEND-CONFIG):
 
     `SHOW BACKEND CONFIG;`
 
diff --git a/versioned_docs/version-3.0/admin-manual/config/fe-config.md 
b/versioned_docs/version-3.0/admin-manual/config/fe-config.md
index ef908e3cd47..2b0bb589b6f 100644
--- a/versioned_docs/version-3.0/admin-manual/config/fe-config.md
+++ b/versioned_docs/version-3.0/admin-manual/config/fe-config.md
@@ -48,7 +48,7 @@ There are two ways to view the configuration items of FE:
 
 2. View by command
 
-    After the FE is started, you can view the configuration items of the FE in 
the MySQL client with the following command,Concrete language law reference 
[SHOW-CONFIG](../../sql-manual/sql-statements/cluster-management/instance-management/SHOW-CONFIG.md):
+    After the FE is started, you can view the configuration items of the FE in the MySQL client with the following command. For the specific syntax, see [SHOW-CONFIG](../../sql-manual/sql-statements/cluster-management/instance-management/SHOW-FRONTEND-CONFIG):
 
     `SHOW FRONTEND CONFIG;`
 
@@ -1494,7 +1494,7 @@ For some high-frequency load work, such as: INSERT, 
STREAMING LOAD, ROUTINE_LOAD
 
 #### `label_clean_interval_second`
 
-Default:1 * 3600  (1 hour)
+Default: 1 * 3600  (1 hour)
 
 Load label cleaner will run every *label_clean_interval_second* to clean the 
outdated jobs.
 
diff --git 
a/versioned_docs/version-3.0/admin-manual/workload-management/compute-group.md 
b/versioned_docs/version-3.0/admin-manual/workload-management/compute-group.md
index 0e0d0247cdd..e31257384c8 100644
--- 
a/versioned_docs/version-3.0/admin-manual/workload-management/compute-group.md
+++ 
b/versioned_docs/version-3.0/admin-manual/workload-management/compute-group.md
@@ -62,8 +62,8 @@ SHOW COMPUTE GROUPS;
 
 ## Adding Compute Groups
 
-Managing compute groups requires `OPERATOR` privilege, which controls node 
management permissions. For more details, please refer to [Privilege 
Management](../sql-manual/sql-statements/Account-Management-Statements/GRANT.md).
 By default, only the root account has the `OPERATOR` privilege, but it can be 
granted to other accounts using the `GRANT` command.
-To add a BE and assign it to a compute group, use the [Add 
BE](../sql-manual/sql-statements/Cluster-Management-Statements/ALTER-SYSTEM-ADD-BACKEND.md)
 command. For example:
+Managing compute groups requires `OPERATOR` privilege, which controls node 
management permissions. For more details, please refer to [Privilege 
Management](../../sql-manual/sql-statements/account-management/GRANT-TO). By 
default, only the root account has the `OPERATOR` privilege, but it can be 
granted to other accounts using the `GRANT` command.
+To add a BE and assign it to a compute group, use the [Add 
BE](../../sql-manual/sql-statements/cluster-management/instance-management/ADD-BACKEND)
 command. For example:
 
 ```sql
 ALTER SYSTEM ADD BACKEND 'host:9050' PROPERTIES ("tag.compute_group_name" = 
"new_group");
diff --git 
a/versioned_docs/version-3.0/compute-storage-decoupled/managing-compute-cluster.md
 
b/versioned_docs/version-3.0/compute-storage-decoupled/managing-compute-cluster.md
index f136be704d6..606af70b9d4 100644
--- 
a/versioned_docs/version-3.0/compute-storage-decoupled/managing-compute-cluster.md
+++ 
b/versioned_docs/version-3.0/compute-storage-decoupled/managing-compute-cluster.md
@@ -71,8 +71,8 @@ SHOW COMPUTE GROUPS;
 
 ## Adding Compute Groups
 
-Managing compute groups requires `OPERATOR` privilege, which controls node 
management permissions. For more details, please refer to [Privilege 
Management](../sql-manual/sql-statements/Account-Management-Statements/GRANT.md).
 By default, only the root account has the `OPERATOR` privilege, but it can be 
granted to other accounts using the `GRANT` command.
-To add a BE and assign it to a compute group, use the [Add 
BE](../sql-manual/sql-statements/Cluster-Management-Statements/ALTER-SYSTEM-ADD-BACKEND.md)
 command. For example:
+Managing compute groups requires `OPERATOR` privilege, which controls node 
management permissions. For more details, please refer to [Privilege 
Management](../sql-manual/sql-statements/account-management/GRANT-TO). By 
default, only the root account has the `OPERATOR` privilege, but it can be 
granted to other accounts using the `GRANT` command.
+To add a BE and assign it to a compute group, use the [Add 
BE](../sql-manual/sql-statements/cluster-management/instance-management/ADD-BACKEND)
 command. For example:
 
 ```sql
 ALTER SYSTEM ADD BACKEND 'host:9050' PROPERTIES ("tag.compute_group_name" = 
"new_group");
diff --git 
a/versioned_docs/version-3.0/data-operate/import/data-source/migrate-data-from-other-oltp.md
 
b/versioned_docs/version-3.0/data-operate/import/data-source/migrate-data-from-other-oltp.md
index 1c728c31f27..4450603cdc9 100644
--- 
a/versioned_docs/version-3.0/data-operate/import/data-source/migrate-data-from-other-oltp.md
+++ 
b/versioned_docs/version-3.0/data-operate/import/data-source/migrate-data-from-other-oltp.md
@@ -53,7 +53,7 @@ AS
 SELECT * FROM iceberg_catalog.iceberg_db.table1;
 ```
 
-For more details, refer to [Catalog Data 
Load](../../../lakehouse/catalog-overview.md#data-import)。
+For more details, refer to [Catalog Data 
Load](../../../data-operate/import/import-way/insert-into-manual).
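
Equivalently, rows can be appended to an existing Doris table with `INSERT INTO ... SELECT`; a sketch reusing the catalog names above (the target table is assumed to exist already):

```sql
-- Append rows from the Iceberg table into an existing internal Doris table.
INSERT INTO internal.doris_db.table1
SELECT * FROM iceberg_catalog.iceberg_db.table1;
```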
 
 ## Flink Doris Connector
 
diff --git a/versioned_docs/version-3.0/data-operate/import/file-format/csv.md 
b/versioned_docs/version-3.0/data-operate/import/file-format/csv.md
index 39d3b97dd7c..4ae430ab778 100644
--- a/versioned_docs/version-3.0/data-operate/import/file-format/csv.md
+++ b/versioned_docs/version-3.0/data-operate/import/file-format/csv.md
@@ -35,7 +35,7 @@ Doris supports the following methods to load CSV format data:
 - [Routine Load](../import-way/routine-load-manual)
 - [MySQL Load](../import-way/mysql-load-manual)
 - [INSERT INTO FROM S3 
TVF](../../sql-manual/sql-functions/table-valued-functions/s3)
-- [INSERT INTO FROM HDFS 
TVF](../../sql-manual/sql-functions/table-valued-functions/hdfs)
+- [INSERT INTO FROM HDFS 
TVF](../../../sql-manual/sql-functions/table-valued-functions/hdfs)
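
For instance, the HDFS TVF listed above can be queried directly to preview a file before loading it; a sketch (the URI and HDFS address are hypothetical):

```sql
SELECT * FROM hdfs(
    "uri" = "hdfs://127.0.0.1:8020/user/doris/demo.csv",
    "fs.defaultFS" = "hdfs://127.0.0.1:8020",
    "format" = "csv"
) LIMIT 10;
```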
 
 ## Parameter Configuration
 
diff --git a/versioned_docs/version-3.0/data-operate/import/file-format/json.md 
b/versioned_docs/version-3.0/data-operate/import/file-format/json.md
index 3a05262a7d0..a184d2f1222 100644
--- a/versioned_docs/version-3.0/data-operate/import/file-format/json.md
+++ b/versioned_docs/version-3.0/data-operate/import/file-format/json.md
@@ -34,7 +34,7 @@ The following loading methods support JSON format data:
 - [Broker Load](../import-way/broker-load-manual.md)
 - [Routine Load](../import-way/routine-load-manual.md)
 - [INSERT INTO FROM S3 
TVF](../../sql-manual/sql-functions/table-valued-functions/s3)
-- [INSERT INTO FROM HDFS 
TVF](../../sql-manual/sql-functions/table-valued-functions/hdfs)
+- [INSERT INTO FROM HDFS 
TVF](../../../sql-manual/sql-functions/table-valued-functions/hdfs)
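
For instance, a line-delimited JSON file on S3 can be queried through the S3 TVF listed above; a sketch (bucket, endpoint, and credentials are placeholders):

```sql
SELECT * FROM s3(
    "uri" = "s3://my-bucket/data/demo.json",
    "s3.endpoint" = "s3.us-east-1.amazonaws.com",
    "s3.region" = "us-east-1",
    "s3.access_key" = "<ak>",
    "s3.secret_key" = "<sk>",
    "format" = "json",
    "read_json_by_line" = "true"
) LIMIT 10;
```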
 
 ## Supported JSON Formats
 
diff --git 
a/versioned_docs/version-3.0/data-operate/import/import-way/broker-load-manual.md
 
b/versioned_docs/version-3.0/data-operate/import/import-way/broker-load-manual.md
index 42943e010ca..03f2dc7b929 100644
--- 
a/versioned_docs/version-3.0/data-operate/import/import-way/broker-load-manual.md
+++ 
b/versioned_docs/version-3.0/data-operate/import/import-way/broker-load-manual.md
@@ -28,7 +28,7 @@ Broker Load is initiated from the MySQL API. Doris will 
actively pull the data f
 
 Broker Load is suitable for scenarios where the source data is stored in 
remote storage systems, such as HDFS, and the data volume is relatively large.
 
-Direct reads from HDFS or S3 can also be imported through HDFS TVF or S3 TVF 
in the [Lakehouse/TVF](../../../lakehouse/file-analysis). The current "Insert 
Into" based on TVF is a synchronous import, while Broker Load is an 
asynchronous import method.
+Direct reads from HDFS or S3 can also be imported through the HDFS TVF or S3 
TVF described in 
[Lakehouse/TVF](../../../data-operate/import/file-format/csv#tvf-load). The 
current "Insert Into" based on TVF is a synchronous import, while Broker Load 
is an asynchronous import method.
 
 In early versions of Doris, both S3 Load and HDFS Load were implemented by 
connecting to specific Broker processes using `WITH BROKER`.
 In newer versions, S3 Load and HDFS Load have been optimized as the most 
commonly used import methods, and they no longer depend on an additional Broker 
process, though they still use syntax similar to Broker Load.
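
As a minimal sketch of the broker-free S3 Load form (label, table, bucket, and credentials are all hypothetical):

```sql
LOAD LABEL demo_db.label_s3_1
(
    DATA INFILE("s3://my-bucket/data/demo.csv")
    INTO TABLE demo_table
    COLUMNS TERMINATED BY ","
)
WITH S3
(
    "provider" = "S3",
    "s3.endpoint" = "s3.us-east-1.amazonaws.com",
    "s3.region" = "us-east-1",
    "s3.access_key" = "<ak>",
    "s3.secret_key" = "<sk>"
);
```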
diff --git 
a/versioned_docs/version-3.0/data-operate/import/import-way/insert-into-values-manual.md
 
b/versioned_docs/version-3.0/data-operate/import/import-way/insert-into-values-manual.md
index efbd3cfa017..8abddb769e1 100644
--- 
a/versioned_docs/version-3.0/data-operate/import/import-way/insert-into-values-manual.md
+++ 
b/versioned_docs/version-3.0/data-operate/import/import-way/insert-into-values-manual.md
@@ -221,7 +221,7 @@ Query OK, 5 rows affected (0.04 sec)
 
 The invisible state of data is temporary, and the data will eventually become 
visible. 
 
-You can check the visibility status of a batch of data using the [SHOW 
TRANSACTION](../../../sql-manual/sql-statements/transaction/SHOW-TRANSACTION.md)
 statement.
+You can check the visibility status of a batch of data using the [SHOW 
TRANSACTION](../../../sql-manual/sql-statements/transaction/SHOW-TRANSACTION) 
statement.
 
 If the `TransactionStatus` column in the result is `visible`, it indicates 
that the data is visible.
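
For example, a sketch (the transaction id is hypothetical; it is returned in the load result):

```sql
-- Check the TransactionStatus column for this load's transaction.
SHOW TRANSACTION WHERE ID = 4005;
```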
 
diff --git 
a/versioned_docs/version-3.0/install/deploy-on-kubernetes/integrated-storage-compute/install-config-cluster.md
 
b/versioned_docs/version-3.0/install/deploy-on-kubernetes/integrated-storage-compute/install-config-cluster.md
index 5cecbe6aaf2..a5c611e420f 100644
--- 
a/versioned_docs/version-3.0/install/deploy-on-kubernetes/integrated-storage-compute/install-config-cluster.md
+++ 
b/versioned_docs/version-3.0/install/deploy-on-kubernetes/integrated-storage-compute/install-config-cluster.md
@@ -519,7 +519,7 @@ mysql -h 
ac4828493dgrftb884g67wg4tb68gyut-1137856348.us-east-1.elb.amazonaws.com
 ```
 
 ## Configuring the username and password for the management cluster
-Managing Doris nodes requires connecting to the live FE nodes via the MySQL 
protocol using a username and password for administrative operations. Doris 
implements [a permission management mechanism similar to 
RBAC](../../../admin-manual/auth/authentication-and-authorization?_highlight=rbac),
 where the user must have the 
[Node_priv](../../../admin-manual/auth/authentication-and-authorization.md#types-of-permissions)
 permission to perform node management. By default, the Doris Operator dep [...]
+Managing Doris nodes requires connecting to the live FE nodes via the MySQL 
protocol using a username and password for administrative operations. Doris 
implements [a permission management mechanism similar to 
RBAC](../../../admin-manual/auth/authentication-and-authorization), where the 
user must have the 
[Node_priv](../../../admin-manual/auth/authentication-and-authorization.md#types-of-permissions)
 permission to perform node management. By default, the Doris Operator deploys 
the cluster [...]
 
 The process of configuring the username and password can be divided into three 
scenarios:  
 - initializing the root user password during cluster deployment;
@@ -531,7 +531,7 @@ To secure access, you must configure a username and 
password with Node_Priv perm
 - Using a Kubernetes Secret
 
 ### Configuring the root user password during cluster deployment
-To set the root user's password securely, Doris supports encrypting it in 
[`fe.conf`](../../../admin-manual/config/fe-config?_highlight=initial_#initial_root_password)
 using a two-stage SHA-1 encryption process. Here's how to set up the password.
+To set the root user's password securely, Doris supports encrypting it in 
[`fe.conf`](../../../admin-manual/config/fe-config#initial_root_password) using 
a two-stage SHA-1 encryption process. Here's how to set up the password.
 
 #### Step 1: Generate the root encrypted password
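
A sketch of one way to produce such a digest (assuming the `PASSWORD()` function is available in the connected client session, as in MySQL; the input is a placeholder):

```sql
-- Returns a digest like '*2AC6...' suitable for initial_root_password in fe.conf.
SELECT PASSWORD('your_password');
```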
 
@@ -688,7 +688,7 @@ For more details on creating users, setting passwords, and 
granting permissions,
 #### Step 3: Configure DorisCluster  
 - Using environment variables
 
-  Directly configure the new user’s name and password in the DorisCluster 
resource:
+  Directly configure the new user's name and password in the DorisCluster 
resource:
   ```yaml
   spec:
     adminUser:
diff --git 
a/versioned_docs/version-3.0/install/deploy-on-kubernetes/separating-storage-compute/config-cg.md
 
b/versioned_docs/version-3.0/install/deploy-on-kubernetes/separating-storage-compute/config-cg.md
index 5020f4c54fd..0151ddc1654 100644
--- 
a/versioned_docs/version-3.0/install/deploy-on-kubernetes/separating-storage-compute/config-cg.md
+++ 
b/versioned_docs/version-3.0/install/deploy-on-kubernetes/separating-storage-compute/config-cg.md
@@ -74,7 +74,7 @@ spec:
       memory: 8Gi
 ```
 
-Update the above configuration in the required [DorisDisaggregatedCluster 
resource](install-quickstart.md#step-3-deploy-the-compute-storage-decoupled-cluster).
+Update the above configuration in the required [DorisDisaggregatedCluster 
resource](../../../gettingStarted/quick-start).
 
 ## Configure Cache persistent
 
@@ -82,7 +82,7 @@ In the default deployment, BE services use Kubernetes' 
[EmptyDir](https://kubern
 
 1. Create a custom ConfigMap containing the startup information.
 
-   In the default deployment, each compute group’s BE service starts using the 
default configuration file from the image. To persist cache data, a custom 
startup configuration is needed. Doris Operator uses Kubernetes' ConfigMap to 
mount the startup configuration file.
+   In the default deployment, each compute group's BE service starts using the 
default configuration file from the image. To persist cache data, a custom 
startup configuration is needed. Doris Operator uses Kubernetes' ConfigMap to 
mount the startup configuration file.
 
    Below is an example of a ConfigMap that a BE service can use:
 
@@ -100,7 +100,7 @@ In the default deployment, BE services use Kubernetes' 
[EmptyDir](https://kubern
        file_cache_path = 
[{"path":"/mnt/disk1/doris_cloud/file_cache","total_size":107374182400,"query_limit":107374182400}]
    ```
 
-   For decoupled compute-storage clusters, the BE service's startup 
configuration must include file_cache_path. For formatting details, refer to 
the [Decoupled Compute-Storage Configuration be.conf 
section](../../../../compute-storage-decoupled/compilation-and-deployment.md#541-configure-beconf).
 In the example above, a persistent cache is configured at the directory 
`/mnt/disk1/doris_cloud/file_cache`, with a total persistent capacity of 100Gi 
and a query cache limit of 100Gi.
+   For decoupled compute-storage clusters, the BE service's startup 
configuration must include file_cache_path. For formatting details, refer to 
the [Decoupled Compute-Storage Configuration be.conf 
section](../../../compute-storage-decoupled/compilation-and-deployment). In the 
example above, a persistent cache is configured at the directory 
`/mnt/disk1/doris_cloud/file_cache`, with a total persistent capacity of 100Gi 
and a query cache limit of 100Gi.
 
 2. Deploy the ConfigMap.
 
diff --git 
a/versioned_docs/version-3.0/install/deploy-on-kubernetes/separating-storage-compute/config-fe.md
 
b/versioned_docs/version-3.0/install/deploy-on-kubernetes/separating-storage-compute/config-fe.md
index ebee72b5602..b133777f86b 100644
--- 
a/versioned_docs/version-3.0/install/deploy-on-kubernetes/separating-storage-compute/config-fe.md
+++ 
b/versioned_docs/version-3.0/install/deploy-on-kubernetes/separating-storage-compute/config-fe.md
@@ -43,11 +43,11 @@ spec:
       memory: 8Gi
 ```
 
-Update the above configuration in the [DorisDisaggregatedCluster 
resource](install-quickstart.md#step-3-deploy-the-compute-storage-decoupled-cluster).
+Update the above configuration in the [DorisDisaggregatedCluster 
resource](../../../gettingStarted/quick-start).
 
 ### Configuring storage resources
 
-The FE service in the storage-computation separation cluster is a stateful 
service. When deployed on Kubernetes, it requires persistent storage for 
metadata. Doris Operator automatically mounts the persistent storage based on 
the configuration of the metadata storage directory and the storage template. 
Add the following configuration to the [DorisDisaggregatedCluster 
resource](install-quickstart.md#step-3-deploy-the-compute-storage-decoupled-cluster):
+The FE service in the storage-computation separation cluster is a stateful 
service. When deployed on Kubernetes, it requires persistent storage for 
metadata. Doris Operator automatically mounts the persistent storage based on 
the configuration of the metadata storage directory and the storage template. 
Add the following configuration to the [DorisDisaggregatedCluster 
resource](../../../gettingStarted/quick-start):
 
 ```yaml
 spec:
@@ -116,7 +116,7 @@ Doris Operator uses Kubernetes' ConfigMap to mount the 
startup configuration.
        configMaps:
        - name: fe-configmap
    ```
-   In the `DorisDisaggregatedCluster` resource, the `configMaps` field is an 
array, with each element’s `name` representing the name of the ConfigMap in the 
current namespace.
+   In the `DorisDisaggregatedCluster` resource, the `configMaps` field is an 
array, with each element's `name` representing the name of the ConfigMap in the 
current namespace.
   
 :::tip Tip  
 1. In Kubernetes deployments, no need to manually set `meta_service_endpoint`, 
`deploy_mode`, or `cluster_id` in the startup configuration. These are 
automatically handled by Doris Operator services.  
diff --git 
a/versioned_docs/version-3.0/install/deploy-on-kubernetes/separating-storage-compute/config-ms.md
 
b/versioned_docs/version-3.0/install/deploy-on-kubernetes/separating-storage-compute/config-ms.md
index f5ddfb6d32c..b84e6d04899 100644
--- 
a/versioned_docs/version-3.0/install/deploy-on-kubernetes/separating-storage-compute/config-ms.md
+++ 
b/versioned_docs/version-3.0/install/deploy-on-kubernetes/separating-storage-compute/config-ms.md
@@ -61,7 +61,7 @@ spec:
       memory: 4Gi
 ```
 
-Update the modified configuration to the [metadata management resources for 
storage and computing separation that need to be 
deployed](install-quickstart.md#step-2-quickly-deploy-a-storage-and-computing-separation-cluster).
+Update the modified configuration in the [metadata management resource for 
storage and computing separation](../../../gettingStarted/quick-start) that 
needs to be deployed.
 
 ## Configure FDB
 - Use ConfigMap
@@ -140,7 +140,7 @@ spec:
         mountPath: /etc/doris
 ```
 
-In actual deployment, configure the name and namespace of ConfigMap as needed, 
and configure the configuration information in the 
[storage-and-computing-separation metadata management 
resource](install-quickstart#step-2-quickly-deploy-a-storage-and-computing-separation-cluster)
 to be deployed according to the above sample format. The startup configuration 
file used by the MS service is named `doris_cloud.conf`, so the key of the 
ConfigMap for mounting the startup configuration must also  [...]
+In an actual deployment, configure the name and namespace of the ConfigMap as 
needed, and add the configuration to the [storage-and-computing-separation 
metadata management resource](../../../gettingStarted/quick-start) to be 
deployed, following the sample format above. The startup configuration file 
used by the MS service is named `doris_cloud.conf`, so the key of the ConfigMap 
for mounting the startup configuration must also be `doris_cloud.conf`. The 
startup configuration [...]
 
 :::tip Tip
 MS services need to use FDB as the backend metadata storage. FDB services must 
be deployed when deploying MS services.
diff --git a/versioned_docs/version-3.0/lakehouse/datalake-analytics/hudi.md 
b/versioned_docs/version-3.0/lakehouse/datalake-analytics/hudi.md
index d3f91cb0418..3a9ec7c12c7 100644
--- a/versioned_docs/version-3.0/lakehouse/datalake-analytics/hudi.md
+++ b/versioned_docs/version-3.0/lakehouse/datalake-analytics/hudi.md
@@ -68,7 +68,7 @@ Same as that in Hive Catalogs. See the relevant section in 
[Hive](./hive.md).
Spark will create the read-optimized table with the `_ro` suffix when 
generating a hudi MOR table. Doris will skip the log files when reading a 
read-optimized table. Doris determines whether a table is read-optimized not by 
the `_ro` suffix but by the hive inputformat. Users can observe whether the 
inputformat of the 'cow/mor/read optimize' table is the same through the `SHOW 
CREATE TABLE` command. In addition, Doris supports adding hoodie related 
configurations to catalog properties, which are  [...]
 
 ## Query Optimization
-Doris uses the parquet native reader to read the data files of the COW table, 
and uses the Java SDK (By calling hudi-bundle through JNI) to read the data 
files of the MOR table. In `upsert` scenario, there may still remains base 
files that have not been updated in the MOR table, which can be read through 
the parquet native reader. Users can view the execution plan of hudi scan 
through the [explain](../../query/query-analysis/query-analysis.md) command, 
where `hudiNativeReadSplits` indica [...]
+Doris uses the parquet native reader to read the data files of the COW table, 
and uses the Java SDK (by calling hudi-bundle through JNI) to read the data 
files of the MOR table. In the `upsert` scenario, there may still be base files 
in the MOR table that have not been updated, and these can be read through the 
parquet native reader. Users can view the execution plan of a hudi scan through 
the [explain](../../sql-manual/sql-statements/data-query/EXPLAIN) command, 
where `hudiNativeReadSplits`  [...]
 ```
 |0:VHUDI_SCAN_NODE                                                             
|
 |      table: minbatch_mor_rt                                                  
|
diff --git a/versioned_docs/version-3.0/lakehouse/external-statistics.md 
b/versioned_docs/version-3.0/lakehouse/external-statistics.md
index e4732114dd6..8c5754afe37 100644
--- a/versioned_docs/version-3.0/lakehouse/external-statistics.md
+++ b/versioned_docs/version-3.0/lakehouse/external-statistics.md
@@ -26,7 +26,7 @@ under the License.
 
 # External Table Statistics
 
-The collection method and content of external table statistical information 
are basically the same as those of internal tables. For detailed information, 
please refer to the [statistical information](../query/nereids/statistics.md). 
After version 2.0.3, Hive tables support automatic and sampling collection.
+The collection method and content of external table statistical information 
are basically the same as those of internal tables. For detailed information, 
please refer to the [statistical 
information](../admin-manual/system-tables/information_schema/statistics). 
After version 2.0.3, Hive tables support automatic and sampling collection.
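
For instance, a sampled collection can be triggered manually; a sketch (catalog and table names are hypothetical):

```sql
-- Collect sampled statistics for an external Hive table.
ANALYZE TABLE hive_catalog.tpch.lineitem WITH SAMPLE PERCENT 10;
```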
 
 # Note
 
diff --git a/versioned_docs/version-3.0/lakehouse/lakehouse-overview.md 
b/versioned_docs/version-3.0/lakehouse/lakehouse-overview.md
index c2939265a20..4b51b278aae 100644
--- a/versioned_docs/version-3.0/lakehouse/lakehouse-overview.md
+++ b/versioned_docs/version-3.0/lakehouse/lakehouse-overview.md
@@ -236,7 +236,7 @@ mysql> SHOW DATABASES;
 +-----------+
 ```
 
-Syntax help: [SWITCH](../sql-manual/sql-statements/session/context/SWITCH/)
+Syntax help: 
[SWITCH](../sql-manual/sql-statements/session/context/SWITCH-CATALOG)
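
For instance (assuming a catalog named `hive` has already been created):

```sql
-- Switch the session into the external catalog, then browse it.
SWITCH hive;
SHOW DATABASES;
```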
 
 5. Use the Catalog
 
diff --git 
a/versioned_docs/version-3.0/query-acceleration/materialized-view/async-materialized-view/faq.md
 
b/versioned_docs/version-3.0/query-acceleration/materialized-view/async-materialized-view/faq.md
index 80381a2c09e..351ef1b14a1 100644
--- 
a/versioned_docs/version-3.0/query-acceleration/materialized-view/async-materialized-view/faq.md
+++ 
b/versioned_docs/version-3.0/query-acceleration/materialized-view/async-materialized-view/faq.md
@@ -88,7 +88,7 @@ Unable to find a suitable base table for partitioning
 
 This error typically indicates that the SQL definition of the materialized 
view and the choice of partitioning fields do not allow incremental partition 
updates, resulting in an error during the creation of the partitioned 
materialized view.
 
-- For incremental partition updates, the materialized view's SQL definition 
and partitioning field selection must meet specific requirements. See 
[Materialized View Refresh 
Modes](../../../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-ASYNC-MATERIALIZED-VIEW#refreshmethod)
 for details.
+- For incremental partition updates, the materialized view's SQL definition 
and partitioning field selection must meet specific requirements. See 
[Materialized View Refresh 
Modes](../../../sql-manual/sql-statements/table-and-view/async-materialized-view/REFRESH-MATERIALIZED-VIEW)
 for details.
 
 - The latest code can indicate the reason for partition build failure, with 
error summaries and descriptions provided in Appendix 2.
 
diff --git 
a/versioned_docs/version-3.0/query-acceleration/materialized-view/async-materialized-view/functions-and-demands.md
 
b/versioned_docs/version-3.0/query-acceleration/materialized-view/async-materialized-view/functions-and-demands.md
index 153410d1730..716b6ed25c5 100644
--- 
a/versioned_docs/version-3.0/query-acceleration/materialized-view/async-materialized-view/functions-and-demands.md
+++ 
b/versioned_docs/version-3.0/query-acceleration/materialized-view/async-materialized-view/functions-and-demands.md
@@ -1241,7 +1241,7 @@ NeedRefreshPartitions: 
["p_20231023_20231024","p_20231019_20231020","p_20231020_
 - If set to 10, allows up to 10 seconds of delay between materialized view and 
base table data. The materialized view can be used for transparent rewriting 
during this 10-second window.
 :::
 
-For more details, see 
[TASKS](../../../sql-manual/sql-functions/table-valued-functions/tasks?_highlight=task)
+For more details, see 
[TASKS](../../../sql-manual/sql-functions/table-valued-functions/tasks)
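
For instance, a sketch of querying refresh tasks through the TVF:

```sql
-- List refresh tasks of asynchronous materialized views.
SELECT * FROM tasks("type" = "mv");
```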
 
 ### Querying Materialized View Jobs
 
diff --git 
a/versioned_docs/version-3.0/query-acceleration/materialized-view/async-materialized-view/use-guide.md
 
b/versioned_docs/version-3.0/query-acceleration/materialized-view/async-materialized-view/use-guide.md
index 8596daa615b..2fa76fc0bb6 100644
--- 
a/versioned_docs/version-3.0/query-acceleration/materialized-view/async-materialized-view/use-guide.md
+++ 
b/versioned_docs/version-3.0/query-acceleration/materialized-view/async-materialized-view/use-guide.md
@@ -52,7 +52,7 @@ When the following conditions are met, it is recommended to 
create partitioned m
 
 - The tables used by the materialized view, except for the partitioned table, 
do not change frequently.
 
-- The definition SQL of the materialized view and the partition field meet the 
requirements of partition derivation, that is, meet the requirements of 
partition incremental update. Detailed requirements can be found in 
[CREATE-ASYNC-MATERIALIZED-VIEW](../../../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-ASYNC-MATERIALIZED-VIEW/#refreshmethod)
+- The definition SQL of the materialized view and the partition field meet the 
requirements of partition derivation, that is, meet the requirements of 
partition incremental update. Detailed requirements can be found in 
[CREATE-ASYNC-MATERIALIZED-VIEW](../../../sql-manual/sql-statements/table-and-view/async-materialized-view/REFRESH-MATERIALIZED-VIEW)
 
 - The number of partitions in the materialized view is not large, as too many 
partitions will lead to excessively long partition materialized view 
construction time.
 
@@ -62,7 +62,7 @@ If partitioned materialized views cannot be constructed, you 
can consider choosi
 
 ## Common Usage of Partitioned Materialized Views
 
-When the materialized view's base table data volume is large and the base 
table is a partitioned table, if the materialized view's definition SQL and 
partition fields meet the requirements of partition derivation, this scenario 
is suitable for building partitioned materialized views. For detailed 
requirements of partition derivation, refer to 
[CREATE-ASYNC-MATERIALIZED-VIEW](../../../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-ASYNC-MATERIALIZED-VIEW/#refreshmethod
 [...]
+When the materialized view's base table data volume is large and the base 
table is a partitioned table, if the materialized view's definition SQL and 
partition fields meet the requirements of partition derivation, this scenario 
is suitable for building partitioned materialized views. For detailed 
requirements of partition derivation, refer to 
[CREATE-ASYNC-MATERIALIZED-VIEW](../../../sql-manual/sql-statements/table-and-view/async-materialized-view/REFRESH-MATERIALIZED-VIEW)
 and [Async Ma [...]
 
 The materialized view's partitions are created following the base table's 
partition mapping, generally having a 1:1 or 1:n relationship with the base 
table's partitions.
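
As a hedged sketch (the base table `orders`, partitioned by `dt`, is hypothetical), a partitioned materialized view that follows the base table's partitions might look like:

```sql
CREATE MATERIALIZED VIEW mv_daily_sales
BUILD IMMEDIATE
REFRESH AUTO ON MANUAL
PARTITION BY (dt)
DISTRIBUTED BY HASH(dt) BUCKETS 2
PROPERTIES ("replication_num" = "1")
AS
SELECT dt, SUM(amount) AS total_amount
FROM orders
GROUP BY dt;
```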
 
diff --git 
a/versioned_docs/version-3.0/query-data/udf/java-user-defined-function.md 
b/versioned_docs/version-3.0/query-data/udf/java-user-defined-function.md
index 96931bb2621..7250e717835 100644
--- a/versioned_docs/version-3.0/query-data/udf/java-user-defined-function.md
+++ b/versioned_docs/version-3.0/query-data/udf/java-user-defined-function.md
@@ -377,7 +377,7 @@ UDTF is supported starting from Doris version 3.0.
     }
     ```
 
-2. Register and create the Java-UDTF function in Doris. Two UDTF functions 
will be registered. Table functions in Doris may exhibit different behaviors 
due to the `_outer` suffix. For more details, refer to [OUTER 
combinator](../../sql-manual/sql-functions/table-functions/explode-numbers-outer.md).
+2. Register and create the Java-UDTF function in Doris. Two UDTF functions 
will be registered. Table functions in Doris may exhibit different behaviors 
due to the `_outer` suffix. For more details, refer to [OUTER 
combinator](../../sql-manual/sql-functions/table-functions/explode-numbers).
 For more syntax details, please refer to [CREATE 
FUNCTION](../../sql-manual/sql-statements/function/CREATE-FUNCTION).
 
     ```sql
diff --git 
a/versioned_docs/version-3.0/table-design/data-partitioning/dynamic-partitioning.md
 
b/versioned_docs/version-3.0/table-design/data-partitioning/dynamic-partitioning.md
index 407503f1710..2d09145253e 100644
--- 
a/versioned_docs/version-3.0/table-design/data-partitioning/dynamic-partitioning.md
+++ 
b/versioned_docs/version-3.0/table-design/data-partitioning/dynamic-partitioning.md
@@ -107,7 +107,7 @@ ALTER TABLE test_partition SET (
 
### Viewing the dynamic partition scheduling status
 
-Use 
[SHOW-DYNAMIC-PARTITION](../../sql-manual/sql-statements/Show-Statements/SHOW-DYNAMIC-PARTITION)
 to view the scheduling status of all dynamic partition tables in the current 
database:
+Use 
[SHOW-DYNAMIC-PARTITION](../../sql-manual/sql-statements/table-and-view/table/SHOW-DYNAMIC-PARTITION-TABLES)
 to view the scheduling status of all dynamic partition tables in the current 
database:
 
 ```sql
 SHOW DYNAMIC PARTITION TABLES;
diff --git a/versioned_docs/version-3.0/table-design/data-type.md 
b/versioned_docs/version-3.0/table-design/data-type.md
index 308641c5531..1cb699b4556 100644
--- a/versioned_docs/version-3.0/table-design/data-type.md
+++ b/versioned_docs/version-3.0/table-design/data-type.md
@@ -68,12 +68,12 @@ The list of data types supported by Doris is as follows:
 
 ## [Aggregation data 
type](../sql-manual/sql-data-types/data-type-overview#aggregation-types)
 
-| Type name      | Storeage (bytes)| Description                               
                   |
-| -------------- | --------------- | 
------------------------------------------------------------ |
-| [HLL](../sql-manual/sql-data-types/aggregate/HLL)            | Variable 
Length | HLL stands for HyperLogLog, is a fuzzy deduplication. It performs 
better than Count Distinct when dealing with large datasets.   The error rate 
of HLL is typically around 1%, and sometimes it can reach 2%. HLL cannot be 
used as a key column, and the aggregation type is HLL_UNION when creating a 
table.  Users do not need to specify the length or default value as it is 
internally controlled based on the aggr [...]
-| [BITMAP](../sql-manual/sql-data-types/aggregate/BITMAP)         | Variable 
Length | BITMAP type can be used in Aggregate tables, Unique tables or 
Duplicate tables.  - When used in a Unique table or a Duplicate table, BITMAP 
must be employed as non-key columns.  - When used in an Aggregate table, BITMAP 
must also serve as non-key columns, and the aggregation type must be set to 
BITMAP_UNION during table creation.  Users do not need to specify the length or 
default value as it is interna [...]
-| [QUANTILE_STATE](../sql-manual/sql-data-types/aggregate/QUANTILE_STATE) | 
Variable Length | A type used to calculate approximate quantile values.  When 
loading, it performs pre-aggregation for the same keys with different values. 
When the number of values does not exceed 2048, it records all data in detail. 
When the number of values is greater than 2048, it employs the TDigest 
algorithm to aggregate (cluster) the data and store the centroid points after 
clustering.   QUANTILE_STATE can [...]
-| [AGG_STATE](../sql-manual/sql-data-types/aggregate/AGG_STATE)       | 
Variable Length | Aggregate function can only be used with state/merge/union 
function combiners.   AGG_STATE cannot be used as a key column. When creating a 
table, the signature of the aggregate function needs to be declared alongside.  
 Users do not need to specify the length or default value. The actual data 
storage size depends on the function's implementation. |
+| Type name                                                               | 
Storage (bytes) | Description                                                  |
+|-------------------------------------------------------------------------| 
--------------- | ------------------------------------------------------------ |
+| [HLL](../sql-manual/sql-data-types/aggregate/HLL)                       | 
Variable Length | HLL stands for HyperLogLog; it is a fuzzy deduplication 
algorithm that performs better than Count Distinct when dealing with large 
datasets. The error rate of HLL is typically around 1%, and sometimes it can 
reach 2%. HLL cannot be used as a key column, and the aggregation type is 
HLL_UNION when creating a table. Users do not need to specify the length or 
default value as it is internally controlled based  [...]
+| [BITMAP](../sql-manual/sql-data-types/aggregate/BITMAP)                 | 
Variable Length | BITMAP type can be used in Aggregate tables, Unique tables or 
Duplicate tables.  - When used in a Unique table or a Duplicate table, BITMAP 
must be employed as non-key columns.  - When used in an Aggregate table, BITMAP 
must also serve as non-key columns, and the aggregation type must be set to 
BITMAP_UNION during table creation.  Users do not need to specify the length or 
default value as it is [...]
+| [QUANTILE_STATE](../sql-manual/sql-data-types/aggregate/QUANTILE-STATE) | 
Variable Length | A type used to calculate approximate quantile values.  When 
loading, it performs pre-aggregation for the same keys with different values. 
When the number of values does not exceed 2048, it records all data in detail. 
When the number of values is greater than 2048, it employs the TDigest 
algorithm to aggregate (cluster) the data and store the centroid points after 
clustering.   QUANTILE_STATE can [...]
+| [AGG_STATE](../sql-manual/sql-data-types/aggregate/AGG-STATE)           | 
Variable Length | Aggregate function can only be used with state/merge/union 
function combiners.   AGG_STATE cannot be used as a key column. When creating a 
table, the signature of the aggregate function needs to be declared alongside.  
 Users do not need to specify the length or default value. The actual data 
storage size depends on the function's implementation. |
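
As a brief sketch of how these types appear in a table definition (the schema is hypothetical):

```sql
CREATE TABLE site_uv (
    dt DATE,
    page VARCHAR(64),
    uv BITMAP BITMAP_UNION,  -- deduplicated visitor ids
    pv HLL HLL_UNION         -- approximate distinct counting
)
AGGREGATE KEY (dt, page)
DISTRIBUTED BY HASH(dt) BUCKETS 2
PROPERTIES ("replication_num" = "1");
```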
 
 ## [IP types](../sql-manual/sql-data-types/data-type-overview#ip-types)
 
diff --git a/versioned_docs/version-3.0/table-design/overview.md 
b/versioned_docs/version-3.0/table-design/overview.md
index 6e5fa917c66..798bd42cd05 100644
--- a/versioned_docs/version-3.0/table-design/overview.md
+++ b/versioned_docs/version-3.0/table-design/overview.md
@@ -34,7 +34,7 @@ In Doris, table names are case-sensitive by default. You can 
configure [lower_ca
 
 ## Table property
 
-In Doris, the CREATE TABLE statement can specify [table 
properties](../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-TABLE.md#properties),
 including:
+In Doris, the CREATE TABLE statement can specify [table 
properties](../sql-manual/sql-statements/table-and-view/table/CREATE-TABLE#properties),
 including:
 
 - **buckets**: Determines the distribution of data within the table.
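
A minimal sketch of specifying properties at creation time (the table and property values are illustrative):

```sql
CREATE TABLE t_props (
    id BIGINT,
    city VARCHAR(32)
)
DUPLICATE KEY (id)
DISTRIBUTED BY HASH(id) BUCKETS 10
PROPERTIES (
    "replication_num" = "1",
    "storage_medium" = "SSD"
);
```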
 


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]

