This is an automated email from the ASF dual-hosted git repository.

jiafengzheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git


The following commit(s) were added to refs/heads/master by this push:
     new ae4125aa154 fix
ae4125aa154 is described below

commit ae4125aa1545ea7e94b9a104fa1e29f8d452fb77
Author: jiafeng.zhang <zhang...@gmail.com>
AuthorDate: Tue Oct 25 19:27:57 2022 +0800

    fix
---
 docs/admin-manual/cluster-management/elastic-expansion.md         | 2 +-
 docs/admin-manual/cluster-management/upgrade.md                   | 2 +-
 docs/admin-manual/http-actions/fe/table-schema-action.md          | 2 +-
 docs/admin-manual/maint-monitor/disk-capacity.md                  | 2 +-
 docs/advanced/alter-table/schema-change.md                        | 4 ++--
 docs/data-operate/import/import-scenes/external-storage-load.md   | 8 ++++----
 docs/data-operate/import/import-scenes/jdbc-load.md               | 4 ++--
 docs/data-table/basic-usage.md                                    | 6 +++---
 docs/ecosystem/external-table/hive-bitmap-udf.md                  | 4 ++--
 .../Data-Definition-Statements/Alter/ALTER-TABLE-PARTITION.md     | 2 +-
 .../Data-Manipulation-Statements/Load/STREAM-LOAD.md              | 8 ++++----
 .../current/admin-manual/cluster-management/elastic-expansion.md  | 2 +-
 .../current/admin-manual/cluster-management/upgrade.md            | 2 +-
 .../current/admin-manual/http-actions/fe/table-schema-action.md   | 2 +-
 .../data-operate/import/import-scenes/external-storage-load.md    | 8 ++++----
 .../current/ecosystem/external-table/hive-bitmap-udf.md           | 4 ++--
 .../administrator-guide/operation/metadata-operation.md           | 4 ++--
 .../administrator-guide/alter-table/alter-table-rollup.md         | 4 ++--
 .../administrator-guide/operation/metadata-operation.md           | 4 ++--
 versioned_docs/version-0.15/extending-doris/logstash.md           | 2 +-
 .../version-0.15/extending-doris/udf/user-defined-function.md     | 2 +-
 21 files changed, 39 insertions(+), 39 deletions(-)

diff --git a/docs/admin-manual/cluster-management/elastic-expansion.md 
b/docs/admin-manual/cluster-management/elastic-expansion.md
index e8fd4525298..b10b7f57c68 100644
--- a/docs/admin-manual/cluster-management/elastic-expansion.md
+++ b/docs/admin-manual/cluster-management/elastic-expansion.md
@@ -140,7 +140,7 @@ DECOMMISSION clause:
 >              ```CANCEL ALTER SYSTEM DECOMMISSION BACKEND "be_host:be_heartbeat_service_port";```
 >      This command cancels the decommission. Once cancelled, the data on the BE stays at its current remaining volume, and Doris re-balances the load afterwards
 
-**For expansion and scaling of BE nodes in multi-tenant deployment 
environments, please refer to the [Multi-tenant Design 
Document](../multi-tenant).**
+**For expansion and scaling of BE nodes in multi-tenant deployment 
environments, please refer to the [Multi-tenant Design 
Document](../../multi-tenant).**
 
 ## Broker Expansion and Shrinkage
 
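For context on the statements this hunk documents, a minimal SQL sketch of the decommission workflow follows; the host and heartbeat port are placeholders:

```sql
-- Start decommissioning a BE; Doris first migrates its data to other nodes
ALTER SYSTEM DECOMMISSION BACKEND "be_host:9050";

-- Watch progress: the BE's TabletNum should trend toward 0
SHOW PROC '/backends';

-- Cancel the decommission; remaining data stays on the BE and Doris
-- re-balances the load afterwards
CANCEL DECOMMISSION BACKEND "be_host:9050";
```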
diff --git a/docs/admin-manual/cluster-management/upgrade.md 
b/docs/admin-manual/cluster-management/upgrade.md
index a04a69a5394..99b7cbea5dd 100644
--- a/docs/admin-manual/cluster-management/upgrade.md
+++ b/docs/admin-manual/cluster-management/upgrade.md
@@ -113,7 +113,7 @@ Therefore, it is recommended to upgrade some nodes and 
observe the business oper
 **Illegal rollback operation may cause data loss and damage.** 
 
 ## Documentation
-1. [Doris metadata design document](../../../community/design/metadata-design) 
+1. [Doris metadata design document](/community/design/metadata-design) 
 2. [Metadata Operations and 
Maintenance](../../admin-manual/maint-monitor/metadata-operation.md) 
 3. [Data replica 
management](../../admin-manual/maint-monitor/tablet-repair-and-balance.md)
 4. [Installation Deployment Document](../../install/install-deploy.md)
diff --git a/docs/admin-manual/http-actions/fe/table-schema-action.md 
b/docs/admin-manual/http-actions/fe/table-schema-action.md
index c2aaaaf7cc7..122b7ed06d9 100644
--- a/docs/admin-manual/http-actions/fe/table-schema-action.md
+++ b/docs/admin-manual/http-actions/fe/table-schema-action.md
@@ -97,7 +97,7 @@ None
        "count": 0
 }
 ```
-Note: the `http` method returns an additional `aggregation_type` field compared to the `http v2` method. `http v2` is enabled by setting `enable_http_server_v2`. For detailed parameter descriptions, see [FE parameter settings](../admin-manual/config/fe-config)
+Note: the `http` method returns an additional `aggregation_type` field compared to the `http v2` method. `http v2` is enabled by setting `enable_http_server_v2`. For detailed parameter descriptions, see [FE parameter settings](../../../admin-manual/config/fe-config)
 
 ## Examples
 
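A hedged example of calling the schema action described above; host, port, credentials, and database/table names are placeholders:

```shell
# Fetch a table's schema via the FE HTTP interface; the response shape under
# http v2 depends on enable_http_server_v2, as noted above
curl -u user:passwd http://fe_host:8030/api/example_db/example_tbl/_schema
```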
diff --git a/docs/admin-manual/maint-monitor/disk-capacity.md 
b/docs/admin-manual/maint-monitor/disk-capacity.md
index 59412f846b2..1f377faa897 100644
--- a/docs/admin-manual/maint-monitor/disk-capacity.md
+++ b/docs/admin-manual/maint-monitor/disk-capacity.md
@@ -162,6 +162,6 @@ When the disk capacity is higher than High Watermark or 
even Flood Stage, many o
 
         ```rm -rf data/0/12345/```
 
-    * Delete tablet metadata (refer to [Tablet metadata management 
tool](./tablet-meta-tool.md))
+    * Delete tablet metadata (refer to [Tablet metadata management 
tool](../../tablet-meta-tool))
 
         ```./lib/meta_tool --operation=delete_header --root_path=/path/to/root_path --tablet_id=12345 --schema_hash=352781111```
diff --git a/docs/advanced/alter-table/schema-change.md 
b/docs/advanced/alter-table/schema-change.md
index 23f9fa77084..bf0fa481c95 100644
--- a/docs/advanced/alter-table/schema-change.md
+++ b/docs/advanced/alter-table/schema-change.md
@@ -68,7 +68,7 @@ The basic process of executing a Schema Change is to generate a copy of the inde
 Before starting the conversion of historical data, Doris obtains the latest transaction ID and waits for all import transactions before this transaction ID to complete. This transaction ID becomes a watershed: Doris guarantees that all import tasks after the watershed generate data for both the original Index and the new Index, so that once the historical data conversion completes, the data in the new Index is guaranteed to be complete.
 ## Create Job
 
-The specific syntax for creating a Schema Change is described in the Schema Change section of the [ALTER TABLE COLUMN](../../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-COLUMN.md) help.
+The specific syntax for creating a Schema Change is described in the Schema Change section of the [ALTER TABLE COLUMN](../../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-COLUMN) help.
 
 The creation of Schema Change is an asynchronous process. After the job is 
submitted successfully, the user needs to view the job progress through the 
`SHOW ALTER TABLE COLUMN` command.
 ## View Job
@@ -282,5 +282,5 @@ SHOW ALTER TABLE COLUMN\G;
 
 ## More Help
 
-For more detailed syntax and best practices for Schema Change, see the [ALTER TABLE COLUMN](../../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-COLUMN.md) command manual. You can also enter `HELP ALTER TABLE COLUMN` on the MySQL client command line for more help.
+For more detailed syntax and best practices for Schema Change, see the [ALTER TABLE COLUMN](../../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-COLUMN) command manual. You can also enter `HELP ALTER TABLE COLUMN` on the MySQL client command line for more help.
 
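To make the asynchronous job flow above concrete, a minimal sketch (database, table, and column names are hypothetical):

```sql
-- Submit an asynchronous Schema Change job
ALTER TABLE example_db.example_tbl ADD COLUMN new_col INT DEFAULT "0" AFTER k1;

-- Poll progress; the job is done when State shows FINISHED
SHOW ALTER TABLE COLUMN\G;
```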
diff --git a/docs/data-operate/import/import-scenes/external-storage-load.md 
b/docs/data-operate/import/import-scenes/external-storage-load.md
index 50da5abc289..2475a2e5a31 100644
--- a/docs/data-operate/import/import-scenes/external-storage-load.md
+++ b/docs/data-operate/import/import-scenes/external-storage-load.md
@@ -35,7 +35,7 @@ Upload the files to be imported to HDFS. For specific commands, please refer to
 
 ### Start import
 
-HDFS Load uses essentially the same import statement as [Broker Load](../../../data-operate/import/import-way/broker-load-manual); you only need to replace the `WITH BROKER broker_name ()` clause with the following
+HDFS Load uses essentially the same import statement as [Broker Load](/docs/data-operate/import/import-way/broker-load-manual); you only need to replace the `WITH BROKER broker_name ()` clause with the following
 
 ```
   LOAD LABEL db_name.label_name 
@@ -78,11 +78,11 @@ Hdfs load creates an import statement. The import method is 
basically the same a
        );
    ```
 
-   For a description of the parameters, see [Broker Load](../../../data-operate/import/import-way/broker-load-manual); the HA cluster creation syntax can be viewed via `HELP BROKER LOAD`
+   For a description of the parameters, see [Broker Load](/docs/data-operate/import/import-way/broker-load-manual); the HA cluster creation syntax can be viewed via `HELP BROKER LOAD`
 
 3. Check import status
 
-   Broker Load is an asynchronous import method; the import result can be viewed with the [SHOW LOAD](../../../sql-manual/sql-reference/Show-Statements/SHOW-LOAD) command
+   Broker Load is an asynchronous import method; the import result can be viewed with the [SHOW LOAD](/docs/sql-manual/sql-reference/Show-Statements/SHOW-LOAD) command
    
    ```
    mysql> show load order by createtime desc limit 1\G;
@@ -128,7 +128,7 @@ This document mainly introduces how to import data stored in AWS S3. It also sup
 For other cloud storage systems, refer to their own documentation for S3-compatible details
 
 ### Start Loading
-As with [Broker Load](/data-operate/import/import-way/broker-load-manual), just replace `WITH BROKER broker_name ()` with
+As with [Broker Load](/docs/data-operate/import/import-way/broker-load-manual), just replace `WITH BROKER broker_name ()` with
 ```
     WITH S3
     (
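The hunk above cuts off at the `WITH S3` clause; for context, a hedged sketch of the full clause, with endpoint, region, and keys as placeholders:

```sql
WITH S3
(
    "AWS_ENDPOINT" = "s3.us-east-1.amazonaws.com",
    "AWS_ACCESS_KEY" = "your_access_key",
    "AWS_SECRET_KEY" = "your_secret_key",
    "AWS_REGION" = "us-east-1"
)
```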
diff --git a/docs/data-operate/import/import-scenes/jdbc-load.md 
b/docs/data-operate/import/import-scenes/jdbc-load.md
index b52621748cc..f3f4697d8af 100644
--- a/docs/data-operate/import/import-scenes/jdbc-load.md
+++ b/docs/data-operate/import/import-scenes/jdbc-load.md
@@ -35,7 +35,7 @@ The INSERT statement is used in a similar way to the INSERT 
statement used in da
 * INSERT INTO table VALUES(...)
 ````
 
-Here we only introduce the second way. For a detailed description of the 
INSERT command, see the 
[INSERT](/sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT)
 command documentation.
+Here we only introduce the second way. For a detailed description of the 
INSERT command, see the 
[INSERT](/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT)
 command documentation.
 
 ## Single write
 
@@ -160,5 +160,5 @@ Please note the following:
 
    As mentioned earlier, when using INSERT to import data, we recommend importing in batches rather than with single-row inserts.
 
-   At the same time, we can set a Label for each INSERT operation. Through the [Label mechanism](./load-atomicity.md), the idempotency and atomicity of operations can be guaranteed, so that data is neither lost nor duplicated. For the specific usage of Label in INSERT, refer to the [INSERT](/sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT) document.
+   At the same time, we can set a Label for each INSERT operation. Through the [Label mechanism](./load-atomicity.md), the idempotency and atomicity of operations can be guaranteed, so that data is neither lost nor duplicated. For the specific usage of Label in INSERT, refer to the [INSERT](/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/INSERT) document.
 
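A minimal sketch of the batched, labeled INSERT recommended above (database, table, and label names are hypothetical):

```sql
-- One batched INSERT with a Label: retrying with the same Label is idempotent,
-- so data is neither lost nor duplicated
INSERT INTO example_db.example_tbl WITH LABEL insert_label_20221025
VALUES (1, "a"), (2, "b"), (3, "c");
```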
diff --git a/docs/data-table/basic-usage.md b/docs/data-table/basic-usage.md
index 3620b4e3c6a..00c303e3504 100644
--- a/docs/data-table/basic-usage.md
+++ b/docs/data-table/basic-usage.md
@@ -103,11 +103,11 @@ Initially, a database can be created through root or 
admin users:
 CREATE DATABASE example_db;
 ```
 
-> All commands can use `HELP command` to see detailed syntax help, for example: `HELP CREATE DATABASE;`. You can also refer to the [SHOW CREATE DATABASE](/sql-manual/sql-reference/Show-Statements/SHOW-CREATE-DATABASE.md) command manual on the official website.
+> All commands can use `HELP command` to see detailed syntax help, for example: `HELP CREATE DATABASE;`. You can also refer to the [SHOW CREATE DATABASE](/docs/sql-manual/sql-reference/Show-Statements/SHOW-CREATE-DATABASE) command manual on the official website.
 >
 > If you don't know the full name of a command, you can use "help" plus a keyword for fuzzy matching. For example, typing `HELP CREATE` matches commands such as `CREATE DATABASE`, `CREATE TABLE`, and `CREATE USER`.
 
-After the database is created, you can view the database information through 
[SHOW DATABASES](/sql-manual/sql-reference/Show-Statements/SHOW-DATABASES).
+After the database is created, you can view the database information through 
[SHOW DATABASES](/docs/sql-manual/sql-reference/Show-Statements/SHOW-DATABASES).
 
 ```sql
 MySQL> SHOW DATABASES;
@@ -656,7 +656,7 @@ mysql> select sum(table1.pv) from table1 join [shuffle] 
table2 where table1.site
 
 When deploying multiple FE nodes, you can deploy a load balancing layer on top 
of multiple FEs to achieve high availability of Doris.
 
-Please refer to [Load 
Balancing](../admin-manual/cluster-management/load-balancing)
+Please refer to [Load 
Balancing](../../admin-manual/cluster-management/load-balancing)
 
 ## Data update and deletion
 
diff --git a/docs/ecosystem/external-table/hive-bitmap-udf.md 
b/docs/ecosystem/external-table/hive-bitmap-udf.md
index 7f37c791f5b..ca7a4bf46c1 100644
--- a/docs/ecosystem/external-table/hive-bitmap-udf.md
+++ b/docs/ecosystem/external-table/hive-bitmap-udf.md
@@ -57,7 +57,7 @@ CREATE TABLE IF NOT EXISTS `hive_table`(
 
    The Hive Bitmap UDF is used in Hive/Spark. First, you need to compile the FE to get hive-udf-jar-with-dependencies.jar.
    Compilation preparation: if you have already compiled the ldb source code, you can compile the FE directly; if not, you need to install thrift manually.
-   Reference: [Setting Up dev env for FE](https://doris.apache.org/community/developer-guide/fe-idea-dev).
+   Reference: [Setting Up dev env for FE](/community/developer-guide/fe-idea-dev).
 
 ```sql
 --clone doris code
@@ -106,4 +106,4 @@ select k1,bitmap_union(uuid) from hive_bitmap_table group 
by k1
 
 ## Hive Bitmap import into Doris
 
- see details: [Spark Load](../../data-operate/import/import-way/spark-load-manual) -> Basic operation -> Create load (Example 3: when the upstream data source is a Hive binary type table)
\ No newline at end of file
+ see details: [Spark Load](../../../data-operate/import/import-way/spark-load-manual) -> Basic operation -> Create load (Example 3: when the upstream data source is a Hive binary type table)
\ No newline at end of file
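For context, a hedged sketch of registering the UDFs in Hive once the jar is built; the HDFS path is a placeholder, and the class names should be verified against the jar you compiled:

```sql
ADD JAR hdfs://node:9000/hive-udf-jar-with-dependencies.jar;
CREATE TEMPORARY FUNCTION to_bitmap AS 'org.apache.doris.udf.ToBitmapUDAF';
CREATE TEMPORARY FUNCTION bitmap_union AS 'org.apache.doris.udf.BitmapUnionUDAF';

-- Then aggregate as in the example above
SELECT k1, bitmap_union(uuid) FROM hive_bitmap_table GROUP BY k1;
```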
diff --git 
a/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-PARTITION.md
 
b/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-PARTITION.md
index 9e50a2dd015..bb577ad928f 100644
--- 
a/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-PARTITION.md
+++ 
b/docs/sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-PARTITION.md
@@ -62,7 +62,7 @@ Notice:
 - The partition is left-closed and right-open. If the user specifies only the right boundary, the system determines the left boundary automatically
 - If the bucketing method is not specified, the bucketing method and bucket number used when creating the table are used automatically
 - If the bucketing method is specified, only the number of buckets can be modified, not the bucketing method or the bucketing columns. If the bucketing method is specified but the number of buckets is not, the default value `10` is used for the bucket number instead of the number specified when the table was created. If the number of buckets is modified, the bucketing method must be specified at the same time.
-- The ["key"="value"] section can set some attributes of the partition, see 
[CREATE 
TABLE](/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE)
+- The ["key"="value"] section can set some attributes of the partition, see 
[CREATE 
TABLE](/docs/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE)
 - If the user does not explicitly create a partition when creating a table, 
adding a partition by ALTER is not supported
 
 2. Delete the partition
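A minimal sketch of the rules above (table, partition, and column names are hypothetical):

```sql
-- Only the right boundary is given; the left boundary is inferred.
-- The property clause and bucket count are illustrative.
ALTER TABLE example_db.example_tbl
ADD PARTITION p20221101 VALUES LESS THAN ("2022-11-01")
("replication_num" = "1")
DISTRIBUTED BY HASH(k1) BUCKETS 20;
```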
diff --git 
a/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.md
 
b/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.md
index c2d1d29bade..210c4706c6f 100644
--- 
a/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.md
+++ 
b/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Load/STREAM-LOAD.md
@@ -418,21 +418,21 @@ curl --location-trusted -u root -H "columns: 
k1,k2,source_sequence,v1,v2" -H "fu
 
 4. Label, import transaction, multi-table atomicity
 
-   All import tasks in Doris are atomic, and the import of multiple tables within the same import task also guarantees atomicity. Doris can additionally use the Label mechanism to ensure that imported data is neither lost nor duplicated. For details, see the [Import Transactions and Atomicity](../../../../../data-operate/import/import-scenes/load-atomicity.md) documentation.
+   All import tasks in Doris are atomic, and the import of multiple tables within the same import task also guarantees atomicity. Doris can additionally use the Label mechanism to ensure that imported data is neither lost nor duplicated. For details, see the [Import Transactions and Atomicity](../../../../../data-operate/import/import-scenes/load-atomicity) documentation.
 
 5. Column mapping, derived columns and filtering
 
-   Doris supports very rich column transformation and filtering operations in import statements. Most built-in functions and UDFs are supported. For how to use this feature correctly, please refer to the [Column Mapping, Conversion and Filtering](../../../../../data-operate/import/import-scenes/load-data-convert,md) document.
+   Doris supports very rich column transformation and filtering operations in import statements. Most built-in functions and UDFs are supported. For how to use this feature correctly, please refer to the [Column Mapping, Conversion and Filtering](../../../../../data-operate/import/import-scenes/load-data-convert) document.
 
 6. Error data filtering
 
    Doris import tasks can tolerate a portion of malformed data. The tolerance ratio is set via `max_filter_ratio`. The default is 0, meaning the entire import task fails if any row of data is erroneous. If the user wants to ignore some problematic rows, this parameter can be set to a value between 0 and 1, and Doris will automatically skip rows whose data format is incorrect.
 
-   For how the tolerance rate is calculated, please refer to the [Column Mapping, Conversion and Filtering](../../../../../data-operate/import/import-scenes/load-data-convert.md) document.
+   For how the tolerance rate is calculated, please refer to the [Column Mapping, Conversion and Filtering](../../../../../data-operate/import/import-scenes/load-data-convert) document.
 
 7. Strict Mode
 
-   The `strict_mode` attribute sets whether the import task runs in strict mode. Strict mode affects the results of column mapping, transformation, and filtering. For a detailed description of strict mode, see the [strict mode](../../../../../data-operate/import/import-scenes/load-strict-moded.md) documentation.
+   The `strict_mode` attribute sets whether the import task runs in strict mode. Strict mode affects the results of column mapping, transformation, and filtering. For a detailed description of strict mode, see the [strict mode](../../../../../data-operate/import/import-scenes/load-strict-moded) documentation.
 
 8. Timeout
 
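A hedged sketch combining the Label, tolerance, and strict-mode parameters discussed above; host, port, credentials, and file/table names are placeholders:

```shell
curl --location-trusted -u user:passwd \
    -H "label:load_20221025_001" \
    -H "max_filter_ratio:0.1" \
    -H "strict_mode:true" \
    -T data.csv \
    http://fe_host:8030/api/example_db/example_tbl/_stream_load
```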
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/cluster-management/elastic-expansion.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/cluster-management/elastic-expansion.md
index dbd35e6e156..1714d683549 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/cluster-management/elastic-expansion.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/cluster-management/elastic-expansion.md
@@ -134,7 +134,7 @@ The DECOMMISSION statement is as follows:
     >                 ```CANCEL DECOMMISSION BACKEND "be_host:be_heartbeat_service_port";```  
     >         This command cancels the decommission. Once cancelled, the data on that BE stays at its current remaining volume, and Doris re-balances the load afterwards
 
-**For the expansion and shrinkage of BE nodes in a multi-tenant deployment environment, please refer to the [Multi-tenant Design Document](../multi-tenant).**
+**For the expansion and shrinkage of BE nodes in a multi-tenant deployment environment, please refer to the [Multi-tenant Design Document](../../multi-tenant).**
 
 ## Broker Expansion and Shrinkage
 
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/cluster-management/upgrade.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/cluster-management/upgrade.md
index befa55d6cfb..2fb5a18e5d0 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/cluster-management/upgrade.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/cluster-management/upgrade.md
@@ -111,7 +111,7 @@ Doris can be upgraded smoothly by rolling upgrade. It is recommended to follow
 **Illegal rollback operations may cause data loss and corruption.**
 
 ## Documentation related to the upgrade
-1. [Doris metadata design document](../../../../docusaurus-plugin-content-docs-community/current/design/metadata-design) 
+1. [Doris metadata design document](/community/design/metadata-design) 
 2. [Metadata Operations and Maintenance](../../admin-manual/maint-monitor/metadata-operation.md) 
 3. [Data replica management](../../admin-manual/maint-monitor/tablet-repair-and-balance.md)
 4. [Installation and Deployment Document](../../install/install-deploy.md)
\ No newline at end of file
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/http-actions/fe/table-schema-action.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/http-actions/fe/table-schema-action.md
index a4ae8f6e187..c04be9adb18 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/http-actions/fe/table-schema-action.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/http-actions/fe/table-schema-action.md
@@ -97,7 +97,7 @@ under the License.
        "count": 0
 }
 ```
-Note: the difference is that the `http` method returns an additional `aggregation_type` field compared to the `http v2` method; `http v2` is enabled via `enable_http_server_v2`. For detailed parameter descriptions, see [FE parameter settings](../../config/fe-config.md)
+Note: the difference is that the `http` method returns an additional `aggregation_type` field compared to the `http v2` method; `http v2` is enabled via `enable_http_server_v2`. For detailed parameter descriptions, see [FE parameter settings](../../../admin-manual/config/fe-config)
 
 ## Examples
 
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-scenes/external-storage-load.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-scenes/external-storage-load.md
index a0b28959e2f..7eac200d117 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-scenes/external-storage-load.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-scenes/external-storage-load.md
@@ -49,7 +49,7 @@ HDFS Load creates the import statement; the import method is basically the same as [Broker Load](../../../data-operat
 
 1. Create a table
 
-   Use the `CREATE TABLE` command to create a table in `demo` to hold the data to be imported. For the specific import method, see the [CREATE TABLE](../../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE) command manual. An example follows:
+   Use the `CREATE TABLE` command to create a table in `demo` to hold the data to be imported. For the specific import method, see the [CREATE TABLE](../../../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE) command manual. An example follows:
 
    ```sql
    CREATE TABLE IF NOT EXISTS load_hdfs_file_test
@@ -82,11 +82,11 @@ HDFS Load creates the import statement; the import method is basically the same as [Broker Load](../../../data-operat
        "max_filter_ratio"="0.1"
        );
    ```
-    For a description of the parameters, see [Broker Load](../../../data-operate/import/import-way/broker-load-manual); the HA cluster creation syntax can be viewed via `HELP BROKER LOAD`
+    For a description of the parameters, see [Broker Load](../../../../data-operate/import/import-way/broker-load-manual); the HA cluster creation syntax can be viewed via `HELP BROKER LOAD`
  
 3. Check the import status
    
-   Broker Load is an asynchronous import method; the import result can be viewed with the [SHOW LOAD](../../../sql-manual/sql-reference/Show-Statements/SHOW-LOAD) command
+   Broker Load is an asynchronous import method; the import result can be viewed with the [SHOW LOAD](../../../../sql-manual/sql-reference/Show-Statements/SHOW-LOAD) command
    
    ```
    mysql> show load order by createtime desc limit 1\G;
@@ -134,7 +134,7 @@ HDFS Load creates the import statement; the import method is basically the same as [Broker Load](../../../data-operat
 Other cloud storage systems can find S3-compatible information in their own documentation
 
 ### Start import
-The import method is basically the same as [Broker Load](../../../data-operate/import/import-way/broker-load-manual); just replace the `WITH BROKER broker_name ()` statement with the following
+The import method is basically the same as [Broker Load](../../../../data-operate/import/import-way/broker-load-manual); just replace the `WITH BROKER broker_name ()` statement with the following
 ```
     WITH S3
     (
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/external-table/hive-bitmap-udf.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/external-table/hive-bitmap-udf.md
index 3c5779bcd7c..24f0acfba38 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/external-table/hive-bitmap-udf.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/external-table/hive-bitmap-udf.md
@@ -59,7 +59,7 @@ CREATE TABLE IF NOT EXISTS `hive_table`(
 ### Using the Hive Bitmap UDF:
 
 The Hive Bitmap UDF is used in Hive/Spark. First, compile the FE to get hive-udf-jar-with-dependencies.jar.
-Compilation preparation: if you have already compiled the ldb source code, you can compile the FE directly; if not, you need to install thrift manually. See the compilation and installation section of [Setting up the FE dev environment](https://doris.apache.org/zh-CN/community/developer-guide/fe-idea-dev)
+Compilation preparation: if you have already compiled the ldb source code, you can compile the FE directly; if not, you need to install thrift manually. See the compilation and installation section of [Setting up the FE dev environment](/community/developer-guide/fe-idea-dev)
 
 ```sql
 --clone the Doris source code
@@ -115,4 +115,4 @@ select k1,bitmap_union(uuid) from hive_bitmap_table group 
by k1
 
 ## Importing Hive Bitmap into Doris
 
- See details: [Spark Load](../../data-operate/import/import-way/spark-load-manual) -> Basic operation -> Create load (Example 3: when the upstream data source is a Hive binary type table)
\ No newline at end of file
+ See details: [Spark Load](../../../data-operate/import/import-way/spark-load-manual) -> Basic operation -> Create load (Example 3: when the upstream data source is a Hive binary type table)
\ No newline at end of file
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/operation/metadata-operation.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/operation/metadata-operation.md
index 995f10f7ab8..47ab997cd7f 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/operation/metadata-operation.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-0.15/administrator-guide/operation/metadata-operation.md
@@ -28,11 +28,11 @@ under the License.
 
 This document describes how to manage Doris metadata in a real production environment, including the recommended deployment of FE nodes, some common operation methods, and solutions to common errors.
 
-Before reading this document, first read the [Doris metadata design document](../../internal/metadata-design.md) to understand how Doris metadata works.
+Before reading this document, first read the [Doris metadata design document](../../../internal/metadata-design) to understand how Doris metadata works.
 
 ## Important Tips
 
-* The current metadata design is not backward compatible. That is, if a new version introduces metadata structure changes (check whether a new VERSION was added in the `FeMetaVersion.java` file in the FE code), it is usually impossible to roll back to the old version after upgrading. Therefore, before upgrading the FE, be sure to test metadata compatibility following the steps in the [upgrade document](../../installing/upgrade.md).
+* The current metadata design is not backward compatible. That is, if a new version introduces metadata structure changes (check whether a new VERSION was added in the `FeMetaVersion.java` file in the FE code), it is usually impossible to roll back to the old version after upgrading. Therefore, before upgrading the FE, be sure to test metadata compatibility following the steps in the [upgrade document](../../../installing/upgrade).
 
 ## Metadata Directory Structure
 
diff --git 
a/versioned_docs/version-0.15/administrator-guide/alter-table/alter-table-rollup.md
 
b/versioned_docs/version-0.15/administrator-guide/alter-table/alter-table-rollup.md
index d7ef94e5d75..a273ab4907d 100644
--- 
a/versioned_docs/version-0.15/administrator-guide/alter-table/alter-table-rollup.md
+++ 
b/versioned_docs/version-0.15/administrator-guide/alter-table/alter-table-rollup.md
@@ -27,8 +27,8 @@ under the License.
 # Rollup
 
 Users can speed up queries by creating rollup tables. For the concept and 
usage of Rollup, please refer to [Data
- Model, ROLLUP and Prefix Index](../../getting-started/data-model-rollup) and 
- [Rollup and query](../../getting-started/hit-the-rollup).
+ Model, ROLLUP and Prefix Index](../../../getting-started/data-model-rollup) 
and 
+ [Rollup and query](../../../getting-started/hit-the-rollup).
 
 
 This document focuses on how to create a Rollup job, as well as some 
considerations and frequently asked questions about creating a Rollup.
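For context, a minimal sketch of creating a Rollup (table, rollup, and column names are hypothetical); like Schema Change, the job runs asynchronously:

```sql
ALTER TABLE example_db.example_tbl ADD ROLLUP r1(k1, k2, v1);

-- Poll the asynchronous Rollup job
SHOW ALTER TABLE ROLLUP;
```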
diff --git 
a/versioned_docs/version-0.15/administrator-guide/operation/metadata-operation.md
 
b/versioned_docs/version-0.15/administrator-guide/operation/metadata-operation.md
index d1688d27157..3f4d71ea35e 100644
--- 
a/versioned_docs/version-0.15/administrator-guide/operation/metadata-operation.md
+++ 
b/versioned_docs/version-0.15/administrator-guide/operation/metadata-operation.md
@@ -28,11 +28,11 @@ under the License.
 
 This document focuses on how to manage Doris metadata in a real production 
environment. It includes the proposed deployment of FE nodes, some commonly 
used operational methods, and common error resolution methods.
 
-Before reading this document, first read the [Doris metadata design document](../../internal/metadata-design) to understand how Doris metadata works.
+Before reading this document, first read the [Doris metadata design document](../../../internal/metadata-design) to understand how Doris metadata works.
 
 ## Important tips
 
-* Current metadata design is not backward compatible. That is, if the new 
version has a new metadata structure change (you can see whether there is a new 
VERSION in the `FeMetaVersion.java` file in the FE code), it is usually 
impossible to roll back to the old version after upgrading to the new version. 
Therefore, before upgrading FE, be sure to test metadata compatibility 
according to the operations in the [Upgrade Document](../../installing/upgrade).
+* Current metadata design is not backward compatible. That is, if the new 
version has a new metadata structure change (you can see whether there is a new 
VERSION in the `FeMetaVersion.java` file in the FE code), it is usually 
impossible to roll back to the old version after upgrading to the new version. 
Therefore, before upgrading FE, be sure to test metadata compatibility 
according to the operations in the [Upgrade 
Document](../../../installing/upgrade).
 
 ## Metadata catalog structure
 
diff --git a/versioned_docs/version-0.15/extending-doris/logstash.md 
b/versioned_docs/version-0.15/extending-doris/logstash.md
index 0bcc143633c..b60a2bb2888 100644
--- a/versioned_docs/version-0.15/extending-doris/logstash.md
+++ b/versioned_docs/version-0.15/extending-doris/logstash.md
@@ -28,7 +28,7 @@ under the License.
 
 This Logstash output plugin sends data to Doris. It interacts with the Doris FE HTTP interface over the HTTP protocol and imports data through Doris's Stream Load.
 
-[Learn more about Doris Stream Load 
](../administrator-guide/load-data/stream-load-manual/)
+[Learn more about Doris Stream Load 
](../../administrator-guide/load-data/stream-load-manual/)
 
 [Learn more about Doris](http://doris.apache.org/master/zh-CN/)
 
diff --git 
a/versioned_docs/version-0.15/extending-doris/udf/user-defined-function.md 
b/versioned_docs/version-0.15/extending-doris/udf/user-defined-function.md
index 34d1c0d49f1..280f18f63f3 100644
--- a/versioned_docs/version-0.15/extending-doris/udf/user-defined-function.md
+++ b/versioned_docs/version-0.15/extending-doris/udf/user-defined-function.md
@@ -34,7 +34,7 @@ There are two types of analysis requirements that UDF can 
meet: UDF and UDAF. UD
 
 This document mainly describes how to write a custom UDF function and how to 
use it in Doris.
 
-If you use UDFs to extend Doris's analytic capabilities and want to contribute your own UDF functions back to the Doris community for other users, see the [Contribute UDF](https://doris.apache.org/zh-CN/docs/0.15/extending-doris/udf/contribute-udf) document.
+If you use UDFs to extend Doris's analytic capabilities and want to contribute your own UDF functions back to the Doris community for other users, see the [Contribute UDF](https://doris.apache.org/docs/0.15/extending-doris/udf/contribute-udf) document.
 
 ## Writing UDF functions
 

