This is an automated email from the ASF dual-hosted git repository.

kassiez pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git


The following commit(s) were added to refs/heads/master by this push:
     new c2bc50234ad [fix] Fix crondeploy by Java UDF Docs Format (#1418)
c2bc50234ad is described below

commit c2bc50234ad6e9b475da96c87201d756fd031cf4
Author: KassieZ <139741991+kass...@users.noreply.github.com>
AuthorDate: Thu Nov 28 12:11:23 2024 +0800

    [fix] Fix crondeploy by Java UDF Docs Format (#1418)
    
    ## Versions
    
    - [x] dev
    - [ ] 3.0
    - [ ] 2.1
    - [ ] 2.0
    
    ## Languages
    
    - [x] Chinese
    - [x] English
    
    ## Docs Checklist
    
    - [ ] Checked by AI
    - [ ] Test Cases Built
    
    ## Version Mgt
    https://github.com/apache/doris-website/pull/1377
---
 .../gettingStarted/demo-block/page-hero-2.tsx      |   8 +-
 .../cluster-deployment/standard-deployment.md      |   2 +-
 .../high-concurrent-point-query.md                 |   2 +-
 docs/query-data/udf/java-user-defined-function.md  | 300 +++++++++---------
 gettingStarted/demo-block/page-hero-2.tsx          |   8 +-
 .../cluster-deployment/standard-deployment.md      |   2 +-
 .../high-concurrent-point-query.md                 |   2 +-
 .../query-data/udf/java-user-defined-function.md   | 338 +++++++++++----------
 .../version-1.2/install/standard-deployment.md     |   2 +-
 .../high-concurrent-point-query.md                 |   2 +-
 .../cluster-deployment/standard-deployment.md      |   2 +-
 .../cluster-deployment/standard-deployment.md      |   2 +-
 .../high-concurrent-point-query.md                 |   2 +-
 .../cluster-deployment/standard-deployment.md      |   2 +-
 .../high-concurrent-point-query.md                 |   2 +-
 15 files changed, 342 insertions(+), 334 deletions(-)

diff --git a/common_docs_zh/gettingStarted/demo-block/page-hero-2.tsx b/common_docs_zh/gettingStarted/demo-block/page-hero-2.tsx
index 8c6d822dffa..80d1a3567b9 100644
--- a/common_docs_zh/gettingStarted/demo-block/page-hero-2.tsx
+++ b/common_docs_zh/gettingStarted/demo-block/page-hero-2.tsx
@@ -22,8 +22,8 @@ export default function PageHero2() {
                     </div>
                 </div> */}
                 <div className="home-page-hero-right">
-                    <a className="latest-button-CN" href={`/zh-CN/docs${currentVersion === '' ? '' : `/${currentVersion}`}/query/nereids/nereids-new`}>
-                        <div className="home-page-hero-button-label"><div>数据查询</div></div>
+                    <a className="latest-button-CN" href={`/zh-CN/docs${currentVersion === '' ? '' : `/${currentVersion}`}/query-acceleration/tuning/overview`}>
+                        <div className="home-page-hero-button-label"><div>查询加速</div></div>
                         <div className="latest-button-title">
                             {/* <div className="home-page-hero-button-icon">
                                 <svg width="24px" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg">
@@ -31,9 +31,9 @@ export default function PageHero2() {
                                     <path fill="none" d="M0 0h24v24H0Z"></path>
                                 </svg>
                             </div> */}
-                            <div style={{ marginBottom: 10 }}>全新优化器</div>
+                            <div style={{ marginBottom: 10 }}>查询调优原理与实践</div>
                         </div>
-                        <div style={{ fontSize: 12, marginBottom: 20 }}>现代架构的全新查询优化器,能够更高效处理当前 Doris 场景的查询请求,同时提供更好的扩展性。</div>
+                        <div style={{ fontSize: 12, marginBottom: 20 }}>查询性能调优是一个系统工程 Doris 为用户提供了各个维度的工具,方便从不同层面进行性能问题的诊断、定位、分析与解决。</div>
                     </a>
                    <a className="latest-button-CN" href={`/zh-CN/docs${currentVersion === '' ? '' : `/${currentVersion}`}/table-design/index/inverted-index`}>
                         <div className="latest-button-title">
diff --git a/docs/install/cluster-deployment/standard-deployment.md b/docs/install/cluster-deployment/standard-deployment.md
index 7d79e06672b..6a7c4657051 100644
--- a/docs/install/cluster-deployment/standard-deployment.md
+++ b/docs/install/cluster-deployment/standard-deployment.md
@@ -221,7 +221,7 @@ Doris instances communicate directly over the network, requiring the following p
 | Instance | Port                   | Default Port | Communication Direction     | Description                                                  |
 | -------- | ---------------------- | ------------ |-----------------------------| ------------------------------------------------------------ |
 | BE       | be_port                | 9060         | FE --> BE                   | thrift server port on BE, receiving requests from FE         |
-| BE       | webserver_port         | 8040         | BE <--> BE                  | http server port on BE                                       |
+| BE       | webserver_port         | 8040         | BE <--> BE, Client <--> FE  | http server port on BE                                       |
 | BE       | heartbeat_service_port | 9050         | FE --> BE                   | heartbeat service port (thrift) on BE, receiving heartbeats from FE |
 | BE       | brpc_port              | 8060         | FE <--> BE,BE <--> BE       | brpc port on BE, used for communication between BEs          |
 | FE       | http_port              | 8030         | FE <--> FE,Client <--> FE   | http server port on FE                                       |
diff --git a/docs/query-acceleration/high-concurrent-point-query.md b/docs/query-acceleration/high-concurrent-point-query.md
index f0e417ffdf1..2d626436f5b 100644
--- a/docs/query-acceleration/high-concurrent-point-query.md
+++ b/docs/query-acceleration/high-concurrent-point-query.md
@@ -82,7 +82,7 @@ PROPERTIES (
 5. Enabling rowstore may lead to space expansion and occupy more disk space. For scenarios where querying only specific columns is needed, starting from Doris 2.1, it is recommended to use `"row_store_columns"="key,v1,v2"` to specify certain columns for rowstore storage. Queries can then selectively access these columns, for example:
 
    ```sql
-   SELECT key, v1, v2 FROM tbl_point_query WHERE key = 1
+   SELECT `key`, v1, v2 FROM tbl_point_query WHERE key = 1
    ```
 
 ## Using `PreparedStatement`
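(As a hedged illustration of the pattern this heading introduces, not part of the commit: a minimal JDBC sketch for the `tbl_point_query` example above; the FE address, database name, user, and password are placeholders.)

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PointQueryDemo {
    public static void main(String[] args) throws Exception {
        // useServerPrepStmts=true asks the MySQL driver to prepare the
        // statement on the server side rather than emulating it client-side.
        String url = "jdbc:mysql://127.0.0.1:9030/demo?useServerPrepStmts=true";
        try (Connection conn = DriverManager.getConnection(url, "root", "");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT `key`, v1, v2 FROM tbl_point_query WHERE `key` = ?")) {
            ps.setInt(1, 1); // bind the point-lookup key
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getInt(1) + ", " + rs.getDouble(2));
                }
            }
        }
    }
}
```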
diff --git a/docs/query-data/udf/java-user-defined-function.md b/docs/query-data/udf/java-user-defined-function.md
index 1887debd08e..2cafc39e60b 100644
--- a/docs/query-data/udf/java-user-defined-function.md
+++ b/docs/query-data/udf/java-user-defined-function.md
@@ -1,6 +1,6 @@
 ---
 {
-"title": "Java UDF、UDAF、UDTF",
+"title": "Java UDF, UDAF, UDTF",
 "language": "en"
 }
 ---
@@ -144,182 +144,186 @@ When writing a `UDAF` using Java, there are some functions that must be implemen
 
 1. Write the corresponding Java UDAF code and package it into a JAR file.
 
-    <details><summary> Example 1: SimpleDemo will implement a simple function similar to sum, where the input parameter is INT and the output parameter is INT.</summary> 
+<details>
+<summary> Example 1: SimpleDemo will implement a simple function similar to sum, where the input parameter is INT and the output parameter is INT.</summary> 
 
-    ```java
-    package org.apache.doris.udf;
-
-    import java.io.DataInputStream;
-    import java.io.DataOutputStream;
-    import java.io.IOException;
-    import java.util.logging.Logger;
-
-    public class SimpleDemo  {
+```java
+package org.apache.doris.udf;
 
-        Logger log = Logger.getLogger("SimpleDemo");
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.logging.Logger;
 
-        //Need an inner class to store data
-        /*required*/
-        public static class State {
-            /*some variables if you need */
-            public int sum = 0;
-        }
+public class SimpleDemo  {
 
-        /*required*/
-        public State create() {
-            /* here could do some init work if needed */
-            return new State();
-        }
+    Logger log = Logger.getLogger("SimpleDemo");
 
-        /*required*/
-        public void destroy(State state) {
-            /* here could do some destroy work if needed */
-        }
-
-        /*Not Required*/
-        public void reset(State state) {
-            /*if you want this udaf function can work with window function.*/
-            /*Must impl this, it will be reset to init state after calculate every window frame*/
-            state.sum = 0;
-        }
+    //Need an inner class to store data
+    /*required*/
+    public static class State {
+        /*some variables if you need */
+        public int sum = 0;
+    }
 
-        /*required*/
-        //first argument is State, then other types your input
-        public void add(State state, Integer val) throws Exception {
-            /* here doing update work when input data*/
-            if (val != null) {
-                state.sum += val;
-            }
-        }
+    /*required*/
+    public State create() {
+        /* here could do some init work if needed */
+        return new State();
+    }
 
-        /*required*/
-        public void serialize(State state, DataOutputStream out) throws Exception {
-            /* serialize some data into buffer */
-            out.writeInt(state.sum);
-        }
+    /*required*/
+    public void destroy(State state) {
+        /* here could do some destroy work if needed */
+    }
 
-        /*required*/
-        public void deserialize(State state, DataInputStream in) throws Exception {
-            /* deserialize get data from buffer before you put */
-            int val = in.readInt();
-            state.sum = val;
-        }
+    /*Not Required*/
+    public void reset(State state) {
+        /*if you want this udaf function can work with window function.*/
+    /*Must impl this, it will be reset to init state after calculate every window frame*/
+        state.sum = 0;
+    }
 
-        /*required*/
-        public void merge(State state, State rhs) throws Exception {
-            /* merge data from state */
-            state.sum += rhs.sum;
+    /*required*/
+    //first argument is State, then other types your input
+    public void add(State state, Integer val) throws Exception {
+        /* here doing update work when input data*/
+        if (val != null) {
+            state.sum += val;
         }
+    }
 
-        /*required*/
-        //return Type you defined
-        public Integer getValue(State state) throws Exception {
-            /* return finally result */
-            return state.sum;
-        }
+    /*required*/
+    public void serialize(State state, DataOutputStream out) throws Exception {
+        /* serialize some data into buffer */
+        out.writeInt(state.sum);
     }
 
-    ```
-    </details>
+    /*required*/
+    public void deserialize(State state, DataInputStream in) throws Exception {
+        /* deserialize get data from buffer before you put */
+        int val = in.readInt();
+        state.sum = val;
+    }
 
+    /*required*/
+    public void merge(State state, State rhs) throws Exception {
+        /* merge data from state */
+        state.sum += rhs.sum;
+    }
 
-    <details><summary> Example 2: MedianUDAF is a function that calculates the median. The input types are (DOUBLE, INT), and the output type is DOUBLE. </summary>
+    /*required*/
+    //return Type you defined
+    public Integer getValue(State state) throws Exception {
+        /* return finally result */
+        return state.sum;
+    }
+}
+```
 
-    ```java
-    package org.apache.doris.udf.demo;  
-    
-    import java.io.DataInputStream;  
-    import java.io.DataOutputStream;
-    import java.io.IOException;
-    import java.math.BigDecimal;  
-    import java.util.Arrays;  
-    import java.util.logging.Logger;  
-
-    /* UDAF to calculate the median */  
-    public class MedianUDAF {  
-        Logger log = Logger.getLogger("MedianUDAF");  
-
-        // State storage  
-        public static class State {  
-            // Precision of the return result  
-            int scale = 0;  
-            // Whether it is the first time to execute the add method for a certain aggregation condition under a certain tablet  
-            boolean isFirst = true;  
-            // Data storage  
-            public StringBuilder stringBuilder;  
-        }  
+</details>
 
-        // Initialize the state  
-        public State create() {  
-            State state = new State();  
-            // Pre-initialize based on the amount of data that needs to be aggregated under each aggregation condition of each tablet to increase performance  
-            state.stringBuilder = new StringBuilder(1000);  
-            return state;  
-        }  
 
-        // Process each data under respective aggregation conditions for each tablet  
-        public void add(State state, Double val, int scale) {  
-            if (val != null && state.isFirst) {  
-                state.stringBuilder.append(scale).append(",").append(val).append(",");  
-                state.isFirst = false;  
-            } else if (val != null) {  
-                state.stringBuilder.append(val).append(",");  
-            }  
-        }  
+<details>
+<summary> Example 2: MedianUDAF is a function that calculates the median. The input types are (DOUBLE, INT), and the output type is DOUBLE. </summary>
 
-        // Data needs to be output for aggregation after processing  
-        public void serialize(State state, DataOutputStream out) throws IOException {  
-            // Currently, only DataOutputStream is provided. If serialization of objects is required, methods such as concatenating strings, converting to JSON, or serializing into byte arrays can be considered  
-            // If the State object needs to be serialized, it may be necessary to implement a serialization interface for the State inner class  
-            // Ultimately, everything needs to be transmitted via DataOutputStream  
-            out.writeUTF(state.stringBuilder.toString());  
+```java
+package org.apache.doris.udf.demo;  
+
+import java.io.DataInputStream;  
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.math.BigDecimal;  
+import java.util.Arrays;  
+import java.util.logging.Logger;  
+
+/* UDAF to calculate the median */  
+public class MedianUDAF {  
+    Logger log = Logger.getLogger("MedianUDAF");  
+
+    // State storage  
+    public static class State {  
+        // Precision of the return result  
+        int scale = 0;  
+        // Whether it is the first time to execute the add method for a certain aggregation condition under a certain tablet  
+        boolean isFirst = true;  
+        // Data storage  
+        public StringBuilder stringBuilder;  
+    }  
+
+    // Initialize the state  
+    public State create() {  
+        State state = new State();  
+        // Pre-initialize based on the amount of data that needs to be aggregated under each aggregation condition of each tablet to increase performance  
+        state.stringBuilder = new StringBuilder(1000);  
+        return state;  
+    }  
+
+    // Process each data under respective aggregation conditions for each tablet  
+    public void add(State state, Double val, int scale) {  
+        if (val != null && state.isFirst) {  
+            state.stringBuilder.append(scale).append(",").append(val).append(",");  
+            state.isFirst = false;  
+        } else if (val != null) {  
+            state.stringBuilder.append(val).append(",");  
         }  
-
-        // Obtain the output data from the data processing execution unit  
-        public void deserialize(State state, DataInputStream in) throws IOException {  
-            String string = in.readUTF();  
-            state.scale = Integer.parseInt(String.valueOf(string.charAt(0)));  
-            StringBuilder stringBuilder = new StringBuilder(string.substring(2));  
-            state.stringBuilder = stringBuilder;   
+    }  
+
+    // Data needs to be output for aggregation after processing  
+    public void serialize(State state, DataOutputStream out) throws IOException {  
+        // Currently, only DataOutputStream is provided. If serialization of objects is required, methods such as concatenating strings, converting to JSON, or serializing into byte arrays can be considered  
+        // If the State object needs to be serialized, it may be necessary to implement a serialization interface for the State inner class  
+        // Ultimately, everything needs to be transmitted via DataOutputStream  
+        out.writeUTF(state.stringBuilder.toString());  
+    }  
+
+    // Obtain the output data from the data processing execution unit  
+    public void deserialize(State state, DataInputStream in) throws IOException {  
+        String string = in.readUTF();  
+        state.scale = Integer.parseInt(String.valueOf(string.charAt(0)));  
+        StringBuilder stringBuilder = new StringBuilder(string.substring(2));  
+        state.stringBuilder = stringBuilder;   
+    }  
+
+    // The aggregation execution unit merges the processing results of data under certain aggregation conditions for a given key. The state1 parameter is the initialized instance during the first merge of each key  
+    public void merge(State state1, State state2) {  
+        state1.scale = state2.scale;  
+        state1.stringBuilder.append(state2.stringBuilder.toString());  
+    }  
+
+    // Output the final result after merging the data for each key  
+    public Double getValue(State state) {  
+        String[] strings = state.stringBuilder.toString().split(",");  
+        double[] doubles = new double[strings.length];  
+        for (int i = 0; i < strings.length - 1; i++) {  
+            doubles[i] = Double.parseDouble(strings[i + 1]);  
         }  
 
-        // The aggregation execution unit merges the processing results of data under certain aggregation conditions for a given key. The state1 parameter is the initialized instance during the first merge of each key  
-        public void merge(State state1, State state2) {  
-            state1.scale = state2.scale;  
-            state1.stringBuilder.append(state2.stringBuilder.toString());  
+        Arrays.sort(doubles);  
+        double n = doubles.length;  
+        if (n == 0) {  
+            return 0.0;  
         }  
+        double index = (n - 1) / 2.0;  
 
-        // Output the final result after merging the data for each key  
-        public Double getValue(State state) {  
-            String[] strings = state.stringBuilder.toString().split(",");  
-            double[] doubles = new double[strings.length];  
-            for (int i = 0; i < strings.length - 1; i++) {  
-                doubles[i] = Double.parseDouble(strings[i + 1]);  
-            }  
+        int low = (int) Math.floor(index);  
+        int high = (int) Math.ceil(index);  
 
-            Arrays.sort(doubles);  
-            double n = doubles.length;  
-            if (n == 0) {  
-                return 0.0;  
-            }  
-            double index = (n - 1) / 2.0;  
+        double value = low == high ? (doubles[low] + doubles[high]) / 2 : doubles[high];  
 
-            int low = (int) Math.floor(index);  
-            int high = (int) Math.ceil(index);  
+        BigDecimal decimal = new BigDecimal(value);  
+        return decimal.setScale(state.scale, BigDecimal.ROUND_HALF_UP).doubleValue();  
+    }  
 
-            double value = low == high ? (doubles[low] + doubles[high]) / 2 : doubles[high];  
-
-            BigDecimal decimal = new BigDecimal(value);  
-            return decimal.setScale(state.scale, BigDecimal.ROUND_HALF_UP).doubleValue();  
-        }  
-
-        // Executed after each execution unit completes  
-        public void destroy(State state) {  
-        }  
-    }
-    ```
+    // Executed after each execution unit completes  
+    public void destroy(State state) {  
+    }  
+}
+```
+    
 </details>
 
+
 2. Register and create the Java-UDAF function in Doris. For more syntax details, please refer to [CREATE FUNCTION](../../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-FUNCTION.md).
 
     ```sql
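    -- A hedged sketch of the registration statement; the JAR path and symbol
    -- below are placeholders, not taken from this commit:
    CREATE AGGREGATE FUNCTION simple_demo(INT) RETURNS INT PROPERTIES (
        "file" = "file:///path/to/java-udaf-demo.jar",
        "symbol" = "org.apache.doris.udf.SimpleDemo",
        "always_nullable" = "true",
        "type" = "JAVA_UDF"
    );
    ```

    Once registered, the function can be invoked like a built-in aggregate, e.g. `SELECT simple_demo(id) FROM test_table GROUP BY id;`.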
diff --git a/gettingStarted/demo-block/page-hero-2.tsx b/gettingStarted/demo-block/page-hero-2.tsx
index 54984612c72..06aa0eb3276 100644
--- a/gettingStarted/demo-block/page-hero-2.tsx
+++ b/gettingStarted/demo-block/page-hero-2.tsx
@@ -22,8 +22,8 @@ export default function PageHero1() {
                     </div>
                 </div> */}
                 <div className="home-page-hero-right">
-                    <a className="latest-button" href={`/docs${currentVersion === '' ? '' : `/${currentVersion}`}/query/nereids/nereids-new`}>
-                        <div className="home-page-hero-button-label"><div>Data Query</div></div>
+                    <a className="latest-button" href={`/docs${currentVersion === '' ? '' : `/${currentVersion}`}/query-acceleration/tuning/overview`}>
+                        <div className="home-page-hero-button-label"><div>Query Acceleration</div></div>
                         <div className="latest-button-title">
                             {/* <div className="home-page-hero-button-icon">
                                 <svg width="24px" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg">
@@ -31,9 +31,9 @@ export default function PageHero1() {
                                     <path fill="none" d="M0 0h24v24H0Z"></path>
                                 </svg>
                             </div> */}
-                            <div style={{ marginBottom: 10 }}>Cost-Based Optimizer </div>
+                            <div style={{ marginBottom: 10 }}>Best Practice of Tuning </div>
                         </div>
-                        <div style={{ fontSize: 12, marginBottom: 20 }}>To build an open, high-performance, cost-effective and unified log storage and analysis platform.</div>
+                        <div style={{ fontSize: 12, marginBottom: 20 }}>Query performance tuning is systematic engineering that requires optimization of the database system from multiple levels and dimensions.</div>
                     </a>
                    <a className="latest-button" href={`/docs${currentVersion === '' ? '' : `/${currentVersion}`}/table-design/index/inverted-index`}>
                         <div className="latest-button-title">
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/install/cluster-deployment/standard-deployment.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/install/cluster-deployment/standard-deployment.md
index 42d3549283c..ccb8104f56a 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/install/cluster-deployment/standard-deployment.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/install/cluster-deployment/standard-deployment.md
@@ -225,7 +225,7 @@ Doris 各个实例直接通过网络进行通讯,其正常运行需要网络
 | 实例名称 | 端口名称               | 默认端口 | 通信方向                       | 说明                                                 |
 | -------- | ---------------------- | -------- |----------------------------| ---------------------------------------------------- |
 | BE       | be_port                | 9060     | FE --> BE                  | BE 上 thrift server 的端口,用于接收来自 FE 的请求   |
-| BE       | webserver_port         | 8040     | BE <--> BE                 | BE 上的 http server 的端口                           |
+| BE       | webserver_port         | 8040     | BE <--> BE, Client <--> FE | BE 上的 http server 的端口                           |
 | BE       | heartbeat_service_port | 9050     | FE --> BE                  | BE 上心跳服务端口(thrift),用于接收来自 FE 的心跳  |
 | BE       | brpc_port              | 8060     | FE <--> BE,BE <--> BE      | BE 上的 brpc 端口,用于 BE 之间通讯                  |
 | FE       | http_port              | 8030     | FE <--> FE,Client <--> FE   | FE 上的 http server 端口                             |
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/query-acceleration/high-concurrent-point-query.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/query-acceleration/high-concurrent-point-query.md
index 8299a631b27..49e8ffa7db3 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/query-acceleration/high-concurrent-point-query.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/query-acceleration/high-concurrent-point-query.md
@@ -80,7 +80,7 @@ PROPERTIES (
 5. 开启行存会导致空间膨胀,占用更多的磁盘空间,如果只需要查询部分列,在 Doris 2.1 后建议使用`"row_store_columns"="key,v1,v2"` 类似的方式指定部份列作为行存,查询的时候只查询这部份列,例如
 
     ```sql
-    SELECT key, v1, v2 FROM tbl_point_query WHERE key = 1
+    SELECT `key`, v1, v2 FROM tbl_point_query WHERE key = 1
     ```
 
 ## 使用 PreparedStatement
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/query-data/udf/java-user-defined-function.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/query-data/udf/java-user-defined-function.md
index f1ff590dcdb..803e8a399d4 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/query-data/udf/java-user-defined-function.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/query-data/udf/java-user-defined-function.md
@@ -1,6 +1,6 @@
 ---
 {
-"title": "Java UDF、UDAF、UDTF",
+"title": "Java UDF, UDAF, UDTF",
 "language": "zh-CN"
 }
 ---
@@ -27,9 +27,9 @@ under the License.
 ## 概述
 Java UDF 为用户提供 UDF 编写的 Java 接口,以方便用户使用 Java 语言进行自定义函数的执行。
 Doris 支持使用 JAVA 编写 UDF、UDAF 和 UDTF。下文如无特殊说明,使用 UDF 统称所有用户自定义函数。
-1. Java UDF  是较为常见的自定义标量函数(Scalar Function),即每输入一行数据,就会有一行对应的结果输出,较为常见的有ABS,LENGTH等。值得一提的是对于用户来讲,Hive UDF 是可以直接迁移至 Doris 的。
-2. Java UDAF 即为自定义的聚合函数(Aggregate Function),即在输入多行数据进行聚合后,仅输出一行对应的结果,较为常见的有MIN,MAX,COUNT等。
-3. JAVA UDTF 即为自定义的表函数(Table Function),即每输一行数据,可以产生一行或多行的结果,在 Doris 中需要结合Lateral View 使用可以达到行转列的效果,较为常见的有EXPLODE,EXPLODE_SPLIT等。
+1. Java UDF  是较为常见的自定义标量函数 (Scalar Function),即每输入一行数据,就会有一行对应的结果输出,较为常见的有 ABS,LENGTH 等。值得一提的是对于用户来讲,Hive UDF 是可以直接迁移至 Doris 的。
+2. Java UDAF 即为自定义的聚合函数 (Aggregate Function),即在输入多行数据进行聚合后,仅输出一行对应的结果,较为常见的有 MIN,MAX,COUNT 等。
+3. JAVA UDTF 即为自定义的表函数 (Table Function),即每输一行数据,可以产生一行或多行的结果,在 Doris 中需要结合 Lateral View 使用可以达到行转列的效果,较为常见的有 EXPLODE,EXPLODE_SPLIT 等。
 
 ## 类型对应关系
 
@@ -63,7 +63,7 @@ Doris 支持使用 JAVA 编写 UDF、UDAF 和 UDTF。下文如无特殊说明,
 ## 使用限制
 
 1. 不支持复杂数据类型(HLL,Bitmap)。
-2. 当前允许用户自己指定JVM最大堆大小,配置项是 be.conf 中的 `JAVA_OPTS` 的 -Xmx 部分。默认 1024m,如果需要聚合数据,建议调大一些,增加性能,减少内存溢出风险。
+2. 当前允许用户自己指定 JVM 最大堆大小,配置项是 be.conf 中的 `JAVA_OPTS` 的 -Xmx 部分。默认 1024m,如果需要聚合数据,建议调大一些,增加性能,减少内存溢出风险。
 3. 由于 jvm 加载同名类的问题,不要同时使用多个同名类作为 udf 实现,如果想更新某个同名类的 udf,需要重启 be 重新加载 classpath。
 
 
@@ -71,9 +71,9 @@ Doris 支持使用 JAVA 编写 UDF、UDAF 和 UDTF。下文如无特殊说明,
 本小节主要介绍如何开发一个 Java UDF。在 `samples/doris-demo/java-udf-demo/` 下提供了示例,可供参考,查看点击[这里](https://github.com/apache/doris/tree/master/samples/doris-demo/java-udf-demo)
 
 UDF 的使用与普通的函数方式一致,唯一的区别在于,内置函数的作用域是全局的,而 UDF 的作用域是 DB 内部。
-所以如果当前链接 session 位于数据库DB 内部时,直接使用 UDF 名字会在当前 DB 内部查找对应的 UDF。否则用户需要显示的指定 UDF 的数据库名字,例如 `dbName.funcName`。
+所以如果当前链接 session 位于数据库 DB 内部时,直接使用 UDF 名字会在当前 DB 内部查找对应的 UDF。否则用户需要显示的指定 UDF 的数据库名字,例如 `dbName.funcName`。
 
-接下来的章节介绍实例,均会在`test_table` 上做测试,对应建表如下:
+接下来的章节介绍实例,均会在`test_table` 上做测试,对应建表如下:
 
 ```sql
 CREATE TABLE `test_table` (
@@ -94,7 +94,7 @@ insert into test_table values (6, 666.66, "d,e");
 
 使用 Java 代码编写 UDF,UDF 的主入口必须为 `evaluate` 函数。这一点与 Hive 等其他引擎保持一致。在本示例中,我们编写了 `AddOne` UDF 来完成对整型输入进行加一的操作。
 
-1. 首先编写对应的Java 代码,打包生成JAR 包。
+1. 首先编写对应的 Java 代码,打包生成 JAR 包。
 
     ```java
     public class AddOne extends UDF {
@@ -104,7 +104,7 @@ insert into test_table values (6, 666.66, "d,e");
     }
     ```
 
-2. 在Doris 中注册创建 Java-UDF 函数。 更多语法帮助可参阅 [CREATE FUNCTION](../../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-FUNCTION.md).
+2. 在 Doris 中注册创建 Java-UDF 函数。更多语法帮助可参阅 [CREATE FUNCTION](../../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-FUNCTION.md).
 
     ```sql
     CREATE FUNCTION java_udf_add_one(int) RETURNS int PROPERTIES (
@@ -116,7 +116,7 @@ insert into test_table values (6, 666.66, "d,e");
     ```
 
 3. 用户使用 UDF 必须拥有对应数据库的 `SELECT` 权限。
-    如果想查看注册成功的对应UDF 函数,可以使用[SHOW FUNCTIONS](../../sql-manual/sql-statements/Show-Statements/SHOW-FUNCTIONS.md) 命令。
+    如果想查看注册成功的对应 UDF 函数,可以使用[SHOW FUNCTIONS](../../sql-manual/sql-statements/Show-Statements/SHOW-FUNCTIONS.md) 命令。
 
     ``` sql
     select id,java_udf_add_one(id) from test_table;
@@ -136,182 +136,186 @@ insert into test_table values (6, 666.66, "d,e");
 
 在使用 Java 代码编写 UDAF 时,有一些必须实现的函数 (标记 required) 和一个内部类 State,下面将以具体的实例来说明。
 
-1. 首先编写对应的Java UDAF 代码,打包生成JAR 包。
+1. 首先编写对应的 Java UDAF 代码,打包生成 JAR 包。
 
-    <details><summary> 示例1: SimpleDemo 将实现一个类似的 sum 的简单函数,输入参数 INT,输出参数是 INT</summary> 
+<details>
+<summary> 示例 1: SimpleDemo 将实现一个类似的 sum 的简单函数,输入参数 INT,输出参数是 INT</summary> 
 
-    ```java
-    package org.apache.doris.udf;
+```java
+package org.apache.doris.udf;
 
-    import java.io.DataInputStream;
-    import java.io.DataOutputStream;
-    import java.io.IOException;
-    import java.util.logging.Logger;
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.logging.Logger;
 
-    public class SimpleDemo  {
+public class SimpleDemo  {
 
-        Logger log = Logger.getLogger("SimpleDemo");
+Logger log = Logger.getLogger("SimpleDemo");
 
-        //Need an inner class to store data
-        /*required*/
-        public static class State {
-            /*some variables if you need */
-            public int sum = 0;
-        }
+//Need an inner class to store data
+/*required*/
+public static class State {
+    /*some variables if you need */
+    public int sum = 0;
+}
 
-        /*required*/
-        public State create() {
-            /* here could do some init work if needed */
-            return new State();
-        }
+/*required*/
+public State create() {
+    /* here could do some init work if needed */
+    return new State();
+}
 
-        /*required*/
-        public void destroy(State state) {
-            /* here could do some destroy work if needed */
-        }
+/*required*/
+public void destroy(State state) {
+    /* here could do some destroy work if needed */
+}
 
-        /*Not Required*/
-        public void reset(State state) {
-            /*if you want this udaf function can work with window function.*/
-            /*Must impl this, it will be reset to init state after calculate every window frame*/
-            state.sum = 0;
-        }
+/*Not Required*/
+public void reset(State state) {
+    /*if you want this udaf function can work with window function.*/
+    /*Must impl this, it will be reset to init state after calculate every window frame*/
+    state.sum = 0;
+}
 
-        /*required*/
-        //first argument is State, then other types your input
-        public void add(State state, Integer val) throws Exception {
-            /* here doing update work when input data*/
-            if (val != null) {
-                state.sum += val;
-            }
-        }
+/*required*/
+//first argument is State, then other types your input
+public void add(State state, Integer val) throws Exception {
+    /* here doing update work when input data*/
+    if (val != null) {
+        state.sum += val;
+    }
+}
 
-        /*required*/
-        public void serialize(State state, DataOutputStream out) throws IOException {
-            /* serialize some data into buffer */
-            out.writeInt(state.sum);
-        }
+/*required*/
+public void serialize(State state, DataOutputStream out) throws IOException {
+    /* serialize some data into buffer */
+    out.writeInt(state.sum);
+}
 
-        /*required*/
-        public void deserialize(State state, DataInputStream in) throws IOException {
-            /* deserialize get data from buffer before you put */
-            int val = in.readInt();
-            state.sum = val;
-        }
+/*required*/
+public void deserialize(State state, DataInputStream in) throws IOException {
+    /* deserialize get data from buffer before you put */
+    int val = in.readInt();
+    state.sum = val;
+}
 
-        /*required*/
-        public void merge(State state, State rhs) throws Exception {
-            /* merge data from state */
-            state.sum += rhs.sum;
-        }
+/*required*/
+public void merge(State state, State rhs) throws Exception {
+    /* merge data from state */
+    state.sum += rhs.sum;
+}
 
-        /*required*/
-        //return Type you defined
-        public Integer getValue(State state) throws Exception {
-            /* return finally result */
-            return state.sum;
-        }
-    }
+/*required*/
+//return Type you defined
+public Integer getValue(State state) throws Exception {
+    /* return finally result */
+    return state.sum;
+}
+}
 
-    ```
-    </details>
+```
 
+</details>
 
-    <details><summary> 示例2: MedianUDAF是一个计算中位数的功能,输入类型为(DOUBLE, INT), 输出为DOUBLE </summary>
 
-    ```java
-    package org.apache.doris.udf.demo;
-
-    import java.io.DataInputStream;
-    import java.io.DataOutputStream;
-    import java.math.BigDecimal;
-    import java.util.Arrays;
-    import java.util.logging.Logger;
-
-    /*UDAF 计算中位数*/
-    public class MedianUDAF {
-        Logger log = Logger.getLogger("MedianUDAF");
-
-        //状态存储
-        public static class State {
-            //返回结果的精度
-            int scale = 0;
-            //是否是某一个 tablet 下的某个聚合条件下的数据第一次执行 add 方法
-            boolean isFirst = true;
-            //数据存储
-            public StringBuilder stringBuilder;
-        }
+<details>
+<summary> 示例 2: MedianUDAF 是一个计算中位数的功能,输入类型为 (DOUBLE, INT), 输出为 DOUBLE </summary>
 
-        //状态初始化
-        public State create() {
-            State state = new State();
-            //根据每个 tablet 下的聚合条件需要聚合的数据量大小,预先初始化,增加性能
-            state.stringBuilder = new StringBuilder(1000);
-            return state;
-        }
+```java
+package org.apache.doris.udf.demo;
+
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.math.BigDecimal;
+import java.util.Arrays;
+import java.util.logging.Logger;
+
+/*UDAF 计算中位数*/
+public class MedianUDAF {
+Logger log = Logger.getLogger("MedianUDAF");
+
+//状态存储
+public static class State {
+    //返回结果的精度
+    int scale = 0;
+    //是否是某一个 tablet 下的某个聚合条件下的数据第一次执行 add 方法
+    boolean isFirst = true;
+    //数据存储
+    public StringBuilder stringBuilder;
+}
 
+//状态初始化
+public State create() {
+    State state = new State();
+    //根据每个 tablet 下的聚合条件需要聚合的数据量大小,预先初始化,增加性能
+    state.stringBuilder = new StringBuilder(1000);
+    return state;
+}
 
-        //处理执行单位处理各自 tablet 下的各自聚合条件下的每个数据
-        public void add(State state, Double val, int scale) throws IOException {
-            if (val != null && state.isFirst) {
-                state.stringBuilder.append(scale).append(",").append(val).append(",");
-                state.isFirst = false;
-            } else if (val != null) {
-                state.stringBuilder.append(val).append(",");
-            }
-        }
 
-        //处理数据完需要输出等待聚合
-        public void serialize(State state, DataOutputStream out) throws IOException {
-            //目前暂时只提供 DataOutputStream,如果需要序列化对象可以考虑拼接字符串,转换 json,序列化成字节数组等方式
-            //如果要序列化 State 对象,可能需要自己将 State 内部类实现序列化接口
-            //最终都是要通过 DataOutputStream 传输
-            out.writeUTF(state.stringBuilder.toString());
-        }
+//处理执行单位处理各自 tablet 下的各自聚合条件下的每个数据
+public void add(State state, Double val, int scale) throws IOException {
+    if (val != null && state.isFirst) {
+        state.stringBuilder.append(scale).append(",").append(val).append(",");
+        state.isFirst = false;
+    } else if (val != null) {
+        state.stringBuilder.append(val).append(",");
+    }
+}
 
-        //获取处理数据执行单位输出的数据
-        public void deserialize(State state, DataInputStream in) throws IOException {
-            String string = in.readUTF();
-            state.scale = Integer.parseInt(String.valueOf(string.charAt(0)));
-            StringBuilder stringBuilder = new StringBuilder(string.substring(2));
-            state.stringBuilder = stringBuilder;
-        }
+//处理数据完需要输出等待聚合
+public void serialize(State state, DataOutputStream out) throws IOException {
+    //目前暂时只提供 DataOutputStream,如果需要序列化对象可以考虑拼接字符串,转换 json,序列化成字节数组等方式
+    //如果要序列化 State 对象,可能需要自己将 State 内部类实现序列化接口
+    //最终都是要通过 DataOutputStream 传输
+    out.writeUTF(state.stringBuilder.toString());
+}
 
-        //聚合执行单位按照聚合条件合并某一个键下数据的处理结果 ,每个键第一次合并时,state1 参数是初始化的实例
-        public void merge(State state1, State state2) throws IOException {
-            state1.scale = state2.scale;
-            state1.stringBuilder.append(state2.stringBuilder.toString());
-        }
+//获取处理数据执行单位输出的数据
+public void deserialize(State state, DataInputStream in) throws IOException {
+    String string = in.readUTF();
+    state.scale = Integer.parseInt(String.valueOf(string.charAt(0)));
+    StringBuilder stringBuilder = new StringBuilder(string.substring(2));
+    state.stringBuilder = stringBuilder;
+}
 
-        //对每个键合并后的数据进行并输出最终结果
-        public Double getValue(State state) throws IOException {
-            String[] strings = state.stringBuilder.toString().split(",");
-            double[] doubles = new double[strings.length + 1];
-            doubles = Arrays.stream(strings).mapToDouble(Double::parseDouble).toArray();
+//聚合执行单位按照聚合条件合并某一个键下数据的处理结果 ,每个键第一次合并时,state1 参数是初始化的实例
+public void merge(State state1, State state2) throws IOException {
+    state1.scale = state2.scale;
+    state1.stringBuilder.append(state2.stringBuilder.toString());
+}
 
-            Arrays.sort(doubles);
-            double n = doubles.length - 1;
-            double index = n * 0.5;
+//对每个键合并后的数据进行并输出最终结果
+public Double getValue(State state) throws IOException {
+    String[] strings = state.stringBuilder.toString().split(",");
+    double[] doubles = new double[strings.length + 1];
+    doubles = Arrays.stream(strings).mapToDouble(Double::parseDouble).toArray();
 
-            int low = (int) Math.floor(index);
-            int high = (int) Math.ceil(index);
+    Arrays.sort(doubles);
+    double n = doubles.length - 1;
+    double index = n * 0.5;
 
-            double value = low == high ? (doubles[low] + doubles[high]) * 0.5 : doubles[high];
+    int low = (int) Math.floor(index);
+    int high = (int) Math.ceil(index);
 
-            BigDecimal decimal = new BigDecimal(value);
-            return decimal.setScale(state.scale, BigDecimal.ROUND_HALF_UP).doubleValue();
-        }
+    double value = low == high ? (doubles[low] + doubles[high]) * 0.5 : doubles[high];
 
-        //每个执行单位执行完都会执行
-        public void destroy(State state) {
-        }
+    BigDecimal decimal = new BigDecimal(value);
+    return decimal.setScale(state.scale, BigDecimal.ROUND_HALF_UP).doubleValue();
+}
+
+//每个执行单位执行完都会执行
+public void destroy(State state) {
+}
+
+}
+```
 
-    }
-    ```
 </details>
 
 
-2. 在Doris 中注册创建 Java-UADF 函数。 更多语法帮助可参阅 [CREATE FUNCTION](../../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-FUNCTION.md).
+2. 在 Doris 中注册创建 Java-UADF 函数。更多语法帮助可参阅 [CREATE FUNCTION](../../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-FUNCTION.md).
 
     ```sql
     CREATE AGGREGATE FUNCTION simple_demo(INT) RETURNS INT PROPERTIES (
@@ -322,7 +326,7 @@ insert into test_table values (6, 666.66, "d,e");
     );
     ```
 
-3. 使用 Java-UDAF, 可以分组聚合或者聚合全部结果:
+3. 使用 Java-UDAF, 可以分组聚合或者聚合全部结果:
 
     ```sql
     select simple_demo(id) from test_table group by id;
@@ -348,8 +352,8 @@ insert into test_table values (6, 666.66, "d,e");
 UDTF 自 Doris 3.0 版本开始支持
 :::
 
-1. 首先编写对应的Java UDTF 代码,打包生成JAR 包。
-UDTF 和 UDF 函数一样,需要用户自主实现一个 `evaluate` 方法, 但是 UDTF 函数的返回值必须是 Array 类型。
+1. 首先编写对应的 Java UDTF 代码,打包生成 JAR 包。
+UDTF 和 UDF 函数一样,需要用户自主实现一个 `evaluate` 方法,但是 UDTF 函数的返回值必须是 Array 类型。
 
     ```JAVA
     public class UDTFStringTest {
@@ -363,7 +367,7 @@ UDTF 和 UDF 函数一样,需要用户自主实现一个 `evaluate` 方法,
     }
     ```
 
-2. 在Doris 中注册创建 Java-UDTF 函数。 此时会注册两个UTDF 函数,另外一个是在函数名后面加上`_outer`后缀, 其中带后缀`_outer` 的是针对结果为0行时的特殊处理,具体可查看[OUTER 组合器](../../sql-manual/sql-functions/table-functions/explode-numbers-outer.md)。 
+2. 在 Doris 中注册创建 Java-UDTF 函数。此时会注册两个 UTDF 函数,另外一个是在函数名后面加上`_outer`后缀,其中带后缀`_outer` 的是针对结果为 0 行时的特殊处理,具体可查看[OUTER 组合器](../../sql-manual/sql-functions/table-functions/explode-numbers-outer.md)。 
 更多语法帮助可参阅 [CREATE FUNCTION](../../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-FUNCTION.md).
 
     ```sql
@@ -375,7 +379,7 @@ UDTF 和 UDF 函数一样,需要用户自主实现一个 `evaluate` 方法,
     );
     ```
 
-3. 使用 Java-UDTF, 在Doris 中使用UDTF 需要结合 [Lateral View](../lateral-view.md), 实现行转列的效果 :
+3. 使用 Java-UDTF, 在 Doris 中使用 UDTF 需要结合 [Lateral View](../lateral-view.md), 实现行转列的效果 :
 
     ```sql
     select id, str, e1 from test_table lateral view java_utdf(str,',') tmp as e1;
@@ -397,11 +401,11 @@ UDTF 和 UDF 函数一样,需要用户自主实现一个 `evaluate` 方法,
 当前在 Doris 中,执行一个 UDF 函数,例如 `select udf(col) from table`, 每一个并发 Instance 会加载一次 udf.jar 包,在该 Instance 结束时卸载掉 udf.jar 包。
 
 所以当 udf.jar 文件中需要加载一个几百 MB 的文件时,会因为并发的原因,使得占据的内存急剧增大,容易 OOM。
-或者想使用一个连接池时,这样无法做到仅在static 区域初始化一次。
+或者想使用一个连接池时,这样无法做到仅在 static 区域初始化一次。
 
-这里提供两个解决方案,其中方案二需要Doris 版本在branch-3.0 以上才行。
+这里提供两个解决方案,其中方案二需要 Doris 版本在 branch-3.0 以上才行。
 
-*解决方案1:*
+*解决方案 1:*
 
 是可以将资源加载代码拆分开,单独生成一个 JAR 包文件,然后其他包直接引用该资源 JAR 包。 
 
@@ -453,9 +457,9 @@ public class FunctionUdf {
     jar  -cvf ./FunctionUdf.jar  ./FunctionUdf.class
     ```
 
-3. 由于想让资源 JAR 包被所有的并发引用,所以想让它被JVM 直接加载,可以将它放到指定路径 `be/custom_lib` 下面,BE 服务重启之后就可以随着 JVM 的启动加载进来,因此都会随着服务启动而加载,停止而释放。
+3. 由于想让资源 JAR 包被所有的并发引用,所以想让它被 JVM 直接加载,可以将它放到指定路径 `be/custom_lib` 下面,BE 服务重启之后就可以随着 JVM 的启动加载进来,因此都会随着服务启动而加载,停止而释放。
 
-4. 最后利用 `CREATE FUNCTION` 语句创建一个 UDF 函数, 这样每次卸载仅是FunctionUdf.jar。
+4. 最后利用 `CREATE FUNCTION` 语句创建一个 UDF 函数,这样每次卸载仅是 FunctionUdf.jar。
 
    ```sql
    CREATE FUNCTION java_udf_dict(string) RETURNS string PROPERTIES (
@@ -466,16 +470,16 @@ public class FunctionUdf {
    );
    ```
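Solution 1 relies on a class in `be/custom_lib` being loaded once per BE JVM; a hedged sketch of what such a resource-holder class might look like (class, field, and file names are illustrative, not from the commit):

```java
import java.util.HashMap;
import java.util.Map;

// Lives in the resource JAR placed under be/custom_lib, so the JVM loads it
// once per BE process; every UDF invocation then shares the same static data.
public class DictResource {
    // Initialized a single time, when the class is first referenced.
    public static final Map<String, String> DICT = load();

    private static Map<String, String> load() {
        Map<String, String> m = new HashMap<>();
        // ...read the large dictionary file here (omitted)...
        return m;
    }
}
```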
 
-*解决方案2:* 
+*解决方案 2:* 
 
-BE 全局缓存 JAR 包,自定义过期淘汰时间,在create function 时增加两个属性字段,其中
-static_load: 用于定义是否使用静态cache 加载的方式。
+BE 全局缓存 JAR 包,自定义过期淘汰时间,在 create function 时增加两个属性字段,其中
+static_load: 用于定义是否使用静态 cache 加载的方式。
 
 expiration_time: 用于定义 JAR 包的过期时间,单位为分钟。
 
-若使用静态cache 加载方式,则在第一次调用该UDF 函数时,在初始化之后会将该UDF 的实例缓存起来,在下次调用该UDF 时,首先会在cache 中进行查找,如果没有找到,则会进行相关初始化操作。
+若使用静态 cache 加载方式,则在第一次调用该 UDF 函数时,在初始化之后会将该 UDF 的实例缓存起来,在下次调用该 UDF 时,首先会在 cache 中进行查找,如果没有找到,则会进行相关初始化操作。
 
-并且后台有线程定期检查,如果在配置的过期淘汰时间内,一直没有被调用过,则会从缓存cache 中清理掉。如果被调用时,则会自动更新缓存时间点。
+并且后台有线程定期检查,如果在配置的过期淘汰时间内,一直没有被调用过,则会从缓存 cache 中清理掉。如果被调用时,则会自动更新缓存时间点。
 
 ```java
 public class Print extends UDF {
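    // ...rest of the class as in the docs; fields initialized once here are
    // reused across calls for as long as the instance stays cached...
}
```

A hedged example of registering such a function with the two caching properties described above (the JAR path and class name are placeholders):

```sql
CREATE FUNCTION print_udf(string) RETURNS string PROPERTIES (
    "file" = "file:///path/to/print-udf.jar",
    "symbol" = "org.apache.doris.udf.Print",
    "type" = "JAVA_UDF",
    "static_load" = "true",    -- cache the initialized UDF instance on the BE
    "expiration_time" = "60"   -- evict after 60 minutes without calls
);
```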
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/install/standard-deployment.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/install/standard-deployment.md
index baf1fce552e..66686ad3973 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/install/standard-deployment.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/install/standard-deployment.md
@@ -113,7 +113,7 @@ Doris 各个实例直接通过网络进行通讯。以下表格展示了所有
 | 实例名称 | 端口名称 | 默认端口 | 通讯方向 | 说明 |
 |---|---|---|---| ---|
 | BE | be_port | 9060 | FE --> BE | BE 上 thrift server 的端口,用于接收来自 FE 的请求 |
-| BE | webserver_port | 8040 | BE <--> BE | BE 上的 http server 的端口 |
+| BE | webserver_port | 8040 | BE <--> BE, Client <--> FE | BE 上的 http server 的端口 |
 | BE | heartbeat\_service_port | 9050 | FE --> BE | BE 上心跳服务端口(thrift),用于接收来自 FE 的心跳 |
 | BE | brpc\_port | 8060 | FE <--> BE, BE <--> BE | BE 上的 brpc 端口,用于 BE 之间通讯 |
 | FE | http_port  | 8030 | FE <--> FE,用户 <--> FE |FE 上的 http server 端口 |
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/query-acceleration/high-concurrent-point-query.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/query-acceleration/high-concurrent-point-query.md
index 8299a631b27..49e8ffa7db3 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/query-acceleration/high-concurrent-point-query.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/query-acceleration/high-concurrent-point-query.md
@@ -80,7 +80,7 @@ PROPERTIES (
 5. 开启行存会导致空间膨胀,占用更多的磁盘空间,如果只需要查询部分列,在 Doris 2.1 后建议使用`"row_store_columns"="key,v1,v2"` 类似的方式指定部份列作为行存,查询的时候只查询这部份列,例如
 
     ```sql
-    SELECT key, v1, v2 FROM tbl_point_query WHERE key = 1
+    SELECT `key`, v1, v2 FROM tbl_point_query WHERE key = 1
     ```
 
 ## 使用 PreparedStatement
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/install/cluster-deployment/standard-deployment.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/install/cluster-deployment/standard-deployment.md
index 42d3549283c..026e8f9853b 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/install/cluster-deployment/standard-deployment.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/install/cluster-deployment/standard-deployment.md
@@ -225,7 +225,7 @@ Doris 各个实例直接通过网络进行通讯,其正常运行需要网络
 | 实例名称 | 端口名称               | 默认端口 | 通信方向                       | 说明                                                 |
 | -------- | ---------------------- | -------- |----------------------------| ---------------------------------------------------- |
 | BE       | be_port                | 9060     | FE --> BE                  | BE 上 thrift server 的端口,用于接收来自 FE 的请求   |
-| BE       | webserver_port         | 8040     | BE <--> BE                 | BE 上的 http server 的端口                           |
+| BE       | webserver_port         | 8040     | BE <--> BE, Client <--> FE | BE 上的 http server 的端口                           |
 | BE       | heartbeat_service_port | 9050     | FE --> BE                  | BE 上心跳服务端口(thrift),用于接收来自 FE 的心跳  |
 | BE       | brpc_port              | 8060     | FE <--> BE,BE <--> BE      | BE 上的 brpc 端口,用于 BE 之间通讯                  |
 | FE       | http_port              | 8030     | FE <--> FE,Client <--> FE   | FE 上的 http server 端口                             |
diff --git a/versioned_docs/version-2.1/install/cluster-deployment/standard-deployment.md b/versioned_docs/version-2.1/install/cluster-deployment/standard-deployment.md
index 1d3e7b005df..8486ebd30ef 100644
--- a/versioned_docs/version-2.1/install/cluster-deployment/standard-deployment.md
+++ b/versioned_docs/version-2.1/install/cluster-deployment/standard-deployment.md
@@ -211,7 +211,7 @@ Doris instances communicate directly over the network, requiring the following p
 | Instance | Port                   | Default Port | Communication Direction    | Description                                                  |
 | -------- | ---------------------- | ------------ | -------------------------- | ------------------------------------------------------------ |
 | BE       | be_port                | 9060         | FE --> BE                  | thrift server port on BE, receiving requests from FE         |
-| BE       | webserver_port         | 8040         | BE <--> BE                 | http server port on BE                                       |
+| BE       | webserver_port         | 8040         | BE <--> BE, Client <--> FE | http server port on BE                                       |
 | BE       | heartbeat_service_port | 9050         | FE --> BE                  | heartbeat service port (thrift) on BE, receiving heartbeats from FE |
 | BE       | brpc_port              | 8060         | FE <--> BE,BE <--> BE      | brpc port on BE, used for communication between BEs          |
 | FE       | http_port              | 8030         | FE <--> FE,Client <--> FE  | http server port on FE                                       |
diff --git a/versioned_docs/version-2.1/query-acceleration/high-concurrent-point-query.md b/versioned_docs/version-2.1/query-acceleration/high-concurrent-point-query.md
index f0e417ffdf1..2d626436f5b 100644
--- a/versioned_docs/version-2.1/query-acceleration/high-concurrent-point-query.md
+++ b/versioned_docs/version-2.1/query-acceleration/high-concurrent-point-query.md
@@ -82,7 +82,7 @@ PROPERTIES (
 5. Enabling rowstore may lead to space expansion and occupy more disk space. For scenarios where querying only specific columns is needed, starting from Doris 2.1, it is recommended to use `"row_store_columns"="key,v1,v2"` to specify certain columns for rowstore storage. Queries can then selectively access these columns, for example:
 
    ```sql
-   SELECT key, v1, v2 FROM tbl_point_query WHERE key = 1
+   SELECT `key`, v1, v2 FROM tbl_point_query WHERE key = 1
    ```
 
 ## Using `PreparedStatement`
diff --git a/versioned_docs/version-3.0/install/cluster-deployment/standard-deployment.md b/versioned_docs/version-3.0/install/cluster-deployment/standard-deployment.md
index 92cb607dba5..9e816c0ed38 100644
--- a/versioned_docs/version-3.0/install/cluster-deployment/standard-deployment.md
+++ b/versioned_docs/version-3.0/install/cluster-deployment/standard-deployment.md
@@ -221,7 +221,7 @@ Doris instances communicate directly over the network, requiring the following p
 | Instance | Port                   | Default Port | Communication Direction     | Description                                                  |
 | -------- | ---------------------- | ------------ |-----------------------------| ------------------------------------------------------------ |
 | BE       | be_port                | 9060         | FE --> BE                   | thrift server port on BE, receiving requests from FE         |
-| BE       | webserver_port         | 8040         | BE <--> BE                  | http server port on BE                                       |
+| BE       | webserver_port         | 8040         | BE <--> BE, Client <--> FE  | http server port on BE                                       |
 | BE       | heartbeat_service_port | 9050         | FE --> BE                   | heartbeat service port (thrift) on BE, receiving heartbeats from FE |
 | BE       | brpc_port              | 8060         | FE <--> BE,BE <--> BE       | brpc port on BE, used for communication between BEs          |
 | FE       | http_port              | 8030         | FE <--> FE,Client <--> FE   | http server port on FE                                       |
diff --git a/versioned_docs/version-3.0/query-acceleration/high-concurrent-point-query.md b/versioned_docs/version-3.0/query-acceleration/high-concurrent-point-query.md
index f0e417ffdf1..2d626436f5b 100644
--- a/versioned_docs/version-3.0/query-acceleration/high-concurrent-point-query.md
+++ b/versioned_docs/version-3.0/query-acceleration/high-concurrent-point-query.md
@@ -82,7 +82,7 @@ PROPERTIES (
 5. Enabling rowstore may lead to space expansion and occupy more disk space. For scenarios where querying only specific columns is needed, starting from Doris 2.1, it is recommended to use `"row_store_columns"="key,v1,v2"` to specify certain columns for rowstore storage. Queries can then selectively access these columns, for example:
 
    ```sql
-   SELECT key, v1, v2 FROM tbl_point_query WHERE key = 1
+   SELECT `key`, v1, v2 FROM tbl_point_query WHERE key = 1
    ```
 
 ## Using `PreparedStatement`

