This is an automated email from the ASF dual-hosted git repository.

yiguolei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/master by this push:
     new 28ff349381c [doc](fix)invalid character 。 in en docs (#29355)
28ff349381c is described below

commit 28ff349381cec4a2ca4df930bc088e0e75a8babc
Author: Rohit Satardekar <rohitrs1...@yahoo.com>
AuthorDate: Wed Jan 3 10:29:59 2024 +0530

    [doc](fix)invalid character 。 in en docs (#29355)
    
    Co-authored-by: Rohit Satardekar <rohitrs1...@gmail.com>
---
 docs/en/community/developer-guide/be-clion-dev.md              |  4 ++--
 docs/en/community/developer-guide/be-vscode-dev.md             |  2 +-
 docs/en/community/developer-guide/benchmark-tool.md            |  8 ++++----
 docs/en/docs/admin-manual/cluster-management/load-balancing.md |  4 ++--
 .../docs/admin-manual/http-actions/be/tablet-distribution.md   |  2 +-
 .../en/docs/admin-manual/http-actions/fe/debug-point-action.md |  2 +-
 .../docs/admin-manual/http-actions/fe/query-profile-action.md  |  2 +-
 docs/en/docs/admin-manual/http-actions/fe/statistic-action.md  |  2 +-
 docs/en/docs/admin-manual/privilege-ldap/ldap.md               |  2 +-
 docs/en/docs/advanced/cold-hot-separation.md                   |  2 +-
 .../docs/data-operate/import/import-way/binlog-load-manual.md  |  2 +-
 .../docs/data-operate/import/import-way/spark-load-manual.md   |  2 +-
 docs/en/docs/ecosystem/flink-doris-connector.md                |  2 +-
 docs/en/docs/ecosystem/kyuubi.md                               |  2 +-
 .../ecosystem/udaf/remote-user-defined-aggregation-function.md |  6 +++---
 docs/en/docs/ecosystem/udf/remote-user-defined-function.md     |  8 ++++----
 docs/en/docs/get-starting/quick-start.md                       | 10 +++++-----
 docs/en/docs/install/source-install/compilation-mac.md         |  2 +-
 docs/en/docs/lakehouse/file.md                                 |  2 +-
 .../join-optimization/bucket-shuffle-join.md                   |  4 ++--
 docs/en/docs/query-acceleration/pipeline-execution-engine.md   |  2 +-
 docs/en/docs/releasenotes/release-1.2.2.md                     |  2 +-
 .../sql-manual/sql-functions/bitmap-functions/bitmap-count.md  |  2 +-
 .../Data-Definition-Statements/Create/CREATE-TABLE.md          |  4 ++--
 24 files changed, 40 insertions(+), 40 deletions(-)

diff --git a/docs/en/community/developer-guide/be-clion-dev.md 
b/docs/en/community/developer-guide/be-clion-dev.md
index d7de660e25d..6634f0d314d 100644
--- a/docs/en/community/developer-guide/be-clion-dev.md
+++ b/docs/en/community/developer-guide/be-clion-dev.md
@@ -25,7 +25,7 @@ under the License.
 
 ## Downloading and Compiling Code on Remote Server
 
-1. Download Doris source code on the remote server, such as in the directory 
`/mnt/datadisk0/chenqi/doris`。
+1. Download Doris source code on the remote server, such as in the directory 
`/mnt/datadisk0/chenqi/doris`.
 
 ```
 git clone https://github.com/apache/doris.git
@@ -34,7 +34,7 @@ git clone https://github.com/apache/doris.git
 2. Modify the `env.sh` file located in the root directory of the Doris code on 
the remote server.
 Add the configuration for `DORIS_HOME` at the beginning, for example, 
`DORIS_HOME=/mnt/datadisk0/chenqi/doris.`
 
-3. Execute commands to compile the code. The detailed compilation process 
[compilation-with-ldb-toolchain](https://doris.apache.org/zh-CN/docs/dev/install/source-install/compilation-with-ldb-toolchain)。
+3. Execute commands to compile the code. The detailed compilation process 
[compilation-with-ldb-toolchain](https://doris.apache.org/zh-CN/docs/dev/install/source-install/compilation-with-ldb-toolchain).
 
 ```
 cd /mnt/datadisk0/chenqi/doris
diff --git a/docs/en/community/developer-guide/be-vscode-dev.md 
b/docs/en/community/developer-guide/be-vscode-dev.md
index e7964e2a3f7..7cedc2a0257 100644
--- a/docs/en/community/developer-guide/be-vscode-dev.md
+++ b/docs/en/community/developer-guide/be-vscode-dev.md
@@ -309,7 +309,7 @@ lldb's attach mode is faster than gdb,and the usage is 
similar to gdb. we shou
 }
 ```
 
-It should be noted that this method requires the system `glibc` version to be 
`2.18+`. you can refer [Get VSCode CodeLLDB plugin work on CentOS 
7](https://gist.github.com/JaySon-Huang/63dcc6c011feb5bd6deb1ef0cf1a9b96) to 
make plugin work。
+It should be noted that this method requires the system `glibc` version to be 
`2.18+`. you can refer [Get VSCode CodeLLDB plugin work on CentOS 
7](https://gist.github.com/JaySon-Huang/63dcc6c011feb5bd6deb1ef0cf1a9b96) to 
make plugin work.
 
 ## Debugging core dump files
 
diff --git a/docs/en/community/developer-guide/benchmark-tool.md 
b/docs/en/community/developer-guide/benchmark-tool.md
index a6ed29e12a5..eb8277384a2 100644
--- a/docs/en/community/developer-guide/benchmark-tool.md
+++ b/docs/en/community/developer-guide/benchmark-tool.md
@@ -33,7 +33,7 @@ It can be used to test the performance of some parts of the 
BE storage layer (fo
 
 ## Compilation
 
-1. To ensure that the environment has been able to successfully compile the 
Doris ontology, you can refer to [Installation and 
deployment](/docs/install/source-install/compilation)。
+1. To ensure that the environment has been able to successfully compile the 
Doris ontology, you can refer to [Installation and 
deployment](/docs/install/source-install/compilation).
 
 2. Execute`run-be-ut.sh`
 
@@ -53,9 +53,9 @@ The data set is generated according to the following rules.
 >int: Random in [1,1000000]. 
 
 The data character set of string type is uppercase and lowercase English 
letters, and the length varies according to the type. 
-> char: Length random in [1,8]。
-> varchar: Length random in [1,128]。 
-> string: Length random in [1,100000]。
+> char: Length random in [1,8].
+> varchar: Length random in [1,128]. 
+> string: Length random in [1,100000].
 
 `rows_number` indicates the number of rows of data, the default value is 
`10000`. 
 
diff --git a/docs/en/docs/admin-manual/cluster-management/load-balancing.md 
b/docs/en/docs/admin-manual/cluster-management/load-balancing.md
index b655d57e0da..1933fa4f1a9 100644
--- a/docs/en/docs/admin-manual/cluster-management/load-balancing.md
+++ b/docs/en/docs/admin-manual/cluster-management/load-balancing.md
@@ -294,7 +294,7 @@ Query OK, 94 rows affected (0.079 sec)
  
 Verify the monitoring results: The indicators of the ProxySQL monitoring 
module are stored in the log table of the monitor library.
 The following is the monitoring of whether the connection is normal 
(monitoring of connect indicators):
-Note: There may be many connect_errors, this is because there is an error when 
the monitoring information is not configured. After the configuration, if the 
result of connect_error is NULL, it means normal。
+Note: There may be many connect_errors, this is because there is an error when 
the monitoring information is not configured. After the configuration, if the 
result of connect_error is NULL, it means normal.
 MySQL [(none)]> select * from mysql_server_connect_log;
 
+---------------+------+------------------+-------------------------+---------------+
 | hostname      | port | time_start_us    | connect_success_time_us | 
connect_error |
@@ -330,7 +330,7 @@ MySQL [(none)]> select * from mysql_server_read_only_log;
 Empty set (0.000 sec)
 
 All 3 nodes are in the group with hostgroup_id=10.
-Now, load the modification of the mysql_replication_hostgroups table just now 
to RUNTIME to take effect。
+Now, load the modification of the mysql_replication_hostgroups table just now 
to RUNTIME to take effect.
 MySQL [(none)]> load mysql servers to runtime;
 Query OK, 0 rows affected (0.003 sec)
  
diff --git a/docs/en/docs/admin-manual/http-actions/be/tablet-distribution.md 
b/docs/en/docs/admin-manual/http-actions/be/tablet-distribution.md
index 90cbe542444..d2f3c4ef896 100644
--- a/docs/en/docs/admin-manual/http-actions/be/tablet-distribution.md
+++ b/docs/en/docs/admin-manual/http-actions/be/tablet-distribution.md
@@ -40,7 +40,7 @@ Get the distribution of tablets under each partition between 
different disks on
     only supports `partition`
 
 * `partition_id`
-    ID of the specified partition,Optional with default all partition。
+    ID of the specified partition,Optional with default all partition.
 
 ## Request body
 
diff --git a/docs/en/docs/admin-manual/http-actions/fe/debug-point-action.md 
b/docs/en/docs/admin-manual/http-actions/fe/debug-point-action.md
index 84ad9bf324a..4bb59226b5c 100644
--- a/docs/en/docs/admin-manual/http-actions/fe/debug-point-action.md
+++ b/docs/en/docs/admin-manual/http-actions/fe/debug-point-action.md
@@ -239,7 +239,7 @@ None
 ### Examples
 
 
-Disable debug point `foo`。
+Disable debug point `foo`.
        
        
 ```
diff --git a/docs/en/docs/admin-manual/http-actions/fe/query-profile-action.md 
b/docs/en/docs/admin-manual/http-actions/fe/query-profile-action.md
index 095e88e2477..b670a0a5d00 100644
--- a/docs/en/docs/admin-manual/http-actions/fe/query-profile-action.md
+++ b/docs/en/docs/admin-manual/http-actions/fe/query-profile-action.md
@@ -100,7 +100,7 @@ Gets information about select queries for all fe nodes in 
the cluster.
 
 <version since="1.2">
 
-Admin 和 Root 用户可以查看所有 Query。普通用户仅能查看自己发送的 Query。
+Admin 和 Root 用户可以查看所有 Query.普通用户仅能查看自己发送的 Query.
 
 </version>
 
diff --git a/docs/en/docs/admin-manual/http-actions/fe/statistic-action.md 
b/docs/en/docs/admin-manual/http-actions/fe/statistic-action.md
index af80ed78005..3e1d2872f6f 100644
--- a/docs/en/docs/admin-manual/http-actions/fe/statistic-action.md
+++ b/docs/en/docs/admin-manual/http-actions/fe/statistic-action.md
@@ -32,7 +32,7 @@ under the License.
 
 ## Description
 
-获取集群统计信息、库表数量等。
+获取集群统计信息、库表数量等.
     
 ## Path parameters
 
diff --git a/docs/en/docs/admin-manual/privilege-ldap/ldap.md 
b/docs/en/docs/admin-manual/privilege-ldap/ldap.md
index 21c65427fea..30cd57744ac 100644
--- a/docs/en/docs/admin-manual/privilege-ldap/ldap.md
+++ b/docs/en/docs/admin-manual/privilege-ldap/ldap.md
@@ -90,7 +90,7 @@ You need to configure the LDAP basic information in the 
fe/conf/ldap.conf file,
   For example, if you use the LDAP user node uid attribute as the username to 
log into Doris, you can configure it as:    
   ldap_user_filter = (&(uid={login}));  
   This item can be configured using the LDAP user mailbox prefix as the user 
name:   
-  ldap_user_filter = (&(mail={login}@baidu.com))。
+  ldap_user_filter = (&(mail={login}@baidu.com)).
 
 * ldap_group_basedn = ou=group,dc=domain,dc=com
   base dn when Doris searches for group information in LDAP. if this item is 
not configured, LDAP group authorization will not be enabled. Same as ldap_ 
User_ Similar to basedn, it limits the scope of Doris searching for groups.
diff --git a/docs/en/docs/advanced/cold-hot-separation.md 
b/docs/en/docs/advanced/cold-hot-separation.md
index fadd0ed8c9c..179f0c43e72 100644
--- a/docs/en/docs/advanced/cold-hot-separation.md
+++ b/docs/en/docs/advanced/cold-hot-separation.md
@@ -32,7 +32,7 @@ A big usage scenario in the future is similar to the es log 
storage. In the log
 1. The price of ordinary cloud disks of cloud manufacturers is higher than 
that of object storage
 2. In the actual online use of the doris cluster, the utilization rate of 
ordinary cloud disks cannot reach 100%
 3. Cloud disk is not paid on demand, but object storage can be paid on demand
-4. High availability based on ordinary cloud disks requires multiple replicas, 
and a replica migration is required for a replica exception. This problem does 
not exist when data is placed on the object store, because the object store is 
shared。
+4. High availability based on ordinary cloud disks requires multiple replicas, 
and a replica migration is required for a replica exception. This problem does 
not exist when data is placed on the object store, because the object store is 
shared.
 
 ## Solution
 Set the freeze time on the partition level to indicate how long the partition 
will be frozen, and define the location of remote storage stored after the 
freeze. On the be, the daemon thread will periodically determine whether the 
table needs to be frozen. If it does, it will upload the data to s3.
diff --git a/docs/en/docs/data-operate/import/import-way/binlog-load-manual.md 
b/docs/en/docs/data-operate/import/import-way/binlog-load-manual.md
index 061bfff950c..717fed6fe34 100644
--- a/docs/en/docs/data-operate/import/import-way/binlog-load-manual.md
+++ b/docs/en/docs/data-operate/import/import-way/binlog-load-manual.md
@@ -522,7 +522,7 @@ The following configuration belongs to the system level 
configuration of SyncJob
        
 * `max_bytes_sync_commit`
 
-       The maximum size of the data when the transaction is committed. If the 
data size received by Fe is larger than it, it will immediately commit the 
transaction and send the accumulated data. The default value is 64MB. If you 
want to modify this configuration, please ensure that this value is greater 
than the product of `canal.instance.memory.buffer.size` and 
`canal.instance.memory.buffer.mmemunit` on the canal side (16MB by default) and 
`min_bytes_sync_commit`。
+       The maximum size of the data when the transaction is committed. If the 
data size received by Fe is larger than it, it will immediately commit the 
transaction and send the accumulated data. The default value is 64MB. If you 
want to modify this configuration, please ensure that this value is greater 
than the product of `canal.instance.memory.buffer.size` and 
`canal.instance.memory.buffer.mmemunit` on the canal side (16MB by default) and 
`min_bytes_sync_commit`.
        
 * `max_sync_task_threads_num`
 
diff --git a/docs/en/docs/data-operate/import/import-way/spark-load-manual.md 
b/docs/en/docs/data-operate/import/import-way/spark-load-manual.md
index c3c9a7dbb0f..312e70c686b 100644
--- a/docs/en/docs/data-operate/import/import-way/spark-load-manual.md
+++ b/docs/en/docs/data-operate/import/import-way/spark-load-manual.md
@@ -281,7 +281,7 @@ PROPERTIES
 
 If Spark load accesses Hadoop cluster resources with Kerberos authentication, 
we only need to specify the following parameters when creating Spark resources:
 
-- `spark.hadoop.hadoop.security.authentication` Specify the authentication 
method as Kerberos for Yarn。
+- `spark.hadoop.hadoop.security.authentication` Specify the authentication 
method as Kerberos for Yarn.
 - `spark.hadoop.yarn.resourcemanager.principal` Specify the principal of 
kerberos for Yarn.
 - `spark.hadoop.yarn.resourcemanager.keytab` Specify the path to the keytab 
file of kerberos for Yarn. The file must be an absolute path to a file on the 
server where the frontend process is located. And can be accessed by the 
frontend process.
 - `broker.hadoop.security.authentication`: Specify the authentication method 
as kerberos.
diff --git a/docs/en/docs/ecosystem/flink-doris-connector.md 
b/docs/en/docs/ecosystem/flink-doris-connector.md
index 672d96d5dea..737f303f67e 100644
--- a/docs/en/docs/ecosystem/flink-doris-connector.md
+++ b/docs/en/docs/ecosystem/flink-doris-connector.md
@@ -761,4 +761,4 @@ This problem is mainly caused by the conditional 
varchar/string type, which need
 
 15. **Failed to connect to backend: http://host:webserver_port, and Be is 
still alive**
 
-The issue may have occurred due to configuring the IP address of `be`, which 
is not reachable by the external Flink cluster.This is mainly because when 
connecting to `fe`, the address of `be` is resolved through fe. For instance, 
if you add a be address as '127.0.0.1', the be address obtained by the Flink 
cluster through fe will be '127.0.0.1:webserver_port', and Flink will connect 
to that address. When this issue arises, you can resolve it by adding the 
actual corresponding external IP  [...]
+The issue may have occurred due to configuring the IP address of `be`, which 
is not reachable by the external Flink cluster.This is mainly because when 
connecting to `fe`, the address of `be` is resolved through fe. For instance, 
if you add a be address as '127.0.0.1', the be address obtained by the Flink 
cluster through fe will be '127.0.0.1:webserver_port', and Flink will connect 
to that address. When this issue arises, you can resolve it by adding the 
actual corresponding external IP  [...]
diff --git a/docs/en/docs/ecosystem/kyuubi.md b/docs/en/docs/ecosystem/kyuubi.md
index 10a2e102b51..76ac1dd191c 100644
--- a/docs/en/docs/ecosystem/kyuubi.md
+++ b/docs/en/docs/ecosystem/kyuubi.md
@@ -44,7 +44,7 @@ unified authentication, engine lifecycle management, etc.
 
 Download Apache Kyuubi from <https://kyuubi.apache.org/zh/releases.html>
 
-Get Apache Kyuubi 1.6.0 or above and extract it to folder。
+Get Apache Kyuubi 1.6.0 or above and extract it to folder.
 
 ### Config Doris as Kyuubi data source
 
diff --git 
a/docs/en/docs/ecosystem/udaf/remote-user-defined-aggregation-function.md 
b/docs/en/docs/ecosystem/udaf/remote-user-defined-aggregation-function.md
index 88a0382e48c..ab38d397cfc 100644
--- a/docs/en/docs/ecosystem/udaf/remote-user-defined-aggregation-function.md
+++ b/docs/en/docs/ecosystem/udaf/remote-user-defined-aggregation-function.md
@@ -81,9 +81,9 @@ PROPERTIES (["key"="value"][,...])
 ```
 Instructions:
 
-1. PROPERTIES中`symbol`Represents the name of the method passed by the RPC 
call, which must be set。
-2. PROPERTIES中`object_file`Represents the RPC service address. Currently, a 
single address and a cluster address in BRPC-compatible format are supported. 
Refer to the cluster connection mode[Format 
specification](https://github.com/apache/incubator-brpc/blob/master/docs/cn/client.md#%E8%BF%9E%E6%8E%A5%E6%9C%8D%E5%8A%A1%E9%9B%86%E7%BE%A4)。
-3. PROPERTIES中`type`Indicates the UDAF call type, which is Native by default. 
Rpc is transmitted when Rpc UDAF is used。
+1. PROPERTIES中`symbol`Represents the name of the method passed by the RPC 
call, which must be set.
+2. PROPERTIES中`object_file`Represents the RPC service address. Currently, a 
single address and a cluster address in BRPC-compatible format are supported. 
Refer to the cluster connection mode[Format 
specification](https://github.com/apache/incubator-brpc/blob/master/docs/cn/client.md#%E8%BF%9E%E6%8E%A5%E6%9C%8D%E5%8A%A1%E9%9B%86%E7%BE%A4).
+3. PROPERTIES中`type`Indicates the UDAF call type, which is Native by default. 
Rpc is transmitted when Rpc UDAF is used.
 
 Sample:
 ```sql
diff --git a/docs/en/docs/ecosystem/udf/remote-user-defined-function.md 
b/docs/en/docs/ecosystem/udf/remote-user-defined-function.md
index 5ccd362728e..f73ef30f52d 100644
--- a/docs/en/docs/ecosystem/udf/remote-user-defined-function.md
+++ b/docs/en/docs/ecosystem/udf/remote-user-defined-function.md
@@ -81,10 +81,10 @@ PROPERTIES (["key"="value"][,...])
 ```
 Instructions:
 
-1. PROPERTIES中`symbol`Represents the name of the method passed by the RPC 
call, which must be set。
-2. PROPERTIES中`object_file`Represents the RPC service address. Currently, a 
single address and a cluster address in BRPC-compatible format are supported. 
Refer to the cluster connection mode[Format 
specification](https://github.com/apache/incubator-brpc/blob/master/docs/cn/client.md#%E8%BF%9E%E6%8E%A5%E6%9C%8D%E5%8A%A1%E9%9B%86%E7%BE%A4)。
-3. PROPERTIES中`type`Indicates the UDF call type, which is Native by default. 
Rpc is transmitted when Rpc UDF is used。
-4. name: A function belongs to a DB and name is of the 
form`dbName`.`funcName`. When `dbName` is not explicitly specified, the db of 
the current session is used`dbName`。
+1. PROPERTIES中`symbol`Represents the name of the method passed by the RPC 
call, which must be set.
+2. PROPERTIES中`object_file`Represents the RPC service address. Currently, a 
single address and a cluster address in BRPC-compatible format are supported. 
Refer to the cluster connection mode[Format 
specification](https://github.com/apache/incubator-brpc/blob/master/docs/cn/client.md#%E8%BF%9E%E6%8E%A5%E6%9C%8D%E5%8A%A1%E9%9B%86%E7%BE%A4).
+3. PROPERTIES中`type`Indicates the UDF call type, which is Native by default. 
Rpc is transmitted when Rpc UDF is used.
+4. name: A function belongs to a DB and name is of the 
form`dbName`.`funcName`. When `dbName` is not explicitly specified, the db of 
the current session is used`dbName`.
 
 Sample:
 ```sql
diff --git a/docs/en/docs/get-starting/quick-start.md 
b/docs/en/docs/get-starting/quick-start.md
index db145540cc7..1a9156f98ce 100644
--- a/docs/en/docs/get-starting/quick-start.md
+++ b/docs/en/docs/get-starting/quick-start.md
@@ -173,14 +173,14 @@ Next, connect to Doris through `mysql` client, mysql 
supports five SSL modes:
 
 3. `mysql --ssl-mode=REQUIRED -uroot -P9030 -h127.0.0.1`, force the use of SSL 
encrypted connections.
 
-4.`mysql --ssl-mode=VERIFY_CA --ssl-ca=ca.pem -uroot -P9030 -h127.0.0.1`, 
force the use of SSL encrypted connection and verify the validity of the 
server's identity by specifying the CA certificate。
+4.`mysql --ssl-mode=VERIFY_CA --ssl-ca=ca.pem -uroot -P9030 -h127.0.0.1`, 
force the use of SSL encrypted connection and verify the validity of the 
server's identity by specifying the CA certificate.
 
-5.`mysql --ssl-mode=VERIFY_CA --ssl-ca=ca.pem --ssl-cert=client-cert.pem 
--ssl-key=client-key.pem -uroot -P9030 -h127.0.0.1`, force the use of SSL 
encrypted connection, two-way ssl。
+5.`mysql --ssl-mode=VERIFY_CA --ssl-ca=ca.pem --ssl-cert=client-cert.pem 
--ssl-key=client-key.pem -uroot -P9030 -h127.0.0.1`, force the use of SSL 
encrypted connection, two-way ssl.
 
 >Note:
->`--ssl-mode` parameter is introduced by mysql5.7.11 version, please refer to 
[here](https://dev.mysql.com/doc/connector-j/8.0/en/connector-j-connp-props-security.html)
 for mysql client version lower than this version。
+>`--ssl-mode` parameter is introduced by mysql5.7.11 version, please refer to 
[here](https://dev.mysql.com/doc/connector-j/8.0/en/connector-j-connp-props-security.html)
 for mysql client version lower than this version.
 
-Doris needs a key certificate file to verify the SSL encrypted connection. The 
default key certificate file is located at 
`Doris/fe/mysql_ssl_default_certificate/`. For the generation of the key 
certificate file, please refer to [Key Certificate 
Configuration](../admin-manual/certificate.md)。
+Doris needs a key certificate file to verify the SSL encrypted connection. The 
default key certificate file is located at 
`Doris/fe/mysql_ssl_default_certificate/`. For the generation of the key 
certificate file, please refer to [Key Certificate 
Configuration](../admin-manual/certificate.md).
 
 #### Stop FE
 
@@ -347,7 +347,7 @@ Save the above data into `test.csv` file.
 
 4. Import data
 
-Here we import the data saved to the file above into the table we just created 
via Stream load。
+Here we import the data saved to the file above into the table we just created 
via Stream load.
 
 ```
 curl  --location-trusted -u root: -T test.csv -H "column_separator:," 
http://127.0.0.1:8030/api/demo/example_tbl/_stream_load
diff --git a/docs/en/docs/install/source-install/compilation-mac.md 
b/docs/en/docs/install/source-install/compilation-mac.md
index 527332ccf34..159d78684ef 100644
--- a/docs/en/docs/install/source-install/compilation-mac.md
+++ b/docs/en/docs/install/source-install/compilation-mac.md
@@ -77,7 +77,7 @@ The version of jdk installed using brew is 11, because on 
macOS, the arm64 versi
 
 ## Start-up
 
-1. Set `file descriptors` (_**NOTICE: If you have closed the current session, 
you need to set this variable again**_)。
+1. Set `file descriptors` (_**NOTICE: If you have closed the current session, 
you need to set this variable again**_).
     ```shell
     ulimit -n 65536
     ```
diff --git a/docs/en/docs/lakehouse/file.md b/docs/en/docs/lakehouse/file.md
index cc78338ee42..a5686616794 100644
--- a/docs/en/docs/lakehouse/file.md
+++ b/docs/en/docs/lakehouse/file.md
@@ -148,7 +148,7 @@ LIMIT 5;
 
 You can put the Table Value Function anywhere that you used to put Table in 
the SQL, such as in the WITH or FROM clause in CTE. In this way, you can treat 
the file as a normal table and conduct analysis conveniently.
 
-你也可以用过 `CREATE VIEW` 语句为 Table Value Function 创建一个逻辑视图。这样,你可以想其他视图一样,对这个 Table 
Value Function 进行访问、权限管理等操作,也可以让其他用户访问这个 Table Value Function。
+你也可以用过 `CREATE VIEW` 语句为 Table Value Function 创建一个逻辑视图.这样,你可以想其他视图一样,对这个 Table 
Value Function 进行访问、权限管理等操作,也可以让其他用户访问这个 Table Value Function.
 You can also create a logic view by using `CREATE VIEW` statement for a Table 
Value Function. So that you can query this view, grant priv on this view or 
allow other user to access this Table Value Function.
 
 ```
diff --git 
a/docs/en/docs/query-acceleration/join-optimization/bucket-shuffle-join.md 
b/docs/en/docs/query-acceleration/join-optimization/bucket-shuffle-join.md
index 4a1775d221b..35870d91351 100644
--- a/docs/en/docs/query-acceleration/join-optimization/bucket-shuffle-join.md
+++ b/docs/en/docs/query-acceleration/join-optimization/bucket-shuffle-join.md
@@ -28,7 +28,7 @@ under the License.
 
 Bucket Shuffle Join is a new function officially added in Doris 0.14. The 
purpose is to provide local optimization for some join queries to reduce the 
time-consuming of data transmission between nodes and speed up the query.
 
-It's design, implementation can be referred to [ISSUE 
4394](https://github.com/apache/incubator-doris/issues/4394)。
+It's design, implementation can be referred to [ISSUE 
4394](https://github.com/apache/incubator-doris/issues/4394).
 
 ## Noun Interpretation
 
@@ -91,7 +91,7 @@ You can use the `explain` command to check whether the join 
is a Bucket Shuffle
 |   |  equal join conjunct: `test`.`k1` = `baseall`.`k1`                       
                  
 ```
 
-The join type indicates that the join method to be used is:`BUCKET_SHUFFLE`。
+The join type indicates that the join method to be used is:`BUCKET_SHUFFLE`.
 
 ## Planning rules of Bucket Shuffle Join
 
diff --git a/docs/en/docs/query-acceleration/pipeline-execution-engine.md 
b/docs/en/docs/query-acceleration/pipeline-execution-engine.md
index c862e4a380f..33a9f7d85b5 100644
--- a/docs/en/docs/query-acceleration/pipeline-execution-engine.md
+++ b/docs/en/docs/query-acceleration/pipeline-execution-engine.md
@@ -32,7 +32,7 @@ under the License.
 
 Pipeline execution engine is an experimental feature added by Doris in version 
2.0. The goal is to replace the current execution engine of Doris's volcano 
model, fully release the computing power of multi-core CPUs, and limit the 
number of Doris's query threads to solve the problem of Doris's execution 
thread bloat.
 
-Its specific design, implementation and effects can be found in 
[DSIP-027]([DSIP-027: Support Pipeline Exec Engine - DORIS - Apache Software 
Foundation](https://cwiki.apache.org/confluence/display/DORIS/DSIP-027%3A+Support+Pipeline+Exec+Engine))。
+Its specific design, implementation and effects can be found in 
[DSIP-027]([DSIP-027: Support Pipeline Exec Engine - DORIS - Apache Software 
Foundation](https://cwiki.apache.org/confluence/display/DORIS/DSIP-027%3A+Support+Pipeline+Exec+Engine)).
 
 ## Principle
 
diff --git a/docs/en/docs/releasenotes/release-1.2.2.md 
b/docs/en/docs/releasenotes/release-1.2.2.md
index 53dae650d4d..08fd22571a0 100644
--- a/docs/en/docs/releasenotes/release-1.2.2.md
+++ b/docs/en/docs/releasenotes/release-1.2.2.md
@@ -76,7 +76,7 @@ Reference: 
[https://doris.apache.org/docs/dev/advanced/variables](https://doris.
 
 - Iceberg Catalog Support Hive Metastore and Rest Catalog type.
 
-- ES Catalog support _id column mapping。
+- ES Catalog support _id column mapping.
 
 - Optimize Iceberg V2 read performance with large number of delete rows.
 
diff --git 
a/docs/en/docs/sql-manual/sql-functions/bitmap-functions/bitmap-count.md 
b/docs/en/docs/sql-manual/sql-functions/bitmap-functions/bitmap-count.md
index e40be420112..fb90c995890 100644
--- a/docs/en/docs/sql-manual/sql-functions/bitmap-functions/bitmap-count.md
+++ b/docs/en/docs/sql-manual/sql-functions/bitmap-functions/bitmap-count.md
@@ -30,7 +30,7 @@ under the License.
 
 `BITMAP BITMAP_COUNT(BITMAP lhs)`
 
-Returns the number of input bitmaps。
+Returns the number of input bitmaps.
 
 ### example
 
diff --git 
a/docs/en/docs/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md
 
b/docs/en/docs/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md
index df5030462ee..d2d68aa292d 100644
--- 
a/docs/en/docs/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md
+++ 
b/docs/en/docs/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-TABLE.md
@@ -125,7 +125,7 @@ Column definition list:
 
         Default value of the column. If the load data does not specify a value 
for this column, the system will assign a default value to this column.
         
-        The syntax is: `default default_value`。
+        The syntax is: `default default_value`.
         
         Currently, the default value supports two forms:
 
@@ -260,7 +260,7 @@ Partition information supports three writing methods:
 </version>
 
 
-4. MULTI RANGE:Multi build integer RANGE partitions,Define the left closed and 
right open interval of the zone, and step size。
+4. MULTI RANGE:Multi build integer RANGE partitions,Define the left closed and 
right open interval of the zone, and step size.
 
     ```
     PARTITION BY RANGE(int_col)


