This is an automated email from the ASF dual-hosted git repository.

zykkk pushed a commit to branch branch-2.0
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/branch-2.0 by this push:
     new 54111934778 [docs](docs) Update Files of Branch-2.0 (#26737)
54111934778 is described below

commit 5411193477863d67f538cd631c4008c0fa2023c5
Author: KassieZ <[email protected]>
AuthorDate: Fri Nov 10 14:24:02 2023 +0800

    [docs](docs) Update Files of Branch-2.0 (#26737)
---
 .../maint-monitor/tablet-restore-tool.md           | 136 ++++++++++
 .../ecosystem/udf/native-user-defined-function.md  | 271 ++++++++++++++++++++
 .../lakehouse/multi-catalog/faq-multi-catalog.md   | 235 +++++++++++++++++
 .../docs/lakehouse/multi-catalog/multi-catalog.md  |   4 +-
 .../enable-feature.md                              |  38 +++
 docs/sidebars.json                                 |  16 +-
 .../maint-monitor/tablet-restore-tool.md           | 142 +++++++++++
 .../docs/ecosystem/native-user-defined-function.md | 278 +++++++++++++++++++++
 .../lakehouse/multi-catalog/faq-multi-catalog.md   | 238 ++++++++++++++++++
 .../window-functions/window-function-avg.md        |   2 +-
 .../window-functions/window-function-max.md        |   2 +-
 .../window-functions/window-function-ntile.md      |   2 +-
 .../enable-feature.md                              |  38 +++
 13 files changed, 1392 insertions(+), 10 deletions(-)

diff --git a/docs/en/docs/admin-manual/maint-monitor/tablet-restore-tool.md 
b/docs/en/docs/admin-manual/maint-monitor/tablet-restore-tool.md
new file mode 100644
index 00000000000..a33c1dc5ca5
--- /dev/null
+++ b/docs/en/docs/admin-manual/maint-monitor/tablet-restore-tool.md
@@ -0,0 +1,136 @@
+---
+{
+    "title": "Tablet Restore Tool",
+    "language": "en"
+}
+---
+
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Tablet Restore Tool
+
+## Restore data from BE Recycle Bin
+
+While using Doris, users may find that valid tablets (including metadata and data) have been deleted due to misoperations or online bugs. To prevent data loss in these abnormal situations, Doris provides a recycle bin mechanism to protect user data. Tablet data deleted by users is not removed directly, but is kept in the recycle bin for a period of time, after which a periodic cleaning mechanism deletes the expired data. The data in the recycle bin includes the tablet's data files (.dat), index files (.idx), and metadata files (.hdr). The data is stored in a path of the following format:
+
+```
+/root_path/trash/time_label/tablet_id/schema_hash/
+```
+
+* `root_path`: a data root directory corresponding to the BE node.
+* `trash`: The directory of the recycle bin.
+* `time_label`: Time label used as a subdirectory, which ensures the uniqueness of data directories in the recycle bin and also records the time of the data.
+
+When a user finds that online data has been deleted by mistake, the deleted tablet needs to be recovered from the recycle bin. This is what the tablet data recovery function is for.
+
+BE provides an HTTP interface and the `restore_tablet_tool.sh` script for this purpose, supporting both single-tablet operation (single mode) and batch operation (batch mode).
+
+* In single mode, the data of a single tablet can be recovered.
+* In batch mode, the data of multiple tablets can be recovered in one run.
+
+### Operation
+
+#### single mode
+
+1. http request method
+
+    BE provides an HTTP interface for single-tablet data recovery. The interface is as follows:
+    
+    ```
+    curl -X POST "http://be_host:be_webserver_port/api/restore_tablet?tablet_id=11111\&schema_hash=12345"
+    ```
+    
+    The successful results are as follows:
+    
+    ```
+    {"status": "Success", "msg": "OK"}
+    ```
+    
+    If it fails, the corresponding failure reason will be returned. One 
possible result is as follows:
+    
+    ```
+    {"status": "Failed", "msg": "create link path failed"}
+    ```
+
+2. Script mode
+
+    The `restore_tablet_tool.sh` script can also be used to recover the data of a single tablet.
+    
+    ```
+    sh tools/restore_tablet_tool.sh -b "http://127.0.0.1:8040" -t 12345 -s 11111
+    sh tools/restore_tablet_tool.sh --backend "http://127.0.0.1:8040" --tablet_id 12345 --schema_hash 11111
+    ```
+
+#### batch mode
+
+Batch mode is used to recover the data of multiple tablets.
+
+To use it, put the tablet ids and schema hashes to be restored in a file in advance, in comma-separated format, one tablet per line.
+
+The format is as follows:
+
+```
+12345,11111
+12346,11111
+12347,11111
+```
+
+Then perform the recovery with the following command (assuming the file name 
is: `tablets.txt`):
+
+```
+sh restore_tablet_tool.sh -b "http://127.0.0.1:8040" -f tablets.txt
+sh restore_tablet_tool.sh --backend "http://127.0.0.1:8040" --file tablets.txt
+```
+
+## Repair missing or damaged Tablet
+
+In some very special circumstances, such as code bugs or human misoperation, all replicas of some tablets may be lost. In this case, the data has essentially been lost. However, in some scenarios, the business still wants queries to run without errors even when data is lost, so as to reduce the impact perceived by users. In this case, we can fill the missing replicas with blank Tablets to ensure that queries can be executed normally.
+
+**Note: This operation is only used to avoid errors caused by the inability to find a queryable replica. It cannot recover data that has already been substantially lost.**
+
+1. View Master FE log `fe.log`
+
+    If data has been lost, a log entry similar to the following will appear:
+    
+    ```
+    backend [10001] invalid situation. tablet[20000] has few replica[1], 
replica num setting is [3]
+    ```
+
+    This log indicates that all replicas of tablet 20000 have been damaged or 
lost.
+    
+2. Use blank replicas to fill in the missing replicas
+
+    After confirming that the data cannot be recovered, you can execute the 
following command to generate blank replicas.
+    
+    ```
+    ADMIN SET FRONTEND CONFIG ("recover_with_empty_tablet" = "true");
+    ```
+
+    * Note: You can first check whether the current version supports this 
parameter through the `ADMIN SHOW FRONTEND CONFIG;` command.
+
+3. A few minutes after the setup is complete, you should see the following log 
in the Master FE log `fe.log`:
+
+    ```
+    tablet 20000 has only one replica 20001 on backend 10001 and it is lost. 
create an empty replica to recover it.
+    ```
+
+    The log indicates that the system has created a blank tablet to fill in 
the missing replica.
+    
+4. Run queries to check whether the repair has succeeded.
\ No newline at end of file
diff --git a/docs/en/docs/ecosystem/udf/native-user-defined-function.md 
b/docs/en/docs/ecosystem/udf/native-user-defined-function.md
new file mode 100644
index 00000000000..abb3bf34892
--- /dev/null
+++ b/docs/en/docs/ecosystem/udf/native-user-defined-function.md
@@ -0,0 +1,271 @@
+---
+{
+    "title": "Native User Defined Function",
+    "language": "en"
+}
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements. See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership. The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License. You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied. See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Native User Defined Function
+UDF is mainly suitable for scenarios where Doris does not provide the analytical capabilities that users need. Users can implement custom functions according to their own needs and register them with Doris through the UDF framework, extending Doris's capabilities and solving their analysis needs.
+
+There are two types of analysis requirements that UDF can meet: UDF and UDAF. 
UDF in this article refers to both.
+
+1. UDF: User-defined function. This kind of function operates on a single row and outputs a single-row result. When users use a UDF in a query, each row of data eventually appears in the result set. Typical UDFs are string operations such as concat().
+2. UDAF: User-defined aggregation function. This kind of function operates on multiple rows and outputs a single-row result. When users use a UDAF in a query, each group of data after grouping finally yields one value in the result set. A typical UDAF is the aggregation operation sum(). Generally speaking, a UDAF is used together with group by.
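+
+For intuition, the built-in analogues of the two kinds behave as follows (this is only an illustrative sketch, using a hypothetical table `t` with columns `k1`, `k2`, `v1`):
+
+```sql
+-- UDF style: one output row per input row
+SELECT concat(k1, '-', k2) FROM t;
+
+-- UDAF style: one output row per group
+SELECT k1, sum(v1) FROM t GROUP BY k1;
+```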
+
+This document mainly describes how to write a custom UDF and how to use it in Doris.
+
+If you use the UDF feature to extend Doris's analytical functions and want to contribute your UDFs back to the Doris community for other users, please see the document [Contribute UDF](./contribute-udf.md).
+
+## Writing UDF functions
+
+Before using UDF, users need to write their own UDF functions under Doris's UDF framework. A simple UDF demo can be found in `contrib/udf/src/udf_samples/udf_sample.h|cpp`.
+
+Writing a UDF function requires the following steps.
+
+### Writing functions
+
+Create the corresponding header file and CPP file, and implement the logic you need in the CPP file. The format of the implementation function in the CPP file must correspond to the UDF declaration.
+
+Users can put their own source code in a folder. Taking udf_sample as an 
example, the directory structure is as follows:
+
+```
+└── udf_samples
+  ├── uda_sample.cpp
+  ├── uda_sample.h
+  ├── udf_sample.cpp
+  └── udf_sample.h
+```
+
+#### Non-variable parameters
+
+For UDFs with non-variable parameters, the correspondence between the UDF signature and the implementation function is straightforward.
+For example, the UDF of `INT MyADD(INT, INT)` will correspond to `IntVal 
AddUdf(FunctionContext* context, const IntVal& arg1, const IntVal& arg2)`.
+
+1. `AddUdf` can be any name, as long as it is specified when creating the UDF.
+2. The first parameter in the implementation function is always `FunctionContext*`. The implementer can obtain some query-related content through this structure and allocate some memory for use. The specific interfaces are defined in `udf/udf.h`.
+3. In the implementation function, the parameters starting from the second one must correspond to the UDF parameters one by one; for example, `IntVal` corresponds to the `INT` type. All types in this part must be passed as `const` references.
+4. The return type must correspond to the return type of the UDF.
+
+#### variable parameter
+
+For variable parameters, you can refer to the following example: the UDF `String md5sum(String, ...)` corresponds to
+the implementation function `StringVal md5sumUdf(FunctionContext* ctx, int num_args, const StringVal* args)`.
+
+1. `md5sumUdf` can also be named arbitrarily; just specify it when creating the UDF.
+2. The first parameter is the same as for non-variable-parameter functions: a `FunctionContext*` is passed in.
+3. The variable-argument part consists of two parts: first an integer indicating how many arguments follow, and then an array containing those arguments.
+
+#### Type correspondence
+
+|UDF Type|Argument Type|
+|----|---------|
+|TinyInt|TinyIntVal|
+|SmallInt|SmallIntVal|
+|Int|IntVal|
+|BigInt|BigIntVal|
+|LargeInt|LargeIntVal|
+|Float|FloatVal|
+|Double|DoubleVal|
+|Date|DateTimeVal|
+|Datetime|DateTimeVal|
+|Char|StringVal|
+|Varchar|StringVal|
+|Decimal|DecimalVal|
+
+
+## Compile UDF function
+
+Since the UDF implementation relies on Doris's UDF framework, the first step in compiling UDF functions is to compile Doris, that is, the UDF framework.
+
+After the compilation is completed, the static library file of the UDF framework will be generated. Then add the UDF framework as a dependency and compile the UDF.
+
+### Compile Doris
+
+Running `sh build.sh` in the Doris root directory will generate the UDF framework's headers and static library (`headers|libs`) under `output/udf/`:
+
+```
+├── output
+│   └── udf
+│       ├── include
+│       │   ├── uda_test_harness.h
+│       │   └── udf.h
+│       └── lib
+│           └── libDorisUdf.a
+
+```
+
+### Writing UDF compilation files
+
+1. Prepare thirdparty
+
+    The thirdparty folder is mainly used to store the third-party libraries that the user's UDF functions depend on, including header files and static libraries. It must contain the two files from the Doris UDF framework it depends on: `udf.h` and `libDorisUdf.a`.
+
+    Taking udf_sample as an example, the source code is stored in the user's own `udf_samples` directory. Create a thirdparty folder in the same directory to store the static library. The directory structure is as follows:
+
+    ```
+    ├── thirdparty
+    │   ├── include
+    │   │   └── udf.h
+    │   └── lib
+    │       └── libDorisUdf.a
+    └── udf_samples
+
+    ```
+
+    `udf.h` is the UDF framework header file. Its path is `doris/output/udf/include/udf.h`. Users need to copy this header file from the Doris compilation output to the include folder of their `thirdparty` directory.
+
+    `libDorisUdf.a` is the static library of the UDF framework. After Doris is compiled, the file is located at `doris/output/udf/lib/libDorisUdf.a`. Users need to copy this file to the lib folder of their `thirdparty` directory.
+
+    *Note: The static library of the UDF framework is generated only after Doris has been compiled.*
+
+2. Prepare the CMakeLists.txt for compiling the UDF
+
+    CMakeLists.txt declares how the UDF functions are compiled. It is stored in the source code folder, at the same level as the user code. Taking udf_samples as an example, the directory structure is as follows:
+
+    ```
+    ├── thirdparty
+    └── udf_samples
+      ├── CMakeLists.txt
+      ├── uda_sample.cpp
+      ├── uda_sample.h
+      ├── udf_sample.cpp
+      └── udf_sample.h
+    ```
+
+    + It needs to explicitly declare a reference to `libDorisUdf.a`
+    + It needs to declare the location of the `udf.h` header file
+
+
+Take udf_sample as an example
+
+```
+# Include udf
+include_directories(../thirdparty/include)
+
+# Set all libraries
+add_library(udf STATIC IMPORTED)
+set_target_properties(udf PROPERTIES IMPORTED_LOCATION 
../thirdparty/lib/libDorisUdf.a)
+
+# where to put generated libraries
+set(LIBRARY_OUTPUT_PATH "src/udf_samples")
+
+# where to put generated binaries
+set(EXECUTABLE_OUTPUT_PATH "src/udf_samples")
+
+add_library(udfsample SHARED udf_sample.cpp)
+  target_link_libraries(udfsample
+  udf
+  -static-libstdc++
+  -static-libgcc
+)
+
+add_library(udasample SHARED uda_sample.cpp)
+  target_link_libraries(udasample
+  udf
+  -static-libstdc++
+  -static-libgcc
+)
+```
+
+If the user's UDF function also depends on other third-party libraries, you need to declare the corresponding include and lib paths and add the dependencies in `add_library`.
+
+The complete directory structure after all files are prepared is as follows:
+
+```
+    ├── thirdparty
+    │   ├── include
+    │   │   └── udf.h
+    │   └── lib
+    │       └── libDorisUdf.a
+    └── udf_samples
+      ├── CMakeLists.txt
+      ├── uda_sample.cpp
+      ├── uda_sample.h
+      ├── udf_sample.cpp
+      └── udf_sample.h
+```
+
+Once the above files are prepared, the UDF can be compiled directly.
+
+### Execute compilation
+
+Create a build folder under the udf_samples folder to store the compilation 
output.
+
+Run `cmake ../` in the build folder to generate a Makefile, then run `make` to generate the corresponding dynamic libraries.
+
+```
+├── thirdparty
+├── udf_samples
+  └── build
+```
+
+### Compilation result
+
+After compilation completes, the UDF dynamic link libraries are generated under `build/src/`. Taking udf_samples as an example, the directory structure is as follows:
+
+```
+├── thirdparty
+├── udf_samples
+  └── build
+    └── src
+      └── udf_samples
+        ├── libudasample.so
+        └── libudfsample.so
+
+```
+
+## Create UDF function
+
+After following the above steps, you will have the UDF dynamic library (that is, the `.so` file in the compilation result). You need to put this dynamic library in a location that can be accessed via the HTTP protocol.
+
+Then log in to the Doris system and create a UDF function in the mysql-client with the `CREATE FUNCTION` syntax. You need ADMIN privileges to complete this operation. After that, the UDF exists in the Doris system.
+
+```
+CREATE [AGGREGATE] FUNCTION
+name ([argtype][,...])
+[RETURNS] rettype
+PROPERTIES (["key"="value"][,...])
+```
+Description:
+
+1. "Symbol" in PROPERTIES means that the symbol corresponding to the entry 
function is executed. This parameter must be set. You can get the corresponding 
symbol through the `nm` command, for example, 
`_ZN9doris_udf6AddUdfEPNS_15FunctionContextERKNS_6IntValES4_` obtained by `nm 
libudfsample.so | grep AddUdf` is the corresponding symbol.
+2. The object_file in PROPERTIES indicates where it can be downloaded to the 
corresponding dynamic library. This parameter must be set.
+3. name: A function belongs to a certain DB, and the name is in the form of 
`dbName`.`funcName`. When `dbName` is not explicitly specified, the db where 
the current session is located is used as `dbName`.
+
+For more detailed information, please refer to `CREATE FUNCTION`.
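+
+A hedged example of creating the sample function above (the function name and the dynamic library URL are placeholders; the symbol is the one shown for `AddUdf`):
+
+```sql
+CREATE FUNCTION my_add(INT, INT) RETURNS INT PROPERTIES (
+    "symbol" = "_ZN9doris_udf6AddUdfEPNS_15FunctionContextERKNS_6IntValES4_",
+    "object_file" = "http://host:port/libudfsample.so"
+);
+```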
+
+## Use UDF
+
+Users must have the `SELECT` privilege on the corresponding database to use UDF/UDAF.
+
+UDFs are used in the same way as ordinary functions. The only difference is that built-in functions have a global scope, while a UDF's scope is limited to its DB. When the session is connected to that database, using the UDF name directly will find the UDF in the current DB. Otherwise, the user needs to explicitly specify the UDF's database name, such as `dbName`.`funcName`.
+
+In the current version, the vectorized engine needs to be turned off to use native UDFs:
+```
+set enable_vectorized_engine = false;
+```
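+
+For example, after creating the function above, a query might look like this (the database, table, and columns here are hypothetical):
+
+```sql
+SELECT my_add(k1, k2) FROM example_db.my_table;
+-- or, from another database, qualify the UDF with its db name:
+SELECT example_db.my_add(k1, k2) FROM example_db.my_table;
+```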
+
+
+## Delete UDF
+
+When you no longer need a UDF, you can delete it with the `DROP FUNCTION` command; refer to `DROP FUNCTION` for details.
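+
+A hedged example, dropping the function created above (the name and signature are the hypothetical ones used in this document):
+
+```sql
+DROP FUNCTION my_add(INT, INT);
+```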
diff --git a/docs/en/docs/lakehouse/multi-catalog/faq-multi-catalog.md 
b/docs/en/docs/lakehouse/multi-catalog/faq-multi-catalog.md
new file mode 100644
index 00000000000..1da05cb5610
--- /dev/null
+++ b/docs/en/docs/lakehouse/multi-catalog/faq-multi-catalog.md
@@ -0,0 +1,235 @@
+---
+{
+    "title": "FAQ About Multi-catalog",
+    "language": "en"
+}
+---
+
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+# FAQ
+
+1. What to do with errors such as `failed to get schema` and `Storage schema reading not supported` when accessing Iceberg tables via Hive Metastore?
+
+   To fix this, place the Iceberg runtime jar in the `lib/` directory of Hive.
+
+   And configure as follows in  `hive-site.xml` :
+
+   ```
+   
metastore.storage.schema.reader.impl=org.apache.hadoop.hive.metastore.SerDeStorageSchemaReader
+   ```
+
+   After configuring, please restart Hive Metastore.
+
+2. What to do with the `GSS initiate failed` error when connecting to Hive 
Metastore with Kerberos authentication?
+
+   This is usually caused by incorrect Kerberos authentication information. You can troubleshoot with the following steps:
+
+   1. In versions before  1.2.1, the libhdfs3 library that Doris depends on 
does not enable gsasl. Please update to a version later than 1.2.2.
+   2. Confirm that the correct keytab and principal are set for each 
component, and confirm that the keytab file exists on all FE and BE nodes.
+
+       1. `hadoop.kerberos.keytab`/`hadoop.kerberos.principal`: for Hadoop HDFS
+       2. `hive.metastore.kerberos.principal`: for hive metastore.
+
+   3. Try to replace the IP in the principal with a domain name (do not use 
the default `_HOST` placeholder)
+   4. Confirm that the `/etc/krb5.conf` file exists on all FE and BE nodes.
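+
+   A hedged example of a Kerberos-enabled Hive catalog (the metastore address, keytab path, and principals are placeholders; the property names are the ones listed above plus `hadoop.security.authentication`):
+
+   ```sql
+   CREATE CATALOG hive_kerberos PROPERTIES (
+       'type' = 'hms',
+       'hive.metastore.uris' = 'thrift://127.0.0.1:9083',
+       'hadoop.security.authentication' = 'kerberos',
+       'hadoop.kerberos.keytab' = '/etc/doris/hdfs.keytab',
+       'hadoop.kerberos.principal' = 'hdfs/[email protected]',
+       'hive.metastore.kerberos.principal' = 'hive/[email protected]'
+   );
+   ```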
+
+3. What to do with the `java.lang.VerifyError: xxx` error when accessing HDFS 3.x?
+
+   Doris 1.2.1 and older versions rely on Hadoop 2.8. Please update Hadoop to 2.10.2, or update Doris to 1.2.2 or newer.
+
+4. An error is reported when using KMS to access HDFS: 
`java.security.InvalidKeyException: Illegal key size`
+
+    Upgrade the JDK to Java 8 u162 or later, or download and install the JCE Unlimited Strength Jurisdiction Policy Files corresponding to the JDK.
+
+5. When querying a table in ORC format, FE reports an error `Could not obtain 
block` or `Caused by: java.lang.NoSuchFieldError: types`
+
+    For ORC files, FE accesses HDFS by default to obtain file information and split the files. In some cases, FE may not be able to access HDFS. This can be solved by adding the following parameter:
+
+    `"hive.exec.orc.split.strategy" = "BI"`
+
+    Other options: HYBRID (default), ETL.
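+
+    A hedged example of setting this when creating the catalog (the catalog name and metastore address are placeholders, assuming the property is passed as a catalog property):
+
+    ```sql
+    CREATE CATALOG hive PROPERTIES (
+        'type' = 'hms',
+        'hive.metastore.uris' = 'thrift://127.0.0.1:9083',
+        'hive.exec.orc.split.strategy' = 'BI'
+    );
+    ```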
+
+6. An error is reported when connecting to SQLServer through JDBC Catalog: 
`unable to find valid certification path to requested target`
+
+    Please add `trustServerCertificate=true` option in `jdbc_url`.
+
+7. When connecting to a MySQL database through the JDBC Catalog, Chinese characters are garbled, or queries with Chinese-character conditions return incorrect results
+
+    Please add `useUnicode=true&characterEncoding=utf-8` to `jdbc_url`.
+
+    > Note: After version 1.2.3, these parameters will be automatically added 
when using JDBC Catalog to connect to the MySQL database.
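+
+    A hedged example of a MySQL JDBC catalog with these parameters (the host, port, database, credentials, and driver jar are placeholders):
+
+    ```sql
+    CREATE CATALOG jdbc_mysql PROPERTIES (
+        'type' = 'jdbc',
+        'user' = 'root',
+        'password' = '123456',
+        'jdbc_url' = 'jdbc:mysql://127.0.0.1:3306/demo?useUnicode=true&characterEncoding=utf-8',
+        'driver_url' = 'mysql-connector-java-8.0.25.jar',
+        'driver_class' = 'com.mysql.cj.jdbc.Driver'
+    );
+    ```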
+
+8. An error is reported when connecting to the MySQL database through the JDBC 
Catalog: `Establishing SSL connection without server's identity verification is 
not recommended`
+
+    Please add `useSSL=true` in `jdbc_url`
+
+9. An error is reported when connecting Hive Catalog: `Caused by: 
java.lang.NullPointerException`
+
+    If there is a stack trace similar to the following in fe.log:
+
+    ```
+    Caused by: java.lang.NullPointerException
+        at 
org.apache.hadoop.hive.ql.security.authorization.plugin.AuthorizationMetaStoreFilterHook.getFilteredObjects(AuthorizationMetaStoreFilterHook.java:78)
 ~[hive-exec-3.1.3-core.jar:3.1.3]
+        at 
org.apache.hadoop.hive.ql.security.authorization.plugin.AuthorizationMetaStoreFilterHook.filterDatabases(AuthorizationMetaStoreFilterHook.java:55)
 ~[hive-exec-3.1.3-core.jar:3.1.3]
+        at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getAllDatabases(HiveMetaStoreClient.java:1548)
 ~[doris-fe.jar:3.1.3]
+        at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getAllDatabases(HiveMetaStoreClient.java:1542)
 ~[doris-fe.jar:3.1.3]
+        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
~[?:1.8.0_181]
+    ```
+
+    Try adding `"metastore.filter.hook" = 
"org.apache.hadoop.hive.metastore.DefaultMetaStoreFilterHookImpl"` in `create 
catalog` statement.
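+
+    A hedged example (the metastore address is a placeholder):
+
+    ```sql
+    CREATE CATALOG hive PROPERTIES (
+        'type' = 'hms',
+        'hive.metastore.uris' = 'thrift://127.0.0.1:9083',
+        'metastore.filter.hook' = 'org.apache.hadoop.hive.metastore.DefaultMetaStoreFilterHookImpl'
+    );
+    ```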
+
+10. An error is reported when connecting to the Hive database through the Hive 
Catalog: `RemoteException: SIMPLE authentication is not enabled. Available: 
[TOKEN, KERBEROS]`
+
+    If both `show databases` and `show tables` work, and the above error occurs only when querying, perform the following two operations:
+    - Place core-site.xml and hdfs-site.xml in the fe/conf and be/conf directories
+    - Run kinit for Kerberos on the BE nodes, restart the BE, and then run the query again
+
+11. If `show tables` works normally after creating the Hive Catalog, but queries report `java.net.UnknownHostException: xxxxx`
+
+    Add the following property when creating the CATALOG:
+    ```
+    'fs.defaultFS' = 'hdfs://<your_nameservice_or_actually_HDFS_IP_and_port>'
+    ```
+12. The values of the partition fields in a Hudi table can be queried in Hive, but not in Doris.
+
+    Doris and Hive currently query Hudi differently. Doris requires the partition fields to be added to the avsc file of the Hudi table structure. If they are not added, Doris will query `partition_val` as empty (even if `hoodie.datasource.hive_sync.partition_fields=partition_val` is set).
+
+    ```
+    {
+        "type": "record",
+        "name": "record",
+        "fields": [{
+            "name": "partition_val",
+            "type": [
+                "null",
+                "string"
+                ],
+            "doc": "Preset partition field, empty string when not partitioned",
+            "default": null
+            },
+            {
+            "name": "name",
+            "type": "string",
+            "doc": "名称"
+            },
+            {
+            "name": "create_time",
+            "type": "string",
+            "doc": "创建时间"
+            }
+        ]
+    }
+    ```
+
+13. For ORC tables created by Hive 1.x, the underlying ORC file schema may contain system column names such as `_col0`, `_col1`, `_col2`..., which needs to be handled in the catalog configuration. Set `hive.version` to 1.x.x so that the column names of the Hive table are used for the mapping.
+
+    ```sql
+    CREATE CATALOG hive PROPERTIES (
+        'hive.version' = '1.x.x'
+    );
+    ```
+
+14. When using the JDBC Catalog to synchronize MySQL data to Doris, date data may be synchronized incorrectly. Check whether the MySQL driver package matches the MySQL version; for example, MySQL 8 and above requires the driver com.mysql.cj.jdbc.Driver.
+
+    Starting from version 2.0.2, the driver file can be placed in BE's `custom_lib/` directory (if it does not exist, just create it manually) to prevent the file from being lost when the lib directory is replaced during a cluster upgrade.
+
+## HDFS
+
+15. If the error `SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS]` is reported while configuring Kerberos in the catalog:
+
+    Put `core-site.xml` into the `"${DORIS_HOME}/be/conf"` directory.
+
+    If the error `No common protection layer between client and server` is reported while accessing HDFS, check the `hadoop.rpc.protection` property on the client and server and make them consistent.
+
+    ```
+        <?xml version="1.0" encoding="UTF-8"?>
+        <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+        
+        <configuration>
+        
+            <property>
+                <name>hadoop.security.authentication</name>
+                <value>kerberos</value>
+            </property>
+        
+        </configuration>
+    ```
+
+16. Solutions for the error `Unable to obtain password from user` when configuring Kerberos in the catalog:
+    - The principal used must exist in the klist; use `klist -kt your.keytab` to check.
+    - Ensure the catalog configuration is correct, e.g. that `yarn.resourcemanager.principal` is not missing.
+    - If the preceding checks are correct, the JDK installed by yum or another package-management utility on the current system may lack support for some encryption algorithms. It is recommended to install the JDK yourself and set the `JAVA_HOME` environment variable.
+
+17. If the error `GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos Ticket)` is reported while querying a catalog with Kerberos:
+    - Restarting FE and BE solves the problem in most cases.
+    - Before restarting all the nodes, you can add `-Djavax.security.auth.useSubjectCredsOnly=false` to `JAVA_OPTS` in `"${DORIS_HOME}/be/conf/be.conf"`, so that credentials are obtained through the underlying mechanism rather than through the application.
+    - More solutions to common JAAS errors can be found in the [JAAS Troubleshooting](https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/Troubleshooting.html) guide.
+
+18. If an error related to the Hive Metastore is reported while querying the 
catalog: `Invalid method name`.
+
+    Configure the `hive.version`.
+
+    ```sql
+    CREATE CATALOG hive PROPERTIES (
+        'hive.version' = '2.x.x'
+    );
+    ```
+19. Use Hedged Read to mitigate slow HDFS reads.
+
+     In some cases, high load on HDFS may make reading data from HDFS take a long time, slowing down the overall query. For this, the HDFS Client provides the Hedged Read feature:
+     when a read request has not returned within a certain threshold, another read thread is started to read the same data, and whichever returns first is used as the result.
+
+     This feature can be enabled in two ways:
+
+     - Specify in the parameters to create the Catalog:
+
+         ```
+         create catalog regression properties (
+             'type'='hms',
+             'hive.metastore.uris' = 'thrift://172.21.16.47:7004',
+             'dfs.client.hedged.read.threadpool.size' = '128',
+             'dfs.client.hedged.read.threshold.millis' = "500"
+         );
+         ```
+
+         `dfs.client.hedged.read.threadpool.size` indicates the number of 
threads used for Hedged Read, which are shared by one HDFS Client. Usually, for 
an HDFS cluster, BE nodes will share an HDFS Client.
+
+         `dfs.client.hedged.read.threshold.millis` is the read threshold in 
milliseconds. When a read request exceeds this threshold and is not returned, 
Hedged Read will be triggered.
+
+     - Configure parameters in be.conf
+
+         ```
+         enable_hdfs_hedged_read = true
+         hdfs_hedged_read_thread_num = 128
+         hdfs_hedged_read_threshold_time = 500
+         ```
+
+         This method enables Hedged Read globally on the BE nodes (it is disabled by default) and ignores the Hedged Read properties set when creating the Catalog.
+
+     After enabling it, you can see related parameters in Query Profile:
+
+     `TotalHedgedRead`: The number of Hedged Reads initiated.
+
+     `HedgedReadWins`: The number of successful Hedged Reads (requests that were initiated and returned faster than the original request).
+
+     Note that these values are cumulative for a single HDFS Client, not per query; the same HDFS Client is reused by multiple queries.
+
diff --git a/docs/en/docs/lakehouse/multi-catalog/multi-catalog.md 
b/docs/en/docs/lakehouse/multi-catalog/multi-catalog.md
index e79ca063d1a..c47f27234b2 100644
--- a/docs/en/docs/lakehouse/multi-catalog/multi-catalog.md
+++ b/docs/en/docs/lakehouse/multi-catalog/multi-catalog.md
@@ -1,6 +1,6 @@
 ---
 {
-    "title": "Multi-Catalog Overview",
+    "title": "Multi-catalog Overview",
     "language": "en"
 }
 ---
@@ -27,7 +27,7 @@ under the License.
 
 # Overview
 
-Multi-Catalog is designed to make it easier to connect to external data 
catalogs to enhance Doris's data lake analysis and federated data query 
capabilities.
+Multi-catalog is designed to make it easier to connect to external data 
catalogs to enhance Doris's data lake analysis and federated data query 
capabilities.
 
 In older versions of Doris, user data is in a two-tiered structure: database 
and table. Thus, connections to external catalogs could only be done at the 
database or table level. For example, users could create a mapping to a table 
in an external catalog via `create external table`, or to a database via 
`create external database` . If there were large amounts of databases or tables 
in the external catalog, users would need to create mappings to them one by 
one, which could be a heavy workload.
 
diff --git 
a/docs/en/docs/sql-manual/sql-reference/Database-Administration-Statements/enable-feature.md
 
b/docs/en/docs/sql-manual/sql-reference/Database-Administration-Statements/enable-feature.md
new file mode 100644
index 00000000000..21eb7d440b3
--- /dev/null
+++ 
b/docs/en/docs/sql-manual/sql-reference/Database-Administration-Statements/enable-feature.md
@@ -0,0 +1,38 @@
+---
+{
+    "title": "ENABLE FEATURE",
+    "language": "en"
+}
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+## ENABLE-FEATURE
+
+### Description
+
+### Example
+
+### Keywords
+
+    ENABLE, FEATURE
+
+### Best Practice
+
diff --git a/docs/sidebars.json b/docs/sidebars.json
index 15225e319c1..db0caa5e08e 100644
--- a/docs/sidebars.json
+++ b/docs/sidebars.json
@@ -173,7 +173,9 @@
                 "advanced/small-file-mgr",
                 "advanced/cold-hot-separation",
                 "advanced/compute-node",
-                "advanced/lateral-view"
+                "advanced/lateral-view",
+                "advanced/is-being-synced",
+                "advanced/sql-mode"
             ]
         },
         {
@@ -203,7 +205,7 @@
             "items": [
                 {
                     "type": "category",
-                    "label": "Multi Catalog",
+                    "label": "Multi-catalog",
                     "items": [
                         "lakehouse/multi-catalog/multi-catalog",
                         "lakehouse/multi-catalog/hive",
@@ -213,7 +215,8 @@
                         "lakehouse/multi-catalog/dlf",
                         "lakehouse/multi-catalog/max-compute",
                         "lakehouse/multi-catalog/es",
-                        "lakehouse/multi-catalog/jdbc"
+                        "lakehouse/multi-catalog/jdbc",
+                        "lakehouse/multi-catalog/faq-multi-catalog"
                     ]
                 },
                 "lakehouse/file",
@@ -246,7 +249,8 @@
                     "items": [
                         "ecosystem/udf/contribute-udf",
                         "ecosystem/udf/remote-user-defined-function",
-                        "ecosystem/udf/java-user-defined-function"
+                        "ecosystem/udf/java-user-defined-function",
+                        "ecosystem/udf/native-user-defined-function"
                     ]
                 },
                 {
@@ -859,7 +863,8 @@
                                 
"sql-manual/sql-reference/Database-Administration-Statements/RECOVER",
                                 
"sql-manual/sql-reference/Database-Administration-Statements/KILL",
                                 
"sql-manual/sql-reference/Database-Administration-Statements/ADMIN-REBALANCE-DISK",
-                                
"sql-manual/sql-reference/Database-Administration-Statements/ADMIN-CANCEL-REBALANCE-DISK"
+                                
"sql-manual/sql-reference/Database-Administration-Statements/ADMIN-CANCEL-REBALANCE-DISK",
+                                
"sql-manual/sql-reference/Database-Administration-Statements/enable-feature"
                             ]
                         },
                         {
@@ -1169,6 +1174,7 @@
                         "admin-manual/maint-monitor/be-olap-error-code",
                         "admin-manual/maint-monitor/doris-error-code",
                         "admin-manual/maint-monitor/tablet-meta-tool",
+                        "admin-manual/maint-monitor/tablet-restore-tool",
                         "admin-manual/maint-monitor/monitor-alert",
                         "admin-manual/maint-monitor/tablet-local-debug",
                         "admin-manual/maint-monitor/metadata-operation",
diff --git a/docs/zh-CN/docs/admin-manual/maint-monitor/tablet-restore-tool.md 
b/docs/zh-CN/docs/admin-manual/maint-monitor/tablet-restore-tool.md
new file mode 100644
index 00000000000..aba46d8f4f7
--- /dev/null
+++ b/docs/zh-CN/docs/admin-manual/maint-monitor/tablet-restore-tool.md
@@ -0,0 +1,142 @@
+---
+{
+    "title": "Tablet 恢复工具",
+    "language": "zh-CN"
+}
+---
+
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Tablet 恢复工具
+
+## 从 BE 回收站中恢复数据
+
+用户在使用Doris的过程中,可能会发生因为一些误操作或者线上bug,导致一些有效的tablet被删除(包括元数据和数据)。为了防止在这些异常情况出现数据丢失,Doris提供了回收站机制,来保护用户数据。用户删除的tablet数据不会被直接删除,会被放在回收站中存储一段时间,在一段时间之后会有定时清理机制将过期的数据删除。回收站中的数据包括:tablet的data文件(.dat),tablet的索引文件(.idx)和tablet的元数据文件(.hdr)。数据将会存放在如下格式的路径:
+
+```
+/root_path/trash/time_label/tablet_id/schema_hash/
+```
+
+* `root_path`:对应BE节点的某个数据根目录。
+* `trash`:回收站的目录。
+* `time_label`:时间标签,为了回收站中数据目录的唯一性,同时记录数据时间,使用时间标签作为子目录。
+
+当用户发现线上的数据被误删除,需要从回收站中恢复被删除的tablet,需要用到这个tablet数据恢复功能。
+
+BE提供http接口和 `restore_tablet_tool.sh` 脚本实现这个功能,支持单tablet操作(single 
mode)和批量操作模式(batch mode)。
+
+* 在single mode下,支持单个tablet的数据恢复。
+* 在batch mode下,支持批量tablet的数据恢复。
+
+### 操作
+
+#### single mode
+
+1. http请求方式
+
+    BE中提供单个tablet数据恢复的http接口,接口如下:
+    
+    ```
+    curl -X POST "http://be_host:be_webserver_port/api/restore_tablet?tablet_id=11111\&schema_hash=12345"
+    ```
+    
+    成功的结果如下:
+    
+    ```
+    {"status": "Success", "msg": "OK"}
+    ```
+    
+    失败的话,会返回相应的失败原因,一种可能的结果如下:
+    
+    ```
+    {"status": "Failed", "msg": "create link path failed"}
+    ```
+
+2. 脚本方式
+
+    `restore_tablet_tool.sh` 可用来实现单tablet数据恢复的功能。
+    
+    ```
+    sh tools/restore_tablet_tool.sh -b "http://127.0.0.1:8040" -t 12345 -s 11111
+    sh tools/restore_tablet_tool.sh --backend "http://127.0.0.1:8040" --tablet_id 12345 --schema_hash 11111
+    ```
+
+#### batch mode
+
+批量恢复模式用于实现恢复多个tablet数据的功能。
+
+使用的时候需要预先将恢复的tablet id和schema hash按照逗号分隔的格式放在一个文件中,一个tablet一行。
+
+格式如下:
+
+```
+12345,11111
+12346,11111
+12347,11111
+```
+
+然后如下的命令进行恢复(假设文件名为:`tablets.txt`):
+
+```
+sh restore_tablet_tool.sh -b "http://127.0.0.1:8040" -f tablets.txt
+sh restore_tablet_tool.sh --backend "http://127.0.0.1:8040" --file tablets.txt
+```
+
+## 修复缺失或损坏的 Tablet
+
+在某些极特殊情况下,如代码BUG、或人为误操作等,可能导致部分分片的全部副本都丢失。这种情况下,数据已经实质性的丢失。但是在某些场景下,业务依然希望能够在即使有数据丢失的情况下,保证查询正常不报错,降低用户层的感知程度。此时,我们可以通过使用空白Tablet填充丢失副本的功能,来保证查询能够正常执行。
+
+**注:该操作仅用于规避查询因无法找到可查询副本导致报错的问题,无法恢复已经实质性丢失的数据**
+
+1. 查看 Master FE 日志 `fe.log`
+
+    如果出现数据丢失的情况,则日志中会有类似如下日志:
+    
+    ```
+    backend [10001] invalid situation. tablet[20000] has few replica[1], 
replica num setting is [3]
+    ```
+
+    这个日志表示,Tablet 20000 的所有副本已损坏或丢失。
+    
+2. 使用空白副本填补缺失副本
+
+    当确认数据已经无法恢复后,可以通过执行以下命令,生成空白副本。
+    
+    ```
+    ADMIN SET FRONTEND CONFIG ("recover_with_empty_tablet" = "true");
+    ```
+
+    * 注:可以先通过 `ADMIN SHOW FRONTEND CONFIG;` 命令查看当前版本是否支持该参数。
+
+3. 设置完成几分钟后,应该会在 Master FE 日志 `fe.log` 中看到如下日志:
+
+    ```
+    tablet 20000 has only one replica 20001 on backend 10001 and it is lost. 
create an empty replica to recover it.
+    ```
+
+    该日志表示系统已经创建了一个空白 Tablet 用于填补缺失副本。
+    
+4. 通过查询来判断是否已经修复成功。
+
+5. 全部修复成功后,通过以下命令关闭 `recover_with_empty_tablet` 参数:
+
+    ```
+    ADMIN SET FRONTEND CONFIG ("recover_with_empty_tablet" = "false");
+    ```
diff --git a/docs/zh-CN/docs/ecosystem/native-user-defined-function.md 
b/docs/zh-CN/docs/ecosystem/native-user-defined-function.md
new file mode 100644
index 00000000000..f113b78b613
--- /dev/null
+++ b/docs/zh-CN/docs/ecosystem/native-user-defined-function.md
@@ -0,0 +1,278 @@
+---
+{
+    "title": "原生UDF",
+    "language": "zh-CN"
+}
+---
+
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# UDF
+
+<version deprecated="1.2" comment="请使用 JAVA UDF">
+
+UDF 主要适用于,用户需要的分析能力 Doris 并不具备的场景。用户可以自行根据自己的需求,实现自定义的函数,并且通过 UDF 框架注册到 Doris 
中,来扩展 Doris 的能力,并解决用户分析需求。
+
+UDF 能满足的分析需求分为两种:UDF 和 UDAF。本文中的 UDF 指的是二者的统称。
+
+1. UDF: 用户自定义函数,这种函数会对单行进行操作,并且输出单行结果。当用户在查询时使用 UDF ,每行数据最终都会出现在结果集中。典型的 UDF 
比如字符串操作 concat() 等。
+2. UDAF: 用户自定义的聚合函数,这种函数对多行进行操作,并且输出单行结果。当用户在查询时使用 
UDAF,分组后的每组数据最后会计算出一个值并展结果集中。典型的 UDAF 比如集合操作 sum() 等。一般来说 UDAF 都会结合 group by 
一起使用。
+
+这篇文档主要讲述了,如何编写自定义的 UDF 函数,以及如何在 Doris 中使用它。
+
+如果用户使用 UDF 功能并扩展了 Doris 的函数分析,并且希望将自己实现的 UDF 函数贡献回 Doris 社区给其他用户使用,这时候请看文档 
[Contribute UDF](./contribute-udf.md)。
+
+</version>
+
+## 编写 UDF 函数
+
+在使用UDF之前,用户需要先在 Doris 的 UDF 
框架下,编写自己的UDF函数。在`contrib/udf/src/udf_samples/udf_sample.h|cpp`文件中是一个简单的 UDF 
Demo。
+
+编写一个 UDF 函数需要以下几个步骤。
+
+### 编写函数
+
+创建对应的头文件、CPP文件,在CPP文件中实现你需要的逻辑。CPP文件中的实现函数格式与UDF的对应关系。
+
+用户可以把自己的 source code 统一放在一个文件夹下。这里以 udf_sample 为例,目录结构如下:
+
+```
+└── udf_samples
+  ├── uda_sample.cpp
+  ├── uda_sample.h
+  ├── udf_sample.cpp
+  └── udf_sample.h
+```
+
+#### 非可变参数
+
+对于非可变参数的UDF,那么两者之间的对应关系很直接。
+比如`INT MyADD(INT, INT)`的UDF就会对应`IntVal AddUdf(FunctionContext* context, const 
IntVal& arg1, const IntVal& arg2)`。
+
+1. `AddUdf`可以为任意的名字,只要创建UDF的时候指定即可。
+2. 
实现函数中的第一个参数永远是`FunctionContext*`。实现者可以通过这个结构体获得一些查询相关的内容,以及申请一些需要使用的内存。具体使用的接口可以参考`udf/udf.h`中的定义。
+3. 实现函数中从第二个参数开始需要与UDF的参数一一对应,比如`IntVal`对应`INT`类型。这部分的类型都要使用`const`引用。
+4. 返回参数与UDF的参数的类型要相对应。
+
+#### 可变参数
+
+对于可变参数,可以参见以下例子,UDF`String md5sum(String, ...)`对应的
+实现函数是`StringVal md5sumUdf(FunctionContext* ctx, int num_args, const StringVal* 
args)`
+
+1. `md5sumUdf`这个也是可以任意改变的,创建的时候指定即可。
+2. 第一个参数与非可变参数函数一样,传入的是一个`FunctionContext*`。
+3. 可变参数部分由两部分组成,首先会传入一个整数,说明后面还有几个参数。后面传入的是一个可变参数部分的数组。
+
+#### 类型对应关系
+
+|UDF Type|Argument Type|
+|----|---------|
+|TinyInt|TinyIntVal|
+|SmallInt|SmallIntVal|
+|Int|IntVal|
+|BigInt|BigIntVal|
+|LargeInt|LargeIntVal|
+|Float|FloatVal|
+|Double|DoubleVal|
+|Date|DateTimeVal|
+|Datetime|DateTimeVal|
+|Char|StringVal|
+|Varchar|StringVal|
+|Decimal|DecimalVal|
+
+
+## 编译 UDF 函数
+
+由于 UDF 实现中依赖了 Doris 的 UDF 框架 , 所以在编译 UDF 函数的时候首先要对 Doris 进行编译,也就是对 UDF 框架进行编译。
+
+编译完成后会生成,UDF 框架的静态库文件。之后引入 UDF 框架依赖,并编译 UDF 即可。
+
+### 编译Doris
+
+在 Doris 根目录下执行 `sh build.sh` 就会在 `output/udf/` 生成 UDF 框架的静态库文件 `headers|libs`
+
+```
+├── output
+│   └── udf
+│       ├── include
+│       │   ├── uda_test_harness.h
+│       │   └── udf.h
+│       └── lib
+│           └── libDorisUdf.a
+
+```
+
+### 编写 UDF 编译文件
+
+1. 准备 thirdparty
+
+   `thirdparty` 文件夹主要用于存放用户 UDF 函数依赖的第三方库,包括头文件及静态库。其中必须包含依赖的 Doris UDF 框架中 
`udf.h` 和 `libDorisUdf.a` 这两个文件。
+
+   这里以 `udf_sample` 为例, 在 用户自己 `udf_samples` 目录用于存放 source code。在同级目录下再创建一个 
`thirdparty` 文件夹用于存放静态库。目录结构如下:
+
+    ```
+    ├── thirdparty
+    │   ├── include
+    │   │   └── udf.h
+    │   └── lib
+    │       └── libDorisUdf.a
+    └── udf_samples
+
+    ```
+
+   `udf.h` 是 UDF 框架头文件。存放路径为 `doris/output/udf/include/udf.h`。 用户需要将 Doris 
编译产出中的这个头文件拷贝到自己的 `thirdparty` 的 include 文件夹下。
+
+   `libDorisUdf.a`  是 UDF 框架的静态库。Doris 编译完成后该文件存放在 
`doris/output/udf/lib/libDorisUdf.a`。用户需要将该文件拷贝到自己的 `thirdparty` 的 lib 文件夹下。
+
+   *注意:UDF 框架的静态库只有完成 Doris 编译后才会生成。
+
+2. 准备编译 UDF 的 CMakeFiles.txt
+
+   CMakeFiles.txt 用于声明 UDF 函数如何进行编译。存放在源码文件夹下,与用户代码平级。这里以 `udf_samples` 
为例目录结构如下:
+
+    ```
+    ├── thirdparty
+    └── udf_samples
+      ├── CMakeLists.txt
+      ├── uda_sample.cpp
+      ├── uda_sample.h
+      ├── udf_sample.cpp
+      └── udf_sample.h
+    ```
+
+    + 需要显示声明引用 `libDorisUdf.a`
+    + 声明 `udf.h` 头文件位置
+
+
+以 udf_sample 为例    
+
+```
+# Include udf
+include_directories(../thirdparty/include)    
+
+# Set all libraries
+add_library(udf STATIC IMPORTED)
+set_target_properties(udf PROPERTIES IMPORTED_LOCATION 
../thirdparty/lib/libDorisUdf.a)
+
+# where to put generated libraries
+set(LIBRARY_OUTPUT_PATH "src/udf_samples")
+
+# where to put generated binaries
+set(EXECUTABLE_OUTPUT_PATH "src/udf_samples")
+
+add_library(udfsample SHARED udf_sample.cpp)
+  target_link_libraries(udfsample
+  udf
+  -static-libstdc++
+  -static-libgcc
+)
+
+add_library(udasample SHARED uda_sample.cpp)
+  target_link_libraries(udasample
+  udf
+  -static-libstdc++
+  -static-libgcc
+)
+```
+
+如果用户的 UDF 函数还依赖了其他的三方库,则需要声明 include,lib,并在 `add_library` 中增加依赖。
+
+所有文件准备齐后完整的目录结构如下:
+
+```
+    ├── thirdparty
+    │   ├── include
+    │   │   └── udf.h
+    │   └── lib
+    │       └── libDorisUdf.a
+    └── udf_samples
+      ├── CMakeLists.txt
+      ├── uda_sample.cpp
+      ├── uda_sample.h
+      ├── udf_sample.cpp
+      └── udf_sample.h
+```
+
+准备好上述文件就可以直接编译 UDF 了
+
+### 执行编译
+
+在 udf_samples 文件夹下创建一个 build 文件夹,用于存放编译产出。
+
+在 build 文件夹下运行命令 `cmake ../` 生成Makefile,并执行 make 就会生成对应动态库。
+
+```
+├── thirdparty
+├── udf_samples
+  └── build
+```
+
+### 编译结果
+
+编译完成后的 UDF 动态链接库就生成成功了。在 `build/src/` 下,以 udf_samples 为例,目录结构如下:
+
+```
+
+├── thirdparty
+├── udf_samples
+  └── build
+    └── src
+      └── udf_samples
+        ├── libudasample.so
+        └── libudfsample.so
+
+```
+
+## 创建 UDF 函数
+
+通过上述的步骤后,你可以得到 UDF 的动态库(也就是编译结果中的 `.so` 文件)。你需要将这个动态库放到一个能够通过 HTTP 协议访问到的位置。
+
+然后登录 Doris 系统,在 mysql-client 中通过 `CREATE FUNCTION` 语法创建 UDF 
函数。你需要拥有ADMIN权限才能够完成这个操作。这时 Doris 系统内部就会存在刚才创建好的 UDF。
+
+```
+CREATE [AGGREGATE] FUNCTION 
+       name ([argtype][,...])
+       [RETURNS] rettype
+       PROPERTIES (["key"="value"][,...])
+```
+说明:
+
+1. 
PROPERTIES中`symbol`表示的是,执行入口函数的对应symbol,这个参数是必须设定。你可以通过`nm`命令来获得对应的symbol,比如`nm 
libudfsample.so | grep 
AddUdf`获得到的`_ZN9doris_udf6AddUdfEPNS_15FunctionContextERKNS_6IntValES4_`就是对应的symbol。
+2. PROPERTIES中`object_file`表示的是从哪里能够下载到对应的动态库,这个参数是必须设定的。
+3. name: 
一个function是要归属于某个DB的,name的形式为`dbName`.`funcName`。当`dbName`没有明确指定的时候,就是使用当前session所在的db作为`dbName`。
+
+具体使用可以参见 `CREATE FUNCTION` 获取更详细信息。
+
+## 使用 UDF
+
+用户使用 UDF 必须拥有对应数据库的 `SELECT` 权限。
+
+UDF 的使用与普通的函数方式一致,唯一的区别在于,内置函数的作用域是全局的,而 UDF 的作用域是 DB内部。当链接 session 
位于数据内部时,直接使用 UDF 名字会在当前DB内部查找对应的 UDF。否则用户需要显示的指定 UDF 的数据库名字,例如 
`dbName`.`funcName`。
+
+当前版本中,使用原生UDF时还需要将向量化关闭  
+```
+set enable_vectorized_engine = false;
+```
+
+
+## 删除 UDF函数
+
+当你不再需要 UDF 函数时,你可以通过下述命令来删除一个 UDF 函数, 可以参考 `DROP FUNCTION`。
+
diff --git a/docs/zh-CN/docs/lakehouse/multi-catalog/faq-multi-catalog.md 
b/docs/zh-CN/docs/lakehouse/multi-catalog/faq-multi-catalog.md
new file mode 100644
index 00000000000..18cd16e720e
--- /dev/null
+++ b/docs/zh-CN/docs/lakehouse/multi-catalog/faq-multi-catalog.md
@@ -0,0 +1,238 @@
+---
+{
+    "title": "常见问题",
+    "language": "zh-CN"
+}
+---
+
+<!-- 
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+# 常见问题
+
+1. 通过 Hive Metastore 访问 Iceberg 表报错:`failed to get schema` 或 `Storage schema 
reading not supported`
+
+       在 Hive 的 lib/ 目录放上 `iceberg` 运行时有关的 jar 包。
+
+       在 `hive-site.xml` 配置:
+       
+       ```
+       
metastore.storage.schema.reader.impl=org.apache.hadoop.hive.metastore.SerDeStorageSchemaReader
+       ```
+
+       配置完成后需要重启Hive Metastore。
+
+2. 连接 Kerberos 认证的 Hive Metastore 报错:`GSS initiate failed`
+
+    通常是因为 Kerberos 认证信息填写不正确导致的,可以通过以下步骤排查:
+
+    1. 1.2.1 之前的版本中,Doris 依赖的 libhdfs3 库没有开启 gsasl。请更新至 1.2.2 之后的版本。
+    2. 确认对各个组件,设置了正确的 keytab 和 principal,并确认 keytab 文件存在于所有 FE、BE 节点上。
+
+        1. `hadoop.kerberos.keytab`/`hadoop.kerberos.principal`:用于 Hadoop hdfs 
访问,填写 hdfs 对应的值。
+        2. `hive.metastore.kerberos.principal`:用于 hive metastore。
+
+    3. 尝试将 principal 中的 ip 换成域名(不要使用默认的 `_HOST` 占位符)
+    4. 确认 `/etc/krb5.conf` 文件存在于所有 FE、BE 节点上。
+       
+3. 访问 HDFS 3.x 时报错:`java.lang.VerifyError: xxx`
+
+   1.2.1 之前的版本中,Doris 依赖的 Hadoop 版本为 2.8。需更新至 2.10.2。或更新 Doris 至 1.2.2 之后的版本。
+
+4. 使用 KMS 访问 HDFS 时报错:`java.security.InvalidKeyException: Illegal key size`
+
+   升级 JDK 版本到 >= Java 8 u162 的版本。或者下载安装 JDK 相应的 JCE Unlimited Strength 
Jurisdiction Policy Files。
+
+5. 查询 ORC 格式的表,FE 报错 `Could not obtain block` 或 `Caused by: 
java.lang.NoSuchFieldError: types`
+
+   对于 ORC 文件,在默认情况下,FE 会访问 HDFS 获取文件信息,进行文件切分。部分情况下,FE 可能无法访问到 
HDFS。可以通过添加以下参数解决:
+
+   `"hive.exec.orc.split.strategy" = "BI"`
+
+   其他选项:HYBRID(默认),ETL。
+
+6. 通过 JDBC Catalog 连接 SQLServer 报错:`unable to find valid certification path to 
requested target`
+
+   请在 `jdbc_url` 中添加 `trustServerCertificate=true` 选项。
+
+7. 通过 JDBC Catalog 连接 MySQL 数据库,中文字符乱码,或中文字符条件查询不正确
+
+   请在 `jdbc_url` 中添加 `useUnicode=true&characterEncoding=utf-8`
+
+   > 注:1.2.3 版本后,使用 JDBC Catalog 连接 MySQL 数据库,会自动添加这些参数。
+
+8. 通过 JDBC Catalog 连接 MySQL 数据库报错:`Establishing SSL connection without 
server's identity verification is not recommended`
+
+   请在 `jdbc_url` 中添加 `useSSL=true`
+
+9. 连接 Hive Catalog 报错:`Caused by: java.lang.NullPointerException`
+
+    如 fe.log 中有如下堆栈:
+
+    ```
+    Caused by: java.lang.NullPointerException
+        at 
org.apache.hadoop.hive.ql.security.authorization.plugin.AuthorizationMetaStoreFilterHook.getFilteredObjects(AuthorizationMetaStoreFilterHook.java:78)
 ~[hive-exec-3.1.3-core.jar:3.1.3]
+        at 
org.apache.hadoop.hive.ql.security.authorization.plugin.AuthorizationMetaStoreFilterHook.filterDatabases(AuthorizationMetaStoreFilterHook.java:55)
 ~[hive-exec-3.1.3-core.jar:3.1.3]
+        at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getAllDatabases(HiveMetaStoreClient.java:1548)
 ~[doris-fe.jar:3.1.3]
+        at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getAllDatabases(HiveMetaStoreClient.java:1542)
 ~[doris-fe.jar:3.1.3]
+        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
~[?:1.8.0_181]
+    ```
+
+    可以尝试在 `create catalog` 语句中添加 `"metastore.filter.hook" = 
"org.apache.hadoop.hive.metastore.DefaultMetaStoreFilterHookImpl"` 解决。
+
+10. 通过 Hive Catalog 连接 Hive 数据库报错:`RemoteException: SIMPLE authentication is 
not enabled.  Available:[TOKEN, KERBEROS]`
+
+   如果在 `show databases` 和 `show tables` 都是没问题的情况下,查询的时候出现上面的错误,我们需要进行下面两个操作:
+- fe/conf、be/conf 目录下需放置 core-site.xml 和 hdfs-site.xml
+   - BE 节点执行 Kerberos 的 kinit 然后重启 BE ,然后再去执行查询即可.
+
+
+11. 如果创建 Hive Catalog 后能正常`show tables`,但查询时报`java.net.UnknownHostException: 
xxxxx`
+
+    可以在 CATALOG 的 PROPERTIES 中添加
+    ```
+    'fs.defaultFS' = 'hdfs://<your_nameservice_or_actually_HDFS_IP_and_port>'
+    ```
+12. 在hive上可以查到hudi表分区字段的值,但是在doris查不到。
+
+    
doris和hive目前查询hudi的方式不一样,doris需要在hudi表结构的avsc文件里添加上分区字段,如果没加,就会导致doris查询partition_val为空(即使设置了hoodie.datasource.hive_sync.partition_fields=partition_val也不可以)
+    ```
+    {
+        "type": "record",
+        "name": "record",
+        "fields": [{
+            "name": "partition_val",
+            "type": [
+                "null",
+                "string"
+                ],
+            "doc": "Preset partition field, empty string when not partitioned",
+            "default": null
+            },
+            {
+            "name": "name",
+            "type": "string",
+            "doc": "名称"
+            },
+            {
+            "name": "create_time",
+            "type": "string",
+            "doc": "创建时间"
+            }
+        ]
+    }
+    ```
+
+13. Hive 1.x 的 orc 格式的表可能会遇到底层 orc 文件 schema 中列名为 `_col0`,`_col1`,`_col2`... 
这类系统列名,此时需要在 catalog 配置中添加 `hive.version` 为 1.x.x,这样就会使用 hive 表中的列名进行映射。
+
+    ```sql
+    CREATE CATALOG hive PROPERTIES (
+        'hive.version' = '1.x.x'
+    );
+    ```
+
+   从 2.0.2 版本起,可以将这个文件放置在BE的 `custom_lib/` 目录下(如不存在,手动创建即可),以防止升级集群时因为 lib 
目录被替换而导致文件丢失。
+
+## HDFS
+
+14. 使用JDBC 
Catalog将MySQL数据同步到Doris中,日期数据同步错误。需要校验下MySQL的版本是否与MySQL的驱动包是否对应,比如MySQL8以上需要使用驱动com.mysql.cj.jdbc.Driver。
+
+15. 在Catalog中配置Kerberos时,如果报错`SIMPLE authentication is not enabled. 
Available:[TOKEN, 
KERBEROS]`,那么需要将`core-site.xml`文件放到`"${DORIS_HOME}/be/conf"`目录下。
+    
+    如果访问HDFS报错`No common protection layer between client and 
server`,检查客户端和服务端的`hadoop.rpc.protection`属性,使他们保持一致。
+    
+    ```
+        <?xml version="1.0" encoding="UTF-8"?>
+        <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+        
+        <configuration>
+        
+            <property>
+                <name>hadoop.security.authentication</name>
+                <value>kerberos</value>
+            </property>
+            
+        </configuration>
+    ```
+
+16. 在Catalog中配置Kerberos时,报错`Unable to obtain password from user`的解决方法:
+
+    - 用到的principal必须在klist中存在,使用`klist -kt your.keytab`检查。
+    - 检查catalog配置是否正确,比如漏配`yarn.resourcemanager.principal`。
+    - 若上述检查没问题,则当前系统yum或者其他包管理软件安装的JDK版本存在不支持的加密算法,建议自行安装JDK并设置`JAVA_HOME`环境变量。
+
+17. 查询配置了Kerberos的外表,遇到该报错:`GSSException: No valid credentials provided 
(Mechanism level: Failed to find any Kerberos Ticket)`,一般重启FE和BE能够解决该问题。
+
+    - 
重启所有节点前可在`"${DORIS_HOME}/be/conf/be.conf"`中的JAVA_OPTS参数里配置`-Djavax.security.auth.useSubjectCredsOnly=false`,通过底层机制去获取JAAS
 credentials信息,而不是应用程序。
+    - 在[JAAS 
Troubleshooting](https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/Troubleshooting.html)中可获取更多常见JAAS报错的解决方法。
+
+18. 使用Catalog查询表数据时发现与Hive Metastore相关的报错:`Invalid method 
name`,需要设置`hive.version`参数。
+
+    ```sql
+    CREATE CATALOG hive PROPERTIES (
+        'hive.version' = '1.x.x'
+    );
+    ```
+
+19. 使用 Hedged Read 优化 HDFS 读取慢的问题。
+
+    在某些情况下,HDFS 的负载较高可能导致读取某个 HDFS 上的数据副本的时间较长,从而拖慢整体的查询效率。HDFS Client 提供了 
Hedged Read 功能。
+    该功能可以在一个读请求超过一定阈值未返回时,启动另一个读线程读取同一份数据,哪个先返回就是用哪个结果。
+
+    注意:该功能可能会增加 HDFS 集群的负载,请酌情使用。
+
+    可以通过以下两种方式开启这个功能:
+
+    - 在创建 Catalog 的参数中指定:
+
+        ```
+        create catalog regression properties (
+            'type'='hms',
+            'hive.metastore.uris' = 'thrift://172.21.16.47:7004',
+            'dfs.client.hedged.read.threadpool.size' = '128',
+            'dfs.client.hedged.read.threshold.millis' = "500"
+        );
+        ```
+        
+        `dfs.client.hedged.read.threadpool.size` 表示用于 Hedged Read 的线程数,这些线程由一个 
HDFS Client 共享。通常情况下,针对一个 HDFS 集群,BE 节点会共享一个 HDFS Client。
+
+        `dfs.client.hedged.read.threshold.millis` 
是读取阈值,单位毫秒。当一个读请求超过这个阈值未返回时,会触发 Hedged Read。
+
+    - 在 be.conf 中配置参数
+
+        ```
+        enable_hdfs_hedged_read = true
+        hdfs_hedged_read_thread_num = 128
+        hdfs_hedged_read_threshold_time = 500
+        ```
+
+        这种方式会在BE节点全局开启 Hedged Read(默认不开启)。并忽略在创建 Catalog 时设置的 Hedged Read 属性。
+
+    开启后,可以在 Query Profile 中看到相关参数:
+
+    `TotalHedgedRead`: 发起 Hedged Read 的次数。
+
+    `HedgedReadWins`:Hedged Read 成功的次数(发起并且比原请求更快返回的次数)
+     
+    注意,这里的值是单个 HDFS Client 的累计值,而不是单个查询的数值。同一个 HDFS Client 会被多个查询复用。
+
+
+
+
diff --git 
a/docs/zh-CN/docs/sql-manual/sql-functions/window-functions/window-function-avg.md
 
b/docs/zh-CN/docs/sql-manual/sql-functions/window-functions/window-function-avg.md
index 426df16d7b5..48a50f30ed0 100644
--- 
a/docs/zh-CN/docs/sql-manual/sql-functions/window-functions/window-function-avg.md
+++ 
b/docs/zh-CN/docs/sql-manual/sql-functions/window-functions/window-function-avg.md
@@ -50,4 +50,4 @@ from int_t where property in ('odd','even');
 
 ### keywords
 
-    WINDOW,FUNCTION,AVG
+​    WINDOW,FUNCTION,AVG
\ No newline at end of file
diff --git 
a/docs/zh-CN/docs/sql-manual/sql-functions/window-functions/window-function-max.md
 
b/docs/zh-CN/docs/sql-manual/sql-functions/window-functions/window-function-max.md
index 4b912b579a7..83c51efcf25 100644
--- 
a/docs/zh-CN/docs/sql-manual/sql-functions/window-functions/window-function-max.md
+++ 
b/docs/zh-CN/docs/sql-manual/sql-functions/window-functions/window-function-max.md
@@ -46,4 +46,4 @@ from int_t where property in ('prime','square');
 
 ### keywords
 
-    WINDOW,FUNCTION,MAX
+​    WINDOW,FUNCTION,MAX
\ No newline at end of file
diff --git 
a/docs/zh-CN/docs/sql-manual/sql-functions/window-functions/window-function-ntile.md
 
b/docs/zh-CN/docs/sql-manual/sql-functions/window-functions/window-function-ntile.md
index 76172bd950e..9868a0f7cf7 100644
--- 
a/docs/zh-CN/docs/sql-manual/sql-functions/window-functions/window-function-ntile.md
+++ 
b/docs/zh-CN/docs/sql-manual/sql-functions/window-functions/window-function-ntile.md
@@ -40,4 +40,4 @@ select x, y, ntile(2) over(partition by x order by y) as rank 
from int_t;
 
 ### keywords
 
-    WINDOW,FUNCTION,NTILE
+​    WINDOW,FUNCTION,NTILE
\ No newline at end of file
diff --git 
a/docs/zh-CN/docs/sql-manual/sql-reference/Database-Administration-Statements/enable-feature.md
 
b/docs/zh-CN/docs/sql-manual/sql-reference/Database-Administration-Statements/enable-feature.md
new file mode 100644
index 00000000000..d9e0b55e35b
--- /dev/null
+++ 
b/docs/zh-CN/docs/sql-manual/sql-reference/Database-Administration-Statements/enable-feature.md
@@ -0,0 +1,38 @@
+---
+{
+    "title": "ENABLE-FEATURE",
+    "language": "zh-CN"
+}
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+## ENABLE-FEATURE
+
+### Description
+
+### Example
+
+### Keywords
+
+    ENABLE, FEATURE
+
+### Best Practice
+


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
