This is an automated email from the ASF dual-hosted git repository.

morningman pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git


The following commit(s) were added to refs/heads/master by this push:
     new 948498e0478 [fix](audit-log) fix upgrade info of audit log plugin 
(#1092)
948498e0478 is described below

commit 948498e0478fe318d67c852e99329e4f9fc04b22
Author: Mingyu Chen (Rayner) <morning...@163.com>
AuthorDate: Fri Nov 15 21:12:28 2024 +0800

    [fix](audit-log) fix upgrade info of audit log plugin (#1092)
    
    # Versions
    
    - [x] dev
    - [x] 3.0
    - [x] 2.1
    - [x] 2.0
    
    # Languages
    
    - [x] Chinese
    - [x] English
---
 docs/admin-manual/audit-plugin.md                  | 294 ++++++++-------------
 .../current/admin-manual/audit-plugin.md           | 230 +++++-----------
 .../version-2.0/admin-manual/audit-plugin.md       |  11 +-
 .../version-2.1/admin-manual/audit-plugin.md       | 230 +++++-----------
 .../version-3.0/admin-manual/audit-plugin.md       | 230 +++++-----------
 .../version-2.0/admin-manual/audit-plugin.md       |  12 +-
 .../version-2.1/admin-manual/audit-plugin.md       | 276 ++++++++-----------
 .../version-3.0/admin-manual/audit-plugin.md       | 294 ++++++++-------------
 8 files changed, 541 insertions(+), 1036 deletions(-)

diff --git a/docs/admin-manual/audit-plugin.md 
b/docs/admin-manual/audit-plugin.md
index e766379e2f2..90b98464034 100644
--- a/docs/admin-manual/audit-plugin.md
+++ b/docs/admin-manual/audit-plugin.md
@@ -1,6 +1,6 @@
 ---
 {
-    "title": "Using Audit Log Plugin",
+    "title": "Audit Log Plugin",
     "language": "en"
 }
 ---
@@ -24,34 +24,35 @@ specific language governing permissions and limitations
 under the License.
 -->
 
+This plugin periodically imports FE audit logs into a dedicated system table, making it easy for users to view and analyze audit logs through SQL.
 
-Doris's audit log plugin was developed based on FE's plugin framework. Is an 
optional plugin. Users can install or uninstall this plugin at runtime.
+## Using the Audit Log Plugin
 
-This plugin can periodically import the FE audit log into the specified system 
table, so that users can easily view and analyze the audit log through SQL.
+Starting from Doris 2.1, the audit log plugin is integrated directly into the 
Doris kernel as a built-in plugin. Users do not need to install the plugin 
separately.
 
-## Use the audit log plug-in
+After the cluster starts, a system table named `audit_log` will be created 
under the `__internal_schema` database to store audit logs.
 
-Starting from Doris version 2.1, the audit log plug-in is directly integrated 
into the Doris as a built-in plug-in. Users do not need to install additional 
plug-ins.
+:::note
+For versions prior to Doris 2.1, please refer to the 2.0 version documentation.
+:::
 
-After the cluster is started, a system table named `audit_log` will be created 
under the `__internal_schema` database to store audit logs.
+:::warning
+After upgrading to version 2.1, the original audit log plugin will be 
unavailable. Refer to the **Audit Log Migration** section to see how to migrate 
audit log table data.
+:::
 
-> 1. If you upgrade from an old version, you can continue to use the previous 
plug-in. You can also uninstall the previous plug-in and use the new built-in 
plug-in. But note that the built-in plug-in will write the new audit log to a 
new table instead of the original audit log table.
->
-> 2. If it is a version before Doris 2.1, please refer to the following 
**Compilation, Configuration and Deployment** chapters.
+### Enabling the Plugin
 
-### Enable plug-in
-
-The audit log plug-in can be turned on or off at any time through the global 
variable `enable_audit_plugin` (the default is off), such as:
+The audit log plugin can be enabled or disabled at any time using the global 
variable `enable_audit_plugin` (default is disabled), for example:
 
 `set global enable_audit_plugin = true;`
 
-After it is enabled, Doris will write the audit log after it is enabled to the 
`audit_log` table.
+Once enabled, Doris will write audit logs generated after the plugin is enabled to the `audit_log` table.
 
-The audit log plugin can be turned off at any time:
+The audit log plugin can be disabled at any time:
 
 `set global enable_audit_plugin = false;`
 
-After disable, Doris will stop writing to the `audit_log` table. Audit logs 
that have been written will not change.
+After disabling, Doris will stop writing to the `audit_log` table. The 
existing audit logs will not be affected.
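As a quick sketch of the workflow (the columns selected here are taken from the migration statement later on this page; verify the actual schema on your cluster with `DESC __internal_schema.audit_log`):

```sql
-- Enable audit log collection, then inspect the most recent entries.
SET GLOBAL enable_audit_plugin = true;

SELECT `time`, `user`, client_ip, stmt
FROM __internal_schema.audit_log
ORDER BY `time` DESC
LIMIT 10;
```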
 
 ### Audit log table
 
@@ -61,24 +62,24 @@ Starting from version 2.1.8 and 3.0.3, the `audit_log` 
system table will automat
 
 In previous versions, users need to manually add fields to the `audit_log` 
system table through the `ALTER TABLE` command.
 
-### Related configuration
+### Related Configurations
 
-The audit log table is a dynamic partitioned table, partitioned by day, and 
retains the data of the last 30 days by default.
+The audit log table is a dynamic partitioned table, partitioned by day, and 
retains data for the last 30 days by default.
 
-The following 3 global variables can control some writing behaviors of the 
audit log table:
+The following global variables can control some writing behaviors of the audit 
log table:
 
-- `audit_plugin_max_batch_interval_sec`: The maximum write interval for the 
audit log table. Default 60 seconds.
-- `audit_plugin_max_batch_bytes`: The maximum amount of data written in each 
batch of the audit log table. Default 50MB.
-- `audit_plugin_max_sql_length`: The maximum length of statements recorded in 
the audit log table. Default 4096.
-- `audit_plugin_load_timeout`: The default timeout of audit log load job. 
Default 600 seconds.
+- `audit_plugin_max_batch_interval_sec`: The maximum write interval for the 
audit log table. Default is 60 seconds.
+- `audit_plugin_max_batch_bytes`: The maximum amount of data written per batch 
to the audit log table. Default is 50MB.
+- `audit_plugin_max_sql_length`: The maximum length of statements recorded in 
the audit log table. Default is 4096.
+- `audit_plugin_load_timeout`: The default timeout for the audit log import job. Default is 600 seconds.
 
-Can be set via `set global xxx=yyy`.
+These can be set using `set global xxx=yyy`.
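For example, to tighten the flush interval and raise the recorded statement length (the values below are illustrative, not recommendations):

```sql
SET GLOBAL audit_plugin_max_batch_interval_sec = 30;
SET GLOBAL audit_plugin_max_sql_length = 8192;
```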
 
-FE configurations:
+FE Configuration:
 
-- `skip_audit_user_list` (Since 3.0.1)
+- `skip_audit_user_list` (supported since 3.0.1)
 
-    If you do not want certain users' operations to be recorded in the audit 
log, you can modify this configuration.
+    If you do not want the operations of certain users to be recorded in the 
audit log, you can modify this configuration.
 
     ```
     skip_audit_user_list=root
@@ -86,178 +87,93 @@ FE configurations:
     skip_audit_user_list=user1,user2
     ```
 
-## Compilation, Configuration and Deployment
-
-### FE Configuration
-
-The audit log plug-in framework is enabled by default in Doris and is 
controlled by the FE configuration `plugin_enable`
-
-### AuditLoader Configuration
-
-1. Download the Audit Loader plugin
-
-   The Audit Loader plug-in is provided by default in the Doris distribution. After downloading the Doris installation package through [DOWNLOAD](https://doris.apache.org/download), decompress it and enter the directory, you can find the auditloader.zip file in the extensions/audit_loader subdirectory.
-
-2. Unzip the installation package
-
-    ```shell
-    unzip auditloader.zip
+## Audit Log Migration
+
+After upgrading to version 2.1, the original audit log plugin will be 
unavailable. This section explains how to migrate data from the original audit 
log table to the new audit log table.
+
+1. Confirm the field information of the old and new audit log tables.
+
+    By default, the original audit log table is `doris_audit_db__`.`doris_audit_log_tbl__`.
+    
+    The new audit log table is `__internal_schema`.`audit_log`.
+    
+    You can check whether the fields of the two tables match with the `DESC table_name` command. Typically, the fields of the old table should be a subset of those of the new table.
+
+2. Migrate Audit Log Table Data
+
+    You can use the following statement to migrate data from the original 
table to the new table:
+    
+    ```sql
+    INSERT INTO __internal_schema.audit_log (
+        query_id         ,
+        time             ,
+        client_ip        ,
+        user             ,
+        db               ,
+        state            ,
+        error_code       ,
+        error_message    ,
+        query_time       ,
+        scan_bytes       ,
+        scan_rows        ,
+        return_rows      ,
+        stmt_id          ,
+        is_query         ,
+        frontend_ip      ,
+        cpu_time_ms      ,
+        sql_hash         ,
+        sql_digest       ,
+        peak_memory_bytes,
+        stmt
+        )
+        SELECT
+        query_id         ,
+        time             ,
+        client_ip        ,
+        user             ,
+        db               ,
+        state            ,
+        error_code       ,
+        error_message    ,
+        query_time       ,
+        scan_bytes       ,
+        scan_rows        ,
+        return_rows      ,
+        stmt_id          ,
+        is_query         ,
+        frontend_ip      ,
+        cpu_time_ms      ,
+        sql_hash         ,
+        sql_digest       ,
+        peak_memory_bytes,
+        stmt
+        FROM doris_audit_db__.doris_audit_log_tbl__;
     ```
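After the copy, a simple sanity check is to compare the row counts of the two tables (a sketch; run it on your cluster, and expect the counts to match only if no new audit logs arrived during migration):

```sql
SELECT COUNT(*) FROM doris_audit_db__.doris_audit_log_tbl__;
SELECT COUNT(*) FROM __internal_schema.audit_log;
```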
 
-    Unzip and generate the following files:
-
-    * auditloader.jar: plug-in code package.
-    * plugin.properties: plugin properties file.
-    * plugin.conf: plugin configuration file.
-
-You can place this file on an http download server or copy(or unzip) it to the 
specified directory of all FEs. Here we use the latter.  
-The installation of this plugin can be found in 
[INSTALL](../sql-manual/sql-statements/Database-Administration-Statements/INSTALL-PLUGIN.md)
  
-After executing install, the AuditLoader directory will be automatically 
generated.
+3. Remove Original Plugin
 
-3. Modify plugin.conf
-
-   The following configurations are available for modification:
-
-    * frontend_host_port: FE node IP address and HTTP port in the format 
<fe_ip>:<fe_http_port>. The default value is 127.0.0.1:8030.
-    * database: Audit log database name.
-    * audit_log_table: Audit log table name.
-    * slow_log_table: Slow query log table name.
-    * enable_slow_log: Whether to enable the slow query log import function. 
The default value is false. You can set the slow query threshold in the FE 
configuration item. The parameter is qe_slow_log_ms and the default value is 5s.
-    * user: Cluster username. The user must have INSERT permission on the 
corresponding table.
-    * password: Cluster user password.
-
-4. Repackaging the Audit Loader plugin
-
-    ```shell
-    zip -r -q -m auditloader.zip auditloader.jar plugin.properties plugin.conf
-    ```
-
-### Create Audit Table
-
-In Doris, you need to create the library and table of the audit log. The table 
structure is as follows:
-
-If you need to enable the slow query log import function, you need to create 
an additional slow table `doris_slow_log_tbl__`, whose table structure is 
consistent with `doris_audit_log_tbl__`.
-
-Among them, the `dynamic_partition` attribute selects the number of days for 
audit log retention according to your own needs.
-
-```sql
-create database doris_audit_db__;
-
-create table doris_audit_db__.doris_audit_log_tbl__
-(
-    query_id varchar(48) comment "Unique query id",
-    `time` datetime not null comment "Query start time",
-    client_ip varchar(32) comment "Client IP",
-    user varchar(64) comment "User name",
-    db varchar(96) comment "Database of this query",
-    state varchar(8) comment "Query result state. EOF, ERR, OK",
-    error_code int comment "Error code of failing query.",
-    error_message string comment "Error message of failing query.",
-    query_time bigint comment "Query execution time in millisecond",
-    scan_bytes bigint comment "Total scan bytes of this query",
-    scan_rows bigint comment "Total scan rows of this query",
-    return_rows bigint comment "Returned rows of this query",
-    stmt_id int comment "An incremental id of statement",
-    is_query tinyint comment "Is this statemt a query. 1 or 0",
-    frontend_ip varchar(32) comment "Frontend ip of executing this statement",
-    cpu_time_ms bigint comment "Total scan cpu time in millisecond of this 
query",
-    sql_hash varchar(48) comment "Hash value for this query",
-    sql_digest varchar(48) comment "Sql digest of this query, will be empty if 
not a slow query",
-    peak_memory_bytes bigint comment "Peak memory bytes used on all backends 
of this query",
-    stmt string comment "The original statement, trimed if longer than 2G"
-) engine=OLAP
-duplicate key(query_id, `time`, client_ip)
-partition by range(`time`) ()
-distributed by hash(query_id) buckets 1
-properties(
-    "dynamic_partition.time_unit" = "DAY",
-    "dynamic_partition.start" = "-30",
-    "dynamic_partition.end" = "3",
-    "dynamic_partition.prefix" = "p",
-    "dynamic_partition.buckets" = "1",
-    "dynamic_partition.enable" = "true",
-    "replication_num" = "3"
-);
-
-create table doris_audit_db__.doris_slow_log_tbl__
-(
-    query_id varchar(48) comment "Unique query id",
-    `time` datetime not null comment "Query start time",
-    client_ip varchar(32) comment "Client IP",
-    user varchar(64) comment "User name",
-    db varchar(96) comment "Database of this query",
-    state varchar(8) comment "Query result state. EOF, ERR, OK",
-    error_code int comment "Error code of failing query.",
-    error_message string comment "Error message of failing query.",
-    query_time bigint comment "Query execution time in millisecond",
-    scan_bytes bigint comment "Total scan bytes of this query",
-    scan_rows bigint comment "Total scan rows of this query",
-    return_rows bigint comment "Returned rows of this query",
-    stmt_id int comment "An incremental id of statement",
-    is_query tinyint comment "Is this statemt a query. 1 or 0",
-    frontend_ip varchar(32) comment "Frontend ip of executing this statement",
-    cpu_time_ms bigint comment "Total scan cpu time in millisecond of this 
query",
-    sql_hash varchar(48) comment "Hash value for this query",
-    sql_digest varchar(48) comment "Sql digest of a slow query",
-    peak_memory_bytes bigint comment "Peak memory bytes used on all backends 
of this query",
-    stmt string comment "The original statement, trimed if longer than 2G "
-) engine=OLAP
-duplicate key(query_id, `time`, client_ip)
-partition by range(`time`) ()
-distributed by hash(query_id) buckets 1
-properties(
-    "dynamic_partition.time_unit" = "DAY",
-    "dynamic_partition.start" = "-30",
-    "dynamic_partition.end" = "3",
-    "dynamic_partition.prefix" = "p",
-    "dynamic_partition.buckets" = "1",
-    "dynamic_partition.enable" = "true",
-    "replication_num" = "3"
-);
-```
-
->**Notice**
->
-> In the above table structure: stmt string, this can only be used in 0.15 and 
later versions, in previous versions, the field type used varchar
-
-### Deployment
-
-You can place the packaged auditloader.zip on an http server, or copy 
`auditloader.zip` to the same specified directory in all FEs.
-
-### Installation
-
-Install the audit loader plugin:
-
-```sql
-INSTALL PLUGIN FROM [source] [PROPERTIES ("key"="value", ...)]
-```
-
-Detailed command reference: 
[INSTALL-PLUGIN.md](../sql-manual/sql-statements/Database-Administration-Statements/INSTALL-PLUGIN)
-
-After successful installation, you can see the installed plug-ins through 
`SHOW PLUGINS`, and the status is `INSTALLED`.
-
-After completion, the plugin will continuously insert audit logs into this 
table at specified intervals.
+    After migration, you can remove the original plugin by using the 
`UNINSTALL PLUGIN AuditLoader;` command.
 
 ## FAQ
 
-1. There is no data in the audit log table, or no new data is imported after 
running for a period of time.
-
-     You can check by following these steps:
-
-     - Check whether the partition was created normally
-
-         The audit log table is a dynamic partition table, partitioned by day. 
By default, partitions for the next 3 days will be created and partitions for 
the past 30 days will be retained. Only after the partition is created 
correctly can the audit log be imported normally.
-
-         You can use `show dynamic partition tables from __internal_schema` to 
check the scheduling of dynamic partitions and troubleshoot according to the 
cause of the error. Possible reasons for the error include:
+1. No data in the audit log table, or no new data is being ingested after 
running for a period of time
 
-         - The number of BE nodes is less than the required number of 
replicas: the audit log table has 3 replicas by default, so at least 3 BE nodes 
are required. Or modify the number of replicas through the `alter table` 
statement, such as:
+    You can troubleshoot by following these steps:
+    
+    - Check if partitions are created correctly
 
-             `alter table __internal_schema.audit_log set 
("dynamic_partition.replication_num" = "2")`
+        The audit log table is a dynamic partition table partitioned by day. 
By default, it creates partitions for the next 3 days and retains historical 
partitions for 30 days. Data can only be written to the audit log if partitions 
are created correctly.
 
-         - No suitable storage medium: You can view the `storage_medium` 
attribute through `show create table __internal_schema.audit_log`. If BE does 
not have a corresponding storage medium, the partition creation may fail.
+        You can check the scheduling status of dynamic partitions with `show dynamic partition tables from __internal_schema` and troubleshoot based on the reported error. Possible causes include:
 
-         - No suitable resource group: The audit log table defaults to the 
`default` resource group. You can use the `show backends` command to check 
whether the resource has sufficient node resources.
+        - Number of nodes is less than the required replication number: The 
audit log table defaults to 3 replicas, so at least 3 BE nodes are required. 
You can modify the replication number using the `alter table` statement, for 
example:
+        
+            `alter table __internal_schema.audit_log set 
("dynamic_partition.replication_num" = "2")`
+        
+        - No suitable storage medium: You can check the `storage_medium` 
property by using `show create table __internal_schema.audit_log`. If there is 
no corresponding storage medium on the BE, partition creation may fail.
+        
+        - No suitable resource group: The audit log table uses the `default` resource group by default. You can use the `show backends` command to check whether this resource group has sufficient node resources.
 
-     - Search for the word `AuditLoad` in Master FE's `fe.log` to see if there 
are related error logs
+    - Search for `AuditLoad` in the `fe.log` on the Master FE to see if there 
are any related error logs
 
-         The audit log is imported into the table through the internal stream 
load operation. There may be problems with the import process. These problems 
will print error logs in `fe.log`.
+        The audit log is imported into the table through internal stream load 
operations. If there are issues with the import process, error logs will be 
printed in the `fe.log`.
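To sketch that check (the log line below is a mocked-up sample for illustration; real `fe.log` lines differ):

```shell
# Search the Master FE log for audit-load related messages.
printf '2024-11-15 21:12:28 WARN (AuditLoader) stream load failed: timeout\n' > /tmp/fe_sample.log
grep -n 'AuditLoad' /tmp/fe_sample.log
```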
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/audit-plugin.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/audit-plugin.md
index 8f1e109b3dc..1700b35eeb5 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/audit-plugin.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/audit-plugin.md
@@ -24,10 +24,6 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# 审计日志插件
-
-Doris 的审计日志插件是在 FE 的插件框架基础上开发的。是一个可选插件。用户可以在运行时安装或卸载这个插件。
-
 该插件可以将 FE 的审计日志定期的导入到指定的系统表中,以方便用户通过 SQL 对审计日志进行查看和分析。
 
 ## 使用审计日志插件
@@ -36,9 +32,13 @@ Doris 的审计日志插件是在 FE 的插件框架基础上开发的。是一
 
 集群启动后,会在 `__internal_schema` 库下创建名为 `audit_log` 的系统表,用于存储审计日志。
 
-> 1. 
如果是从老版本升级上来的用户,可以继续使用之前的方式。也可以卸载之前的插件,使用内置插件。但注意内置插件会将新的审计日志写入到新的表中,而不是原有的审计日志表中。
-> 
-> 2. 如果是 Doris 2.1 之前的版本,请参阅之后的 **编译、配置和部署** 章节。
+:::note
+如果是 Doris 2.1 之前的版本,请参阅 2.0 版本文档。
+:::
+
+:::warning
+升级到 2.1 版本后,原有的审计日志插件将不可用。请参阅 **审计日志迁移** 章节查看如何迁移审计日志表数据。
+:::
 
 ### 开启插件
 
@@ -66,7 +66,7 @@ Doris 的审计日志插件是在 FE 的插件框架基础上开发的。是一
 
 审计日志表是一张动态分区表,按天分区,默认保留最近 30 天的数据。
 
-以下 3 个全局变量可以控制审计日志表的一些写入行为:
+以下全局变量可以控制审计日志表的一些写入行为:
 
 - `audit_plugin_max_batch_interval_sec`:审计日志表的最大写入间隔。默认 60 秒。
 - `audit_plugin_max_batch_bytes`:审计日志表每批次最大写入数据量。默认 50MB。
@@ -87,159 +87,72 @@ FE 配置项:
     skip_audit_user_list=user1,user2
     ```
 
-## 编译、配置和部署
-
-Doris 2.1 版本之前的用户,请参阅如下方式使用审计日志插件。
-
-### FE 配置
-
-审计日志插件框架在 Doris 中是默认开启的的,由 FE 的配置 `plugin_enable` 控制
-
-### AuditLoader 配置
-
-1. 下载 Audit Loader 插件
-
-    Audit Loader 插件在 Doris 的发行版中默认提供,通过 
[DOWNLOAD](https://doris.apache.org/zh-CN/download) 下载 Doris 安装包解压并进入目录后即可在 
extensions/audit_loader 子目录下找到 auditloader.zip 文件。
-
-2. 解压安装包
-
-    ```shell
-    unzip auditloader.zip
+## 审计日志迁移
+
+升级到 2.1 版本后,原有的审计日志插件将不可用。本小节介绍如何将原有审计日志表中的数据迁移到新的审计日志表中。
+
+1. 确认新旧审计日志表的字段信息
+
+    原有审计日志表默认情况下应为:`doris_audit_db__`.`doris_audit_log_tbl__`。
+
+    新的审计日志表为:`__internal_schema`.`audit_log`
+
+    可以通过 `DESC table_name` 命令查看两种表的字段信息是否匹配。通常情况下,旧表的字段应为新表的子集。
+
+2. 迁移审计日志表数据。
+
+    可以使用如下语句将原表中数据迁移到新表中:
+
+    ```sql
+    INSERT INTO __internal_schema.audit_log (
+    query_id         ,
+    time             ,
+    client_ip        ,
+    user             ,
+    db               ,
+    state            ,
+    error_code       ,
+    error_message    ,
+    query_time       ,
+    scan_bytes       ,
+    scan_rows        ,
+    return_rows      ,
+    stmt_id          ,
+    is_query         ,
+    frontend_ip      ,
+    cpu_time_ms      ,
+    sql_hash         ,
+    sql_digest       ,
+    peak_memory_bytes,
+    stmt
+    )
+    SELECT
+    query_id         ,
+    time             ,
+    client_ip        ,
+    user             ,
+    db               ,
+    state            ,
+    error_code       ,
+    error_message    ,
+    query_time       ,
+    scan_bytes       ,
+    scan_rows        ,
+    return_rows      ,
+    stmt_id          ,
+    is_query         ,
+    frontend_ip      ,
+    cpu_time_ms      ,
+    sql_hash         ,
+    sql_digest       ,
+    peak_memory_bytes,
+    stmt
+    FROM doris_audit_db__.doris_audit_log_tbl__;
     ```
 
-    解压生成以下文件:
-
-    * auditloader.jar:插件代码包。
-    * plugin.properties:插件属性文件。
-    * plugin.conf:插件配置文件。
-
-您可以将这个文件放置在一个 http 服务器上,或者拷贝`auditloader.zip`(或者解压`auditloader.zip`) 到所有 FE 
的指定目录下。这里我们使用后者。  
-该插件的安装可以参阅 
[INSTALL](../sql-manual/sql-statements/Database-Administration-Statements/INSTALL-PLUGIN.md)
  
-执行 install 后会自动生成 AuditLoader 目录
-
-3. 修改 plugin.conf 
-
-    以下配置可供修改:
+3. 删除原有插件
 
-    * frontend_host_port:FE 节点 IP 地址和 HTTP 端口,格式为 <fe_ip>:<fe_http_port>。默认值为 
127.0.0.1:8030。
-    * database:审计日志库名。
-    * audit_log_table:审计日志表名。
-    * slow_log_table:慢查询日志表名。
-    * enable_slow_log:是否开启慢查询日志导入功能。默认值为 false。可以在 FE 配置项中配置慢查询的阈值,参数为 
qe_slow_log_ms,默认 5s。
-    * user:集群用户名。该用户必须具有对应表的 INSERT 权限。
-    * password:集群用户密码。
-
-4. 重新打包 Audit Loader 插件
-
-    ```shell
-    zip -r -q -m auditloader.zip auditloader.jar plugin.properties plugin.conf
-    ```
-
-### 创建库表
-
-在 Doris 中,需要创建审计日志的库和表,表结构如下:
-
-若需开启慢查询日志导入功能,还需要额外创建慢表 `doris_slow_log_tbl__`,其表结构与 `doris_audit_log_tbl__` 
一致。
-
-其中 `dynamic_partition` 属性根据自己的需要,选择审计日志保留的天数。
-
-```sql
-create database doris_audit_db__;
-
-create table doris_audit_db__.doris_audit_log_tbl__
-(
-    query_id varchar(48) comment "Unique query id",
-    `time` datetime not null comment "Query start time",
-    client_ip varchar(32) comment "Client IP",
-    user varchar(64) comment "User name",
-    db varchar(96) comment "Database of this query",
-    state varchar(8) comment "Query result state. EOF, ERR, OK",
-    error_code int comment "Error code of failing query.",
-    error_message string comment "Error message of failing query.",
-    query_time bigint comment "Query execution time in millisecond",
-    scan_bytes bigint comment "Total scan bytes of this query",
-    scan_rows bigint comment "Total scan rows of this query",
-    return_rows bigint comment "Returned rows of this query",
-    stmt_id int comment "An incremental id of statement",
-    is_query tinyint comment "Is this statemt a query. 1 or 0",
-    frontend_ip varchar(32) comment "Frontend ip of executing this statement",
-    cpu_time_ms bigint comment "Total scan cpu time in millisecond of this 
query",
-    sql_hash varchar(48) comment "Hash value for this query",
-    sql_digest varchar(48) comment "Sql digest of this query, will be empty if 
not a slow query",
-    peak_memory_bytes bigint comment "Peak memory bytes used on all backends 
of this query",
-    stmt string comment "The original statement, trimed if longer than 2G"
-) engine=OLAP
-duplicate key(query_id, `time`, client_ip)
-partition by range(`time`) ()
-distributed by hash(query_id) buckets 1
-properties(
-    "dynamic_partition.time_unit" = "DAY",
-    "dynamic_partition.start" = "-30",
-    "dynamic_partition.end" = "3",
-    "dynamic_partition.prefix" = "p",
-    "dynamic_partition.buckets" = "1",
-    "dynamic_partition.enable" = "true",
-    "replication_num" = "3"
-);
-
-create table doris_audit_db__.doris_slow_log_tbl__
-(
-    query_id varchar(48) comment "Unique query id",
-    `time` datetime not null comment "Query start time",
-    client_ip varchar(32) comment "Client IP",
-    user varchar(64) comment "User name",
-    db varchar(96) comment "Database of this query",
-    state varchar(8) comment "Query result state. EOF, ERR, OK",
-    error_code int comment "Error code of failing query.",
-    error_message string comment "Error message of failing query.",
-    query_time bigint comment "Query execution time in millisecond",
-    scan_bytes bigint comment "Total scan bytes of this query",
-    scan_rows bigint comment "Total scan rows of this query",
-    return_rows bigint comment "Returned rows of this query",
-    stmt_id int comment "An incremental id of statement",
-    is_query tinyint comment "Is this statemt a query. 1 or 0",
-    frontend_ip varchar(32) comment "Frontend ip of executing this statement",
-    cpu_time_ms bigint comment "Total scan cpu time in millisecond of this 
query",
-    sql_hash varchar(48) comment "Hash value for this query",
-    sql_digest varchar(48) comment "Sql digest of a slow query",
-    peak_memory_bytes bigint comment "Peak memory bytes used on all backends 
of this query",
-    stmt string comment "The original statement, trimed if longer than 2G "
-) engine=OLAP
-duplicate key(query_id, `time`, client_ip)
-partition by range(`time`) ()
-distributed by hash(query_id) buckets 1
-properties(
-    "dynamic_partition.time_unit" = "DAY",
-    "dynamic_partition.start" = "-30",
-    "dynamic_partition.end" = "3",
-    "dynamic_partition.prefix" = "p",
-    "dynamic_partition.buckets" = "1",
-    "dynamic_partition.enable" = "true",
-    "replication_num" = "3"
-);
-```
-
->**注意**
->
-> 上面表结构中:stmt string,这个只能在 0.15 及之后版本中使用,之前版本,字段类型使用 varchar
-
-### 部署
-
-您可以将打包好的 auditloader.zip 放置在一个 http 服务器上,或者拷贝 `auditloader.zip` 到所有 FE 
的相同指定目录下。
-
-### 安装
-
-通过以下语句安装 Audit Loader 插件:
-
-```sql
-INSTALL PLUGIN FROM [source] [PROPERTIES ("key"="value", ...)]
-```
-
-详细命令参考:[INSTALL-PLUGIN](../sql-manual/sql-statements/Database-Administration-Statements/INSTALL-PLUGIN)
-
-安装成功后,可以通过 `SHOW PLUGINS` 看到已经安装的插件,并且状态为 `INSTALLED`。
-
-完成后,插件会不断的以指定的时间间隔将审计日志插入到这个表中。
+    迁移后,可以通过 `UNINSTALL PLUGIN AuditLoader;` 命令删除原有插件。
 
 ## FAQ
 
@@ -264,3 +177,4 @@ INSTALL PLUGIN FROM [source] [PROPERTIES ("key"="value", 
...)]
     - 在 Master FE 的 `fe.log` 中搜索 `AuditLoad` 字样,查看是否有相关错误日志
 
         审计日志是通过内部的 stream load 操作导入到表中的,有可能是导入流程出现了问题,这些问题会在 `fe.log` 中打印错误日志。
+
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/admin-manual/audit-plugin.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/admin-manual/audit-plugin.md
index ce0e2d7f70c..ec5f60951bf 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/admin-manual/audit-plugin.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/admin-manual/audit-plugin.md
@@ -1,6 +1,6 @@
 ---
 {
-    "title": "审计日志",
+    "title": "审计日志插件",
     "language": "zh-CN"
 }
 ---
@@ -24,8 +24,6 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# 审计日志插件
-
 Doris 的审计日志插件是在 FE 的插件框架基础上开发的。是一个可选插件。用户可以在运行时安装或卸载这个插件。
 
 该插件可以将 FE 的审计日志定期的导入到指定 Doris 集群中,以方便用户通过 SQL 对审计日志进行查看和分析。
@@ -171,9 +169,7 @@ properties(
 ```
 
 :::caution
-**注意**
-
-- 上面表结构中:stmt string,这个只能在 0.15 及之后版本中使用,之前版本,字段类型使用 varchar
+上面表结构中:stmt string,这个只能在 0.15 及之后版本中使用,之前版本,字段类型使用 varchar
 :::
 
 ### 部署
@@ -188,8 +184,9 @@ properties(
 INSTALL PLUGIN FROM [source] [PROPERTIES ("key"="value", ...)]
 ```
 
-详细命令参考:[INSTALL-PLUGIN.md](../sql-manual/sql-reference/Database-Administration-Statements/INSTALL-PLUGIN)
+详细命令参考:[INSTALL](../sql-manual/sql-reference/Database-Administration-Statements/INSTALL-PLUGIN.md)
 
 安装成功后,可以通过 `SHOW PLUGINS` 看到已经安装的插件,并且状态为 `INSTALLED`。
 
 完成后,插件会不断的以指定的时间间隔将审计日志插入到这个表中。
+
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/admin-manual/audit-plugin.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/admin-manual/audit-plugin.md
index 8f1e109b3dc..9e8e2d8c907 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/admin-manual/audit-plugin.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/admin-manual/audit-plugin.md
@@ -24,10 +24,6 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# 审计日志插件
-
-Doris 的审计日志插件是在 FE 的插件框架基础上开发的。是一个可选插件。用户可以在运行时安装或卸载这个插件。
-
 该插件可以将 FE 的审计日志定期的导入到指定的系统表中,以方便用户通过 SQL 对审计日志进行查看和分析。
 
 ## 使用审计日志插件
@@ -36,9 +32,13 @@ Doris 的审计日志插件是在 FE 的插件框架基础上开发的。是一
 
 集群启动后,会在 `__internal_schema` 库下创建名为 `audit_log` 的系统表,用于存储审计日志。
 
-> 1. 
如果是从老版本升级上来的用户,可以继续使用之前的方式。也可以卸载之前的插件,使用内置插件。但注意内置插件会将新的审计日志写入到新的表中,而不是原有的审计日志表中。
-> 
-> 2. 如果是 Doris 2.1 之前的版本,请参阅之后的 **编译、配置和部署** 章节。
+:::note
+如果是 Doris 2.1 之前的版本,请参阅 2.0 版本文档。
+:::
+
+:::warning
+升级到 2.1 版本后,原有的审计日志插件将不可用。请参阅 **审计日志迁移** 章节查看如何迁移审计日志表数据。
+:::
 
 ### 开启插件
 
@@ -66,7 +66,7 @@ Doris 的审计日志插件是在 FE 的插件框架基础上开发的。是一
 
 审计日志表是一张动态分区表,按天分区,默认保留最近 30 天的数据。
 
-以下 3 个全局变量可以控制审计日志表的一些写入行为:
+以下全局变量可以控制审计日志表的一些写入行为:
 
 - `audit_plugin_max_batch_interval_sec`:审计日志表的最大写入间隔。默认 60 秒。
 - `audit_plugin_max_batch_bytes`:审计日志表每批次最大写入数据量。默认 50MB。
@@ -87,159 +87,72 @@ FE 配置项:
     skip_audit_user_list=user1,user2
     ```
 
-## 编译、配置和部署
-
-Doris 2.1 版本之前的用户,请参阅如下方式使用审计日志插件。
-
-### FE 配置
-
-审计日志插件框架在 Doris 中是默认开启的的,由 FE 的配置 `plugin_enable` 控制
-
-### AuditLoader 配置
-
-1. 下载 Audit Loader 插件
-
-    Audit Loader 插件在 Doris 的发行版中默认提供,通过 
[DOWNLOAD](https://doris.apache.org/zh-CN/download) 下载 Doris 安装包解压并进入目录后即可在 
extensions/audit_loader 子目录下找到 auditloader.zip 文件。
-
-2. 解压安装包
-
-    ```shell
-    unzip auditloader.zip
+## 审计日志迁移
+
+升级到 2.1 版本后,原有的审计日志插件将不可用。本小节介绍如何将原有审计日志表中的数据迁移到新的审计日志表中。
+
+1. 确认新旧审计日志表的字段信息
+
+    原有审计日志表默认情况下应为:`doris_audit_db__`.`doris_audit_log_tbl__`。
+
+    新的审计日志表为:`__internal_schema`.`audit_log`
+
+    可以通过 `DESC table_name` 命令查看两种表的字段信息是否匹配。通常情况下,旧表的字段应为新表的子集。
+
+2. 迁移审计日志表数据。
+
+    可以使用如下语句将原表中数据迁移到新表中:
+
+    ```sql
+    INSERT INTO __internal_schema.audit_log (
+    query_id         ,
+    time             ,
+    client_ip        ,
+    user             ,
+    db               ,
+    state            ,
+    error_code       ,
+    error_message    ,
+    query_time       ,
+    scan_bytes       ,
+    scan_rows        ,
+    return_rows      ,
+    stmt_id          ,
+    is_query         ,
+    frontend_ip      ,
+    cpu_time_ms      ,
+    sql_hash         ,
+    sql_digest       ,
+    peak_memory_bytes,
+    stmt
+    )
+    SELECT
+    query_id         ,
+    time             ,
+    client_ip        ,
+    user             ,
+    db               ,
+    state            ,
+    error_code       ,
+    error_message    ,
+    query_time       ,
+    scan_bytes       ,
+    scan_rows        ,
+    return_rows      ,
+    stmt_id          ,
+    is_query         ,
+    frontend_ip      ,
+    cpu_time_ms      ,
+    sql_hash         ,
+    sql_digest       ,
+    peak_memory_bytes,
+    stmt
+    FROM doris_audit_db__.doris_audit_log_tbl__;
     ```
 
-    解压生成以下文件:
-
-    * auditloader.jar:插件代码包。
-    * plugin.properties:插件属性文件。
-    * plugin.conf:插件配置文件。
-
-您可以将这个文件放置在一个 http 服务器上,或者拷贝`auditloader.zip`(或者解压`auditloader.zip`) 到所有 FE 
的指定目录下。这里我们使用后者。  
-该插件的安装可以参阅 
[INSTALL](../sql-manual/sql-statements/Database-Administration-Statements/INSTALL-PLUGIN.md)
  
-执行 install 后会自动生成 AuditLoader 目录
-
-3. 修改 plugin.conf 
-
-    以下配置可供修改:
+3. 删除原有插件
 
-    * frontend_host_port:FE 节点 IP 地址和 HTTP 端口,格式为 <fe_ip>:<fe_http_port>。默认值为 
127.0.0.1:8030。
-    * database:审计日志库名。
-    * audit_log_table:审计日志表名。
-    * slow_log_table:慢查询日志表名。
-    * enable_slow_log:是否开启慢查询日志导入功能。默认值为 false。可以在 FE 配置项中配置慢查询的阈值,参数为 
qe_slow_log_ms,默认 5s。
-    * user:集群用户名。该用户必须具有对应表的 INSERT 权限。
-    * password:集群用户密码。
-
-4. 重新打包 Audit Loader 插件
-
-    ```shell
-    zip -r -q -m auditloader.zip auditloader.jar plugin.properties plugin.conf
-    ```
-
-### 创建库表
-
-在 Doris 中,需要创建审计日志的库和表,表结构如下:
-
-若需开启慢查询日志导入功能,还需要额外创建慢表 `doris_slow_log_tbl__`,其表结构与 `doris_audit_log_tbl__` 
一致。
-
-其中 `dynamic_partition` 属性根据自己的需要,选择审计日志保留的天数。
-
-```sql
-create database doris_audit_db__;
-
-create table doris_audit_db__.doris_audit_log_tbl__
-(
-    query_id varchar(48) comment "Unique query id",
-    `time` datetime not null comment "Query start time",
-    client_ip varchar(32) comment "Client IP",
-    user varchar(64) comment "User name",
-    db varchar(96) comment "Database of this query",
-    state varchar(8) comment "Query result state. EOF, ERR, OK",
-    error_code int comment "Error code of failing query.",
-    error_message string comment "Error message of failing query.",
-    query_time bigint comment "Query execution time in millisecond",
-    scan_bytes bigint comment "Total scan bytes of this query",
-    scan_rows bigint comment "Total scan rows of this query",
-    return_rows bigint comment "Returned rows of this query",
-    stmt_id int comment "An incremental id of statement",
-    is_query tinyint comment "Is this statemt a query. 1 or 0",
-    frontend_ip varchar(32) comment "Frontend ip of executing this statement",
-    cpu_time_ms bigint comment "Total scan cpu time in millisecond of this 
query",
-    sql_hash varchar(48) comment "Hash value for this query",
-    sql_digest varchar(48) comment "Sql digest of this query, will be empty if 
not a slow query",
-    peak_memory_bytes bigint comment "Peak memory bytes used on all backends 
of this query",
-    stmt string comment "The original statement, trimed if longer than 2G"
-) engine=OLAP
-duplicate key(query_id, `time`, client_ip)
-partition by range(`time`) ()
-distributed by hash(query_id) buckets 1
-properties(
-    "dynamic_partition.time_unit" = "DAY",
-    "dynamic_partition.start" = "-30",
-    "dynamic_partition.end" = "3",
-    "dynamic_partition.prefix" = "p",
-    "dynamic_partition.buckets" = "1",
-    "dynamic_partition.enable" = "true",
-    "replication_num" = "3"
-);
-
-create table doris_audit_db__.doris_slow_log_tbl__
-(
-    query_id varchar(48) comment "Unique query id",
-    `time` datetime not null comment "Query start time",
-    client_ip varchar(32) comment "Client IP",
-    user varchar(64) comment "User name",
-    db varchar(96) comment "Database of this query",
-    state varchar(8) comment "Query result state. EOF, ERR, OK",
-    error_code int comment "Error code of failing query.",
-    error_message string comment "Error message of failing query.",
-    query_time bigint comment "Query execution time in millisecond",
-    scan_bytes bigint comment "Total scan bytes of this query",
-    scan_rows bigint comment "Total scan rows of this query",
-    return_rows bigint comment "Returned rows of this query",
-    stmt_id int comment "An incremental id of statement",
-    is_query tinyint comment "Is this statemt a query. 1 or 0",
-    frontend_ip varchar(32) comment "Frontend ip of executing this statement",
-    cpu_time_ms bigint comment "Total scan cpu time in millisecond of this 
query",
-    sql_hash varchar(48) comment "Hash value for this query",
-    sql_digest varchar(48) comment "Sql digest of a slow query",
-    peak_memory_bytes bigint comment "Peak memory bytes used on all backends 
of this query",
-    stmt string comment "The original statement, trimed if longer than 2G "
-) engine=OLAP
-duplicate key(query_id, `time`, client_ip)
-partition by range(`time`) ()
-distributed by hash(query_id) buckets 1
-properties(
-    "dynamic_partition.time_unit" = "DAY",
-    "dynamic_partition.start" = "-30",
-    "dynamic_partition.end" = "3",
-    "dynamic_partition.prefix" = "p",
-    "dynamic_partition.buckets" = "1",
-    "dynamic_partition.enable" = "true",
-    "replication_num" = "3"
-);
-```
-
->**注意**
->
-> 上面表结构中:stmt string,这个只能在 0.15 及之后版本中使用,之前版本,字段类型使用 varchar
-
-### 部署
-
-您可以将打包好的 auditloader.zip 放置在一个 http 服务器上,或者拷贝 `auditloader.zip` 到所有 FE 
的相同指定目录下。
-
-### 安装
-
-通过以下语句安装 Audit Loader 插件:
-
-```sql
-INSTALL PLUGIN FROM [source] [PROPERTIES ("key"="value", ...)]
-```
-
-详细命令参考:[INSTALL-PLUGIN](../sql-manual/sql-statements/Database-Administration-Statements/INSTALL-PLUGIN)
-
-安装成功后,可以通过 `SHOW PLUGINS` 看到已经安装的插件,并且状态为 `INSTALLED`。
-
-完成后,插件会不断的以指定的时间间隔将审计日志插入到这个表中。
+    迁移后,通过 `UNINSTALL PLUGIN AuditLoader;` 命令删除原有插件即可。
 
 ## FAQ
 
@@ -264,3 +177,4 @@ INSTALL PLUGIN FROM [source] [PROPERTIES ("key"="value", 
...)]
     - 在 Master FE 的 `fe.log` 中搜索 `AuditLoad` 字样,查看是否有相关错误日志
 
         审计日志是通过内部的 stream load 操作导入到表中的,有可能是导入流程出现了问题,这些问题会在 `fe.log` 中打印错误日志。
+
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/admin-manual/audit-plugin.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/admin-manual/audit-plugin.md
index 8f1e109b3dc..9e8e2d8c907 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/admin-manual/audit-plugin.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/admin-manual/audit-plugin.md
@@ -24,10 +24,6 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# 审计日志插件
-
-Doris 的审计日志插件是在 FE 的插件框架基础上开发的。是一个可选插件。用户可以在运行时安装或卸载这个插件。
-
 该插件可以将 FE 的审计日志定期的导入到指定的系统表中,以方便用户通过 SQL 对审计日志进行查看和分析。
 
 ## 使用审计日志插件
@@ -36,9 +32,13 @@ Doris 的审计日志插件是在 FE 的插件框架基础上开发的。是一
 
 集群启动后,会在 `__internal_schema` 库下创建名为 `audit_log` 的系统表,用于存储审计日志。
 
-> 1. 
如果是从老版本升级上来的用户,可以继续使用之前的方式。也可以卸载之前的插件,使用内置插件。但注意内置插件会将新的审计日志写入到新的表中,而不是原有的审计日志表中。
-> 
-> 2. 如果是 Doris 2.1 之前的版本,请参阅之后的 **编译、配置和部署** 章节。
+:::note
+如果是 Doris 2.1 之前的版本,请参阅 2.0 版本文档。
+:::
+
+:::warning
+升级到 2.1 版本后,原有的审计日志插件将不可用。请参阅 **审计日志迁移** 章节查看如何迁移审计日志表数据。
+:::
 
 ### 开启插件
 
@@ -66,7 +66,7 @@ Doris 的审计日志插件是在 FE 的插件框架基础上开发的。是一
 
 审计日志表是一张动态分区表,按天分区,默认保留最近 30 天的数据。
 
-以下 3 个全局变量可以控制审计日志表的一些写入行为:
+以下全局变量可以控制审计日志表的一些写入行为:
 
 - `audit_plugin_max_batch_interval_sec`:审计日志表的最大写入间隔。默认 60 秒。
 - `audit_plugin_max_batch_bytes`:审计日志表每批次最大写入数据量。默认 50MB。
@@ -87,159 +87,72 @@ FE 配置项:
     skip_audit_user_list=user1,user2
     ```
 
-## 编译、配置和部署
-
-Doris 2.1 版本之前的用户,请参阅如下方式使用审计日志插件。
-
-### FE 配置
-
-审计日志插件框架在 Doris 中是默认开启的的,由 FE 的配置 `plugin_enable` 控制
-
-### AuditLoader 配置
-
-1. 下载 Audit Loader 插件
-
-    Audit Loader 插件在 Doris 的发行版中默认提供,通过 
[DOWNLOAD](https://doris.apache.org/zh-CN/download) 下载 Doris 安装包解压并进入目录后即可在 
extensions/audit_loader 子目录下找到 auditloader.zip 文件。
-
-2. 解压安装包
-
-    ```shell
-    unzip auditloader.zip
+## 审计日志迁移
+
+升级到 2.1 版本后,原有的审计日志插件将不可用。本小节介绍如何将原有审计日志表中的数据迁移到新的审计日志表中。
+
+1. 确认新旧审计日志表的字段信息
+
+    原有审计日志表默认情况下应为:`doris_audit_db__`.`doris_audit_log_tbl__`。
+
+    新的审计日志表为:`__internal_schema`.`audit_log`
+
+    可以通过 `DESC table_name` 命令查看两种表的字段信息是否匹配。通常情况下,旧表的字段应为新表的子集。
+
+2. 迁移审计日志表数据。
+
+    可以使用如下语句将原表中数据迁移到新表中:
+
+    ```sql
+    INSERT INTO __internal_schema.audit_log (
+    query_id         ,
+    time             ,
+    client_ip        ,
+    user             ,
+    db               ,
+    state            ,
+    error_code       ,
+    error_message    ,
+    query_time       ,
+    scan_bytes       ,
+    scan_rows        ,
+    return_rows      ,
+    stmt_id          ,
+    is_query         ,
+    frontend_ip      ,
+    cpu_time_ms      ,
+    sql_hash         ,
+    sql_digest       ,
+    peak_memory_bytes,
+    stmt
+    )
+    SELECT
+    query_id         ,
+    time             ,
+    client_ip        ,
+    user             ,
+    db               ,
+    state            ,
+    error_code       ,
+    error_message    ,
+    query_time       ,
+    scan_bytes       ,
+    scan_rows        ,
+    return_rows      ,
+    stmt_id          ,
+    is_query         ,
+    frontend_ip      ,
+    cpu_time_ms      ,
+    sql_hash         ,
+    sql_digest       ,
+    peak_memory_bytes,
+    stmt
+    FROM doris_audit_db__.doris_audit_log_tbl__;
     ```
 
-    解压生成以下文件:
-
-    * auditloader.jar:插件代码包。
-    * plugin.properties:插件属性文件。
-    * plugin.conf:插件配置文件。
-
-您可以将这个文件放置在一个 http 服务器上,或者拷贝`auditloader.zip`(或者解压`auditloader.zip`) 到所有 FE 
的指定目录下。这里我们使用后者。  
-该插件的安装可以参阅 
[INSTALL](../sql-manual/sql-statements/Database-Administration-Statements/INSTALL-PLUGIN.md)
  
-执行 install 后会自动生成 AuditLoader 目录
-
-3. 修改 plugin.conf 
-
-    以下配置可供修改:
+3. 删除原有插件
 
-    * frontend_host_port:FE 节点 IP 地址和 HTTP 端口,格式为 <fe_ip>:<fe_http_port>。默认值为 
127.0.0.1:8030。
-    * database:审计日志库名。
-    * audit_log_table:审计日志表名。
-    * slow_log_table:慢查询日志表名。
-    * enable_slow_log:是否开启慢查询日志导入功能。默认值为 false。可以在 FE 配置项中配置慢查询的阈值,参数为 
qe_slow_log_ms,默认 5s。
-    * user:集群用户名。该用户必须具有对应表的 INSERT 权限。
-    * password:集群用户密码。
-
-4. 重新打包 Audit Loader 插件
-
-    ```shell
-    zip -r -q -m auditloader.zip auditloader.jar plugin.properties plugin.conf
-    ```
-
-### 创建库表
-
-在 Doris 中,需要创建审计日志的库和表,表结构如下:
-
-若需开启慢查询日志导入功能,还需要额外创建慢表 `doris_slow_log_tbl__`,其表结构与 `doris_audit_log_tbl__` 
一致。
-
-其中 `dynamic_partition` 属性根据自己的需要,选择审计日志保留的天数。
-
-```sql
-create database doris_audit_db__;
-
-create table doris_audit_db__.doris_audit_log_tbl__
-(
-    query_id varchar(48) comment "Unique query id",
-    `time` datetime not null comment "Query start time",
-    client_ip varchar(32) comment "Client IP",
-    user varchar(64) comment "User name",
-    db varchar(96) comment "Database of this query",
-    state varchar(8) comment "Query result state. EOF, ERR, OK",
-    error_code int comment "Error code of failing query.",
-    error_message string comment "Error message of failing query.",
-    query_time bigint comment "Query execution time in millisecond",
-    scan_bytes bigint comment "Total scan bytes of this query",
-    scan_rows bigint comment "Total scan rows of this query",
-    return_rows bigint comment "Returned rows of this query",
-    stmt_id int comment "An incremental id of statement",
-    is_query tinyint comment "Is this statemt a query. 1 or 0",
-    frontend_ip varchar(32) comment "Frontend ip of executing this statement",
-    cpu_time_ms bigint comment "Total scan cpu time in millisecond of this 
query",
-    sql_hash varchar(48) comment "Hash value for this query",
-    sql_digest varchar(48) comment "Sql digest of this query, will be empty if 
not a slow query",
-    peak_memory_bytes bigint comment "Peak memory bytes used on all backends 
of this query",
-    stmt string comment "The original statement, trimed if longer than 2G"
-) engine=OLAP
-duplicate key(query_id, `time`, client_ip)
-partition by range(`time`) ()
-distributed by hash(query_id) buckets 1
-properties(
-    "dynamic_partition.time_unit" = "DAY",
-    "dynamic_partition.start" = "-30",
-    "dynamic_partition.end" = "3",
-    "dynamic_partition.prefix" = "p",
-    "dynamic_partition.buckets" = "1",
-    "dynamic_partition.enable" = "true",
-    "replication_num" = "3"
-);
-
-create table doris_audit_db__.doris_slow_log_tbl__
-(
-    query_id varchar(48) comment "Unique query id",
-    `time` datetime not null comment "Query start time",
-    client_ip varchar(32) comment "Client IP",
-    user varchar(64) comment "User name",
-    db varchar(96) comment "Database of this query",
-    state varchar(8) comment "Query result state. EOF, ERR, OK",
-    error_code int comment "Error code of failing query.",
-    error_message string comment "Error message of failing query.",
-    query_time bigint comment "Query execution time in millisecond",
-    scan_bytes bigint comment "Total scan bytes of this query",
-    scan_rows bigint comment "Total scan rows of this query",
-    return_rows bigint comment "Returned rows of this query",
-    stmt_id int comment "An incremental id of statement",
-    is_query tinyint comment "Is this statemt a query. 1 or 0",
-    frontend_ip varchar(32) comment "Frontend ip of executing this statement",
-    cpu_time_ms bigint comment "Total scan cpu time in millisecond of this 
query",
-    sql_hash varchar(48) comment "Hash value for this query",
-    sql_digest varchar(48) comment "Sql digest of a slow query",
-    peak_memory_bytes bigint comment "Peak memory bytes used on all backends 
of this query",
-    stmt string comment "The original statement, trimed if longer than 2G "
-) engine=OLAP
-duplicate key(query_id, `time`, client_ip)
-partition by range(`time`) ()
-distributed by hash(query_id) buckets 1
-properties(
-    "dynamic_partition.time_unit" = "DAY",
-    "dynamic_partition.start" = "-30",
-    "dynamic_partition.end" = "3",
-    "dynamic_partition.prefix" = "p",
-    "dynamic_partition.buckets" = "1",
-    "dynamic_partition.enable" = "true",
-    "replication_num" = "3"
-);
-```
-
->**注意**
->
-> 上面表结构中:stmt string,这个只能在 0.15 及之后版本中使用,之前版本,字段类型使用 varchar
-
-### 部署
-
-您可以将打包好的 auditloader.zip 放置在一个 http 服务器上,或者拷贝 `auditloader.zip` 到所有 FE 
的相同指定目录下。
-
-### 安装
-
-通过以下语句安装 Audit Loader 插件:
-
-```sql
-INSTALL PLUGIN FROM [source] [PROPERTIES ("key"="value", ...)]
-```
-
-详细命令参考:[INSTALL-PLUGIN](../sql-manual/sql-statements/Database-Administration-Statements/INSTALL-PLUGIN)
-
-安装成功后,可以通过 `SHOW PLUGINS` 看到已经安装的插件,并且状态为 `INSTALLED`。
-
-完成后,插件会不断的以指定的时间间隔将审计日志插入到这个表中。
+    迁移后,通过 `UNINSTALL PLUGIN AuditLoader;` 命令删除原有插件即可。
 
 ## FAQ
 
@@ -264,3 +177,4 @@ INSTALL PLUGIN FROM [source] [PROPERTIES ("key"="value", 
...)]
     - 在 Master FE 的 `fe.log` 中搜索 `AuditLoad` 字样,查看是否有相关错误日志
 
         审计日志是通过内部的 stream load 操作导入到表中的,有可能是导入流程出现了问题,这些问题会在 `fe.log` 中打印错误日志。
+
diff --git a/versioned_docs/version-2.0/admin-manual/audit-plugin.md 
b/versioned_docs/version-2.0/admin-manual/audit-plugin.md
index d80e7f733d8..ff20ea24158 100644
--- a/versioned_docs/version-2.0/admin-manual/audit-plugin.md
+++ b/versioned_docs/version-2.0/admin-manual/audit-plugin.md
@@ -1,6 +1,6 @@
 ---
 {
-    "title": "Using Audit Log Plugin",
+    "title": "Audit Log Plugin",
     "language": "en"
 }
 ---
@@ -24,8 +24,6 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# Audit Log Plugin
-
 Doris's audit log plugin was developed based on FE's plugin framework. Is an 
optional plugin. Users can install or uninstall this plugin at runtime.
 
 This plugin can periodically import the FE audit log into the specified Doris 
cluster, so that users can easily view and analyze the audit log through SQL.
@@ -156,9 +154,9 @@ properties(
 );
 ```
 
->**Notice**
->
-> In the above table structure: stmt string, this can only be used in 0.15 and 
later versions, in previous versions, the field type used varchar
+:::caution
+In the above table structure, the `stmt` column uses type `string`, which is only available in version 0.15 and later; in earlier versions, use `varchar` for this field.
+:::
 
 ### Deployment
 
@@ -172,7 +170,7 @@ You can place the packaged auditloader.zip on an http 
server, or copy `auditload
 INSTALL PLUGIN FROM [source] [PROPERTIES ("key"="value", ...)]
 ```
 
-Detailed command reference: 
[INSTALL-PLUGIN.md](../sql-manual/sql-reference/Database-Administration-Statements/INSTALL-PLUGIN)
+Detailed command reference: 
[INSTALL](../sql-manual/sql-reference/Database-Administration-Statements/INSTALL-PLUGIN.md)
 
 After successful installation, you can see the installed plug-ins through 
`SHOW PLUGINS`, and the status is `INSTALLED`.
 
diff --git a/versioned_docs/version-2.1/admin-manual/audit-plugin.md 
b/versioned_docs/version-2.1/admin-manual/audit-plugin.md
index e766379e2f2..b1399800c7f 100644
--- a/versioned_docs/version-2.1/admin-manual/audit-plugin.md
+++ b/versioned_docs/version-2.1/admin-manual/audit-plugin.md
@@ -1,6 +1,6 @@
 ---
 {
-    "title": "Using Audit Log Plugin",
+    "title": "Audit Log Plugin",
     "language": "en"
 }
 ---
@@ -24,34 +24,35 @@ specific language governing permissions and limitations
 under the License.
 -->
 
+This plugin can regularly import the audit logs of FE into a specified system 
table, making it convenient for users to view and analyze audit logs through 
SQL.
 
-Doris's audit log plugin was developed based on FE's plugin framework. Is an 
optional plugin. Users can install or uninstall this plugin at runtime.
+## Using the Audit Log Plugin
 
-This plugin can periodically import the FE audit log into the specified system 
table, so that users can easily view and analyze the audit log through SQL.
+Starting from Doris 2.1, the audit log plugin is integrated directly into the 
Doris kernel as a built-in plugin. Users do not need to install the plugin 
separately.
 
-## Use the audit log plug-in
+After the cluster starts, a system table named `audit_log` will be created 
under the `__internal_schema` database to store audit logs.
 
-Starting from Doris version 2.1, the audit log plug-in is directly integrated 
into the Doris as a built-in plug-in. Users do not need to install additional 
plug-ins.
+:::note
+For versions prior to Doris 2.1, please refer to the 2.0 version documentation.
+:::
 
-After the cluster is started, a system table named `audit_log` will be created 
under the `__internal_schema` database to store audit logs.
+:::warning
+After upgrading to version 2.1, the original audit log plugin will be 
unavailable. Refer to the **Audit Log Migration** section to see how to migrate 
audit log table data.
+:::
 
-> 1. If you upgrade from an old version, you can continue to use the previous 
plug-in. You can also uninstall the previous plug-in and use the new built-in 
plug-in. But note that the built-in plug-in will write the new audit log to a 
new table instead of the original audit log table.
->
-> 2. If it is a version before Doris 2.1, please refer to the following 
**Compilation, Configuration and Deployment** chapters.
+### Enabling the Plugin
 
-### Enable plug-in
-
-The audit log plug-in can be turned on or off at any time through the global 
variable `enable_audit_plugin` (the default is off), such as:
+The audit log plugin can be enabled or disabled at any time using the global 
variable `enable_audit_plugin` (default is disabled), for example:
 
 `set global enable_audit_plugin = true;`
 
-After it is enabled, Doris will write the audit log after it is enabled to the 
`audit_log` table.
+Once enabled, Doris will write subsequent audit logs to the `audit_log` table.
 
-The audit log plugin can be turned off at any time:
+The audit log plugin can be disabled at any time:
 
 `set global enable_audit_plugin = false;`
 
-After disable, Doris will stop writing to the `audit_log` table. Audit logs 
that have been written will not change.
+After disabling, Doris will stop writing to the `audit_log` table. The 
existing audit logs will not be affected.
 
 ### Audit log table
 
@@ -61,11 +62,11 @@ Starting from version 2.1.8 and 3.0.3, the `audit_log` 
system table will automat
 
 In previous versions, users need to manually add fields to the `audit_log` 
system table through the `ALTER TABLE` command.
 
-### Related configuration
+### Related configurations
 
-The audit log table is a dynamic partitioned table, partitioned by day, and 
retains the data of the last 30 days by default.
+The audit log table is a dynamic partitioned table, partitioned by day, and 
retains data for the last 30 days by default.
 
-The following 3 global variables can control some writing behaviors of the 
audit log table:
+The following global variables can control some writing behaviors of the audit 
log table:
 
 - `audit_plugin_max_batch_interval_sec`: The maximum write interval for the 
audit log table. Default 60 seconds.
 - `audit_plugin_max_batch_bytes`: The maximum amount of data written in each 
batch of the audit log table. Default 50MB.
@@ -92,172 +93,107 @@ FE configurations:
 
 The audit log plug-in framework is enabled by default in Doris and is 
controlled by the FE configuration `plugin_enable`
 
-### AuditLoader Configuration
+These can be set using `set global xxx=yyy`.
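
To reason about how the two batch thresholds above interact, here is a small sketch (hypothetical illustration, not Doris source code) of the flush decision a batching writer would make, using the documented defaults:

```python
def should_flush(buffered_bytes: int, seconds_since_flush: float,
                 max_batch_bytes: int = 50 * 1024 * 1024,  # audit_plugin_max_batch_bytes default (50MB)
                 max_interval_sec: int = 60) -> bool:      # audit_plugin_max_batch_interval_sec default
    """Flush the buffered audit rows when either threshold is reached."""
    return (buffered_bytes >= max_batch_bytes
            or seconds_since_flush >= max_interval_sec)

# A small batch still flushes once the 60-second interval elapses,
# even if it is far below the 50MB size limit.
print(should_flush(1024, 61))   # True: interval threshold reached
print(should_flush(1024, 10))   # False: neither threshold reached
```

Lowering either value makes audit logs visible in the table sooner at the cost of more frequent internal loads.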
 
-1. Download the Audit Loader plugin
+FE Configuration:
 
-   The Audit Loader plug-in is provided by default in the Doris distribution. 
After downloading the Doris installation package through 
[DOWNLOAD](https://doris.apache.org/download), decompress it and enter the 
directory, you can find the auditloader.zip file in the extensionsaudit_loader 
subdirectory.
+- `skip_audit_user_list` (supported since 3.0.1)
 
-2. Unzip the installation package
+    If you do not want the operations of certain users to be recorded in the 
audit log, you can modify this configuration.
 
-    ```shell
-    unzip auditloader.zip
     ```
-
-    Unzip and generate the following files:
-
-    * auditloader.jar: plug-in code package.
-    * plugin.properties: plugin properties file.
-    * plugin.conf: plugin configuration file.
-
-You can place this file on an http download server or copy(or unzip) it to the 
specified directory of all FEs. Here we use the latter.  
-The installation of this plugin can be found in 
[INSTALL](../sql-manual/sql-statements/Database-Administration-Statements/INSTALL-PLUGIN.md)
  
-After executing install, the AuditLoader directory will be automatically 
generated.
-
-3. Modify plugin.conf
-
-   The following configurations are available for modification:
-
-    * frontend_host_port: FE node IP address and HTTP port in the format 
<fe_ip>:<fe_http_port>. The default value is 127.0.0.1:8030.
-    * database: Audit log database name.
-    * audit_log_table: Audit log table name.
-    * slow_log_table: Slow query log table name.
-    * enable_slow_log: Whether to enable the slow query log import function. 
The default value is false. You can set the slow query threshold in the FE 
configuration item. The parameter is qe_slow_log_ms and the default value is 5s.
-    * user: Cluster username. The user must have INSERT permission on the 
corresponding table.
-    * password: Cluster user password.
-
-4. Repackaging the Audit Loader plugin
-
-    ```shell
-    zip -r -q -m auditloader.zip auditloader.jar plugin.properties plugin.conf
+    skip_audit_user_list=root
+    -- or
+    skip_audit_user_list=user1,user2
     ```
 
-### Create Audit Table
-
-In Doris, you need to create the library and table of the audit log. The table 
structure is as follows:
-
-If you need to enable the slow query log import function, you need to create 
an additional slow table `doris_slow_log_tbl__`, whose table structure is 
consistent with `doris_audit_log_tbl__`.
-
-Among them, the `dynamic_partition` attribute selects the number of days for 
audit log retention according to your own needs.
-
-```sql
-create database doris_audit_db__;
-
-create table doris_audit_db__.doris_audit_log_tbl__
-(
-    query_id varchar(48) comment "Unique query id",
-    `time` datetime not null comment "Query start time",
-    client_ip varchar(32) comment "Client IP",
-    user varchar(64) comment "User name",
-    db varchar(96) comment "Database of this query",
-    state varchar(8) comment "Query result state. EOF, ERR, OK",
-    error_code int comment "Error code of failing query.",
-    error_message string comment "Error message of failing query.",
-    query_time bigint comment "Query execution time in millisecond",
-    scan_bytes bigint comment "Total scan bytes of this query",
-    scan_rows bigint comment "Total scan rows of this query",
-    return_rows bigint comment "Returned rows of this query",
-    stmt_id int comment "An incremental id of statement",
-    is_query tinyint comment "Is this statemt a query. 1 or 0",
-    frontend_ip varchar(32) comment "Frontend ip of executing this statement",
-    cpu_time_ms bigint comment "Total scan cpu time in millisecond of this 
query",
-    sql_hash varchar(48) comment "Hash value for this query",
-    sql_digest varchar(48) comment "Sql digest of this query, will be empty if 
not a slow query",
-    peak_memory_bytes bigint comment "Peak memory bytes used on all backends 
of this query",
-    stmt string comment "The original statement, trimed if longer than 2G"
-) engine=OLAP
-duplicate key(query_id, `time`, client_ip)
-partition by range(`time`) ()
-distributed by hash(query_id) buckets 1
-properties(
-    "dynamic_partition.time_unit" = "DAY",
-    "dynamic_partition.start" = "-30",
-    "dynamic_partition.end" = "3",
-    "dynamic_partition.prefix" = "p",
-    "dynamic_partition.buckets" = "1",
-    "dynamic_partition.enable" = "true",
-    "replication_num" = "3"
-);
-
-create table doris_audit_db__.doris_slow_log_tbl__
-(
-    query_id varchar(48) comment "Unique query id",
-    `time` datetime not null comment "Query start time",
-    client_ip varchar(32) comment "Client IP",
-    user varchar(64) comment "User name",
-    db varchar(96) comment "Database of this query",
-    state varchar(8) comment "Query result state. EOF, ERR, OK",
-    error_code int comment "Error code of failing query.",
-    error_message string comment "Error message of failing query.",
-    query_time bigint comment "Query execution time in millisecond",
-    scan_bytes bigint comment "Total scan bytes of this query",
-    scan_rows bigint comment "Total scan rows of this query",
-    return_rows bigint comment "Returned rows of this query",
-    stmt_id int comment "An incremental id of statement",
-    is_query tinyint comment "Is this statemt a query. 1 or 0",
-    frontend_ip varchar(32) comment "Frontend ip of executing this statement",
-    cpu_time_ms bigint comment "Total scan cpu time in millisecond of this 
query",
-    sql_hash varchar(48) comment "Hash value for this query",
-    sql_digest varchar(48) comment "Sql digest of a slow query",
-    peak_memory_bytes bigint comment "Peak memory bytes used on all backends 
of this query",
-    stmt string comment "The original statement, trimed if longer than 2G "
-) engine=OLAP
-duplicate key(query_id, `time`, client_ip)
-partition by range(`time`) ()
-distributed by hash(query_id) buckets 1
-properties(
-    "dynamic_partition.time_unit" = "DAY",
-    "dynamic_partition.start" = "-30",
-    "dynamic_partition.end" = "3",
-    "dynamic_partition.prefix" = "p",
-    "dynamic_partition.buckets" = "1",
-    "dynamic_partition.enable" = "true",
-    "replication_num" = "3"
-);
-```
-
->**Notice**
->
-> In the above table structure: stmt string, this can only be used in 0.15 and 
later versions, in previous versions, the field type used varchar
-
-### Deployment
-
-You can place the packaged auditloader.zip on an http server, or copy 
`auditloader.zip` to the same specified directory in all FEs.
-
-### Installation
-
-Install the audit loader plugin:
-
-```sql
-INSTALL PLUGIN FROM [source] [PROPERTIES ("key"="value", ...)]
-```
-
-Detailed command reference: 
[INSTALL-PLUGIN.md](../sql-manual/sql-statements/Database-Administration-Statements/INSTALL-PLUGIN)
-
-After successful installation, you can see the installed plug-ins through 
`SHOW PLUGINS`, and the status is `INSTALLED`.
-
-After completion, the plugin will continuously insert audit logs into this 
table at specified intervals.
-
-## FAQ
-
-1. There is no data in the audit log table, or no new data is imported after 
running for a period of time.
+## Audit Log Migration
+
+After upgrading to version 2.1, the original audit log plugin will be 
unavailable. This section explains how to migrate data from the original audit 
log table to the new audit log table.
+
+1. Confirm the field information of the old and new audit log tables.
+
+    By default, the original audit log table should be `doris_audit_db__`.`doris_audit_log_tbl__`.
+    
+    The new audit log table is `__internal_schema`.`audit_log`.
+    
+    You can check if the field information of the two tables matches by using 
the `DESC table_name` command. Typically, the fields of the old table should be 
a subset of the new table.
+
+2. Migrate Audit Log Table Data
+
+    You can use the following statement to migrate data from the original 
table to the new table:
+    
+    ```sql
+    INSERT INTO __internal_schema.audit_log (
+        query_id         ,
+        time             ,
+        client_ip        ,
+        user             ,
+        db               ,
+        state            ,
+        error_code       ,
+        error_message    ,
+        query_time       ,
+        scan_bytes       ,
+        scan_rows        ,
+        return_rows      ,
+        stmt_id          ,
+        is_query         ,
+        frontend_ip      ,
+        cpu_time_ms      ,
+        sql_hash         ,
+        sql_digest       ,
+        peak_memory_bytes,
+        stmt
+        )
+        SELECT
+        query_id         ,
+        time             ,
+        client_ip        ,
+        user             ,
+        db               ,
+        state            ,
+        error_code       ,
+        error_message    ,
+        query_time       ,
+        scan_bytes       ,
+        scan_rows        ,
+        return_rows      ,
+        stmt_id          ,
+        is_query         ,
+        frontend_ip      ,
+        cpu_time_ms      ,
+        sql_hash         ,
+        sql_digest       ,
+        peak_memory_bytes,
+        stmt
+        FROM doris_audit_db__.doris_audit_log_tbl__;
+    ```
 
-     You can check by following these steps:
+3. Remove Original Plugin
 
-     - Check whether the partition was created normally
+    After migration, you can remove the original plugin by using the 
`UNINSTALL PLUGIN AuditLoader;` command.
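
Since the old table's columns should be a subset of the new table's, the two column lists in the migration statement must stay identical. A small Python sketch (a hypothetical helper, not part of Doris) that generates the `INSERT ... SELECT` from a single column list so the lists cannot drift apart:

```python
# Columns shared by the old and new audit log tables, as listed in the
# migration statement shown in this document.
COLUMNS = [
    "query_id", "time", "client_ip", "user", "db", "state",
    "error_code", "error_message", "query_time", "scan_bytes",
    "scan_rows", "return_rows", "stmt_id", "is_query", "frontend_ip",
    "cpu_time_ms", "sql_hash", "sql_digest", "peak_memory_bytes", "stmt",
]

def migration_sql(src: str = "doris_audit_db__.doris_audit_log_tbl__",
                  dst: str = "__internal_schema.audit_log") -> str:
    """Build the INSERT ... SELECT with both column lists from one source."""
    cols = ", ".join(COLUMNS)
    return f"INSERT INTO {dst} ({cols}) SELECT {cols} FROM {src};"

print(migration_sql())
```

The generated statement can then be executed against the cluster with any MySQL-protocol client.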
 
-         The audit log table is a dynamic partition table, partitioned by day. 
By default, partitions for the next 3 days will be created and partitions for 
the past 30 days will be retained. Only after the partition is created 
correctly can the audit log be imported normally.
+## FAQ
 
-         You can use `show dynamic partition tables from __internal_schema` to 
check the scheduling of dynamic partitions and troubleshoot according to the 
cause of the error. Possible reasons for the error include:
+1. No data in the audit log table, or no new data is being ingested after 
running for a period of time
 
-         - The number of BE nodes is less than the required number of 
replicas: the audit log table has 3 replicas by default, so at least 3 BE nodes 
are required. Or modify the number of replicas through the `alter table` 
statement, such as:
+    You can troubleshoot by following these steps:
+    
+    - Check if partitions are created correctly
 
-             `alter table __internal_schema.audit_log set 
("dynamic_partition.replication_num" = "2")`
+        The audit log table is a dynamic partition table partitioned by day. 
By default, it creates partitions for the next 3 days and retains partitions 
for the past 30 days. Audit logs can be imported normally only after the 
partitions are created correctly.
 
-         - No suitable storage medium: You can view the `storage_medium` 
attribute through `show create table __internal_schema.audit_log`. If BE does 
not have a corresponding storage medium, the partition creation may fail.
+        You can check the scheduling status of dynamic partitions by using 
`show dynamic partition tables from __internal_schema` and troubleshoot based 
on the reported error. Possible causes include:
 
-         - No suitable resource group: The audit log table defaults to the 
`default` resource group. You can use the `show backends` command to check 
whether the resource has sufficient node resources.
+        - The number of BE nodes is less than the required replica count: The 
audit log table defaults to 3 replicas, so at least 3 BE nodes are required. 
Alternatively, you can reduce the replica count using the `alter table` 
statement, for example:
+        
+            `alter table __internal_schema.audit_log set 
("dynamic_partition.replication_num" = "2")`
+        
+        - No suitable storage medium: You can check the `storage_medium` 
property by using `show create table __internal_schema.audit_log`. If there is 
no corresponding storage medium on the BE, partition creation may fail.
+        
+        - No suitable resource group: The audit log table defaults to the 
`default` resource group. You can check whether the resource group has enough 
node resources by using the `show backends` command.
 
-     - Search for the word `AuditLoad` in Master FE's `fe.log` to see if there 
are related error logs
+    - Search for `AuditLoad` in the `fe.log` on the Master FE to see if there 
are any related error logs
 
-         The audit log is imported into the table through the internal stream 
load operation. There may be problems with the import process. These problems 
will print error logs in `fe.log`.
+        The audit log is imported into the table through internal stream load 
operations. If there are issues with the import process, error logs will be 
printed in the `fe.log`.
diff --git a/versioned_docs/version-3.0/admin-manual/audit-plugin.md 
b/versioned_docs/version-3.0/admin-manual/audit-plugin.md
index e766379e2f2..9f60c93ef2f 100644
--- a/versioned_docs/version-3.0/admin-manual/audit-plugin.md
+++ b/versioned_docs/version-3.0/admin-manual/audit-plugin.md
@@ -1,6 +1,6 @@
 ---
 {
-    "title": "Using Audit Log Plugin",
+    "title": "Audit Log Plugin",
     "language": "en"
 }
 ---
@@ -24,34 +24,35 @@ specific language governing permissions and limitations
 under the License.
 -->
 
+This plugin can regularly import the audit logs of FE into a specified system 
table, making it convenient for users to view and analyze audit logs through 
SQL.
 
-Doris's audit log plugin was developed based on FE's plugin framework. Is an 
optional plugin. Users can install or uninstall this plugin at runtime.
+## Using the Audit Log Plugin
 
-This plugin can periodically import the FE audit log into the specified system 
table, so that users can easily view and analyze the audit log through SQL.
+Starting from Doris 2.1, the audit log plugin is integrated directly into the 
Doris kernel as a built-in plugin. Users do not need to install the plugin 
separately.
 
-## Use the audit log plug-in
+After the cluster starts, a system table named `audit_log` will be created 
under the `__internal_schema` database to store audit logs.
 
-Starting from Doris version 2.1, the audit log plug-in is directly integrated 
into the Doris as a built-in plug-in. Users do not need to install additional 
plug-ins.
+:::note
+For versions prior to Doris 2.1, please refer to the 2.0 version documentation.
+:::
 
-After the cluster is started, a system table named `audit_log` will be created 
under the `__internal_schema` database to store audit logs.
+:::warning
+After upgrading to version 2.1, the original audit log plugin will be 
unavailable. Refer to the **Audit Log Migration** section to see how to migrate 
audit log table data.
+:::
 
-> 1. If you upgrade from an old version, you can continue to use the previous 
plug-in. You can also uninstall the previous plug-in and use the new built-in 
plug-in. But note that the built-in plug-in will write the new audit log to a 
new table instead of the original audit log table.
->
-> 2. If it is a version before Doris 2.1, please refer to the following 
**Compilation, Configuration and Deployment** chapters.
+### Enabling the Plugin
 
-### Enable plug-in
-
-The audit log plug-in can be turned on or off at any time through the global 
variable `enable_audit_plugin` (the default is off), such as:
+The audit log plugin can be enabled or disabled at any time using the global 
variable `enable_audit_plugin` (default is disabled), for example:
 
 `set global enable_audit_plugin = true;`
 
-After it is enabled, Doris will write the audit log after it is enabled to the 
`audit_log` table.
+Once enabled, Doris will write all subsequent audit logs to the `audit_log` table.
 
-The audit log plugin can be turned off at any time:
+The audit log plugin can be disabled at any time:
 
 `set global enable_audit_plugin = false;`
 
-After disable, Doris will stop writing to the `audit_log` table. Audit logs 
that have been written will not change.
+After disabling, Doris will stop writing to the `audit_log` table. The 
existing audit logs will not be affected.
 
 ### Audit log table
 
@@ -61,24 +62,24 @@ Starting from version 2.1.8 and 3.0.3, the `audit_log` 
system table will automat
 
 In previous versions, users need to manually add fields to the `audit_log` 
system table through the `ALTER TABLE` command.
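+
+For such older versions, a manual schema change can be sketched as follows. 
+The column name `new_field` and its type are purely illustrative, not a 
+specific field Doris requires:
+
+```sql
+-- Illustrative only: add a missing audit field to the system table manually.
+ALTER TABLE __internal_schema.audit_log ADD COLUMN new_field VARCHAR(64) COMMENT "illustrative field";
+```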
 
-### Related configuration
+### Related configurations
 
-The audit log table is a dynamic partitioned table, partitioned by day, and 
retains the data of the last 30 days by default.
+The audit log table is a dynamic partitioned table, partitioned by day, and 
retains data for the last 30 days by default.
 
-The following 3 global variables can control some writing behaviors of the 
audit log table:
+The following global variables can control some writing behaviors of the audit 
log table:
 
-- `audit_plugin_max_batch_interval_sec`: The maximum write interval for the 
audit log table. Default 60 seconds.
-- `audit_plugin_max_batch_bytes`: The maximum amount of data written in each 
batch of the audit log table. Default 50MB.
-- `audit_plugin_max_sql_length`: The maximum length of statements recorded in 
the audit log table. Default 4096.
-- `audit_plugin_load_timeout`: The default timeout of audit log load job. 
Default 600 seconds.
+- `audit_plugin_max_batch_interval_sec`: The maximum write interval for the 
audit log table. Default is 60 seconds.
+- `audit_plugin_max_batch_bytes`: The maximum amount of data written per batch 
to the audit log table. Default is 50MB.
+- `audit_plugin_max_sql_length`: The maximum length of statements recorded in 
the audit log table. Default is 4096.
+- `audit_plugin_load_timeout`: Default timeout for the audit log import job. 
Default is 600 seconds.
 
-Can be set via `set global xxx=yyy`.
+These can be set using `set global xxx=yyy`.
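+
+For example, to lengthen the maximum write interval (the value 120 is purely 
+illustrative):
+
+```sql
+set global audit_plugin_max_batch_interval_sec = 120;
+```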
 
-FE configurations:
+FE Configuration:
 
-- `skip_audit_user_list` (Since 3.0.1)
+- `skip_audit_user_list` (supported since 3.0.1)
 
-    If you do not want certain users' operations to be recorded in the audit 
log, you can modify this configuration.
+    If you do not want the operations of certain users to be recorded in the 
audit log, you can modify this configuration.
 
     ```
     skip_audit_user_list=root
@@ -86,178 +87,93 @@ FE configurations:
     skip_audit_user_list=user1,user2
     ```
 
-## Compilation, Configuration and Deployment
-
-### FE Configuration
-
-The audit log plug-in framework is enabled by default in Doris and is 
controlled by the FE configuration `plugin_enable`
-
-### AuditLoader Configuration
-
-1. Download the Audit Loader plugin
-
-   The Audit Loader plug-in is provided by default in the Doris distribution. 
After downloading the Doris installation package through 
[DOWNLOAD](https://doris.apache.org/download), decompress it and enter the 
directory, you can find the auditloader.zip file in the extensionsaudit_loader 
subdirectory.
-
-2. Unzip the installation package
-
-    ```shell
-    unzip auditloader.zip
+## Audit Log Migration
+
+After upgrading to version 2.1, the original audit log plugin will be 
unavailable. This section explains how to migrate data from the original audit 
log table to the new audit log table.
+
+1. Confirm the field information of the old and new audit log tables.
+
+    The default audit log table should be 
`doris_audit_db__`.`doris_audit_log_tbl__`.
+    
+    The new audit log table is `__internal_schema`.`audit_log`.
+    
+    You can use the `DESC table_name` command to check whether the field 
information of the two tables matches. Typically, the fields of the old table 
should be a subset of those of the new table.
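+
+    For example, to compare the two schemas:
+
+    ```sql
+    DESC doris_audit_db__.doris_audit_log_tbl__;
+    DESC __internal_schema.audit_log;
+    ```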
+
+2. Migrate Audit Log Table Data
+
+    You can use the following statement to migrate data from the original 
table to the new table:
+    
+    ```sql
+    INSERT INTO __internal_schema.audit_log (
+        query_id         ,
+        time             ,
+        client_ip        ,
+        user             ,
+        db               ,
+        state            ,
+        error_code       ,
+        error_message    ,
+        query_time       ,
+        scan_bytes       ,
+        scan_rows        ,
+        return_rows      ,
+        stmt_id          ,
+        is_query         ,
+        frontend_ip      ,
+        cpu_time_ms      ,
+        sql_hash         ,
+        sql_digest       ,
+        peak_memory_bytes,
+        stmt
+        )
+        SELECT
+        query_id         ,
+        time             ,
+        client_ip        ,
+        user             ,
+        db               ,
+        state            ,
+        error_code       ,
+        error_message    ,
+        query_time       ,
+        scan_bytes       ,
+        scan_rows        ,
+        return_rows      ,
+        stmt_id          ,
+        is_query         ,
+        frontend_ip      ,
+        cpu_time_ms      ,
+        sql_hash         ,
+        sql_digest       ,
+        peak_memory_bytes,
+        stmt
+        FROM doris_audit_db__.doris_audit_log_tbl__;
     ```
 
-    Unzip and generate the following files:
-
-    * auditloader.jar: plug-in code package.
-    * plugin.properties: plugin properties file.
-    * plugin.conf: plugin configuration file.
-
-You can place this file on an http download server or copy(or unzip) it to the 
specified directory of all FEs. Here we use the latter.  
-The installation of this plugin can be found in 
[INSTALL](../sql-manual/sql-statements/Database-Administration-Statements/INSTALL-PLUGIN.md)
  
-After executing install, the AuditLoader directory will be automatically 
generated.
+3. Remove Original Plugin
 
-3. Modify plugin.conf
-
-   The following configurations are available for modification:
-
-    * frontend_host_port: FE node IP address and HTTP port in the format 
<fe_ip>:<fe_http_port>. The default value is 127.0.0.1:8030.
-    * database: Audit log database name.
-    * audit_log_table: Audit log table name.
-    * slow_log_table: Slow query log table name.
-    * enable_slow_log: Whether to enable the slow query log import function. 
The default value is false. You can set the slow query threshold in the FE 
configuration item. The parameter is qe_slow_log_ms and the default value is 5s.
-    * user: Cluster username. The user must have INSERT permission on the 
corresponding table.
-    * password: Cluster user password.
-
-4. Repackaging the Audit Loader plugin
-
-    ```shell
-    zip -r -q -m auditloader.zip auditloader.jar plugin.properties plugin.conf
-    ```
-
-### Create Audit Table
-
-In Doris, you need to create the library and table of the audit log. The table 
structure is as follows:
-
-If you need to enable the slow query log import function, you need to create 
an additional slow table `doris_slow_log_tbl__`, whose table structure is 
consistent with `doris_audit_log_tbl__`.
-
-Among them, the `dynamic_partition` attribute selects the number of days for 
audit log retention according to your own needs.
-
-```sql
-create database doris_audit_db__;
-
-create table doris_audit_db__.doris_audit_log_tbl__
-(
-    query_id varchar(48) comment "Unique query id",
-    `time` datetime not null comment "Query start time",
-    client_ip varchar(32) comment "Client IP",
-    user varchar(64) comment "User name",
-    db varchar(96) comment "Database of this query",
-    state varchar(8) comment "Query result state. EOF, ERR, OK",
-    error_code int comment "Error code of failing query.",
-    error_message string comment "Error message of failing query.",
-    query_time bigint comment "Query execution time in millisecond",
-    scan_bytes bigint comment "Total scan bytes of this query",
-    scan_rows bigint comment "Total scan rows of this query",
-    return_rows bigint comment "Returned rows of this query",
-    stmt_id int comment "An incremental id of statement",
-    is_query tinyint comment "Is this statemt a query. 1 or 0",
-    frontend_ip varchar(32) comment "Frontend ip of executing this statement",
-    cpu_time_ms bigint comment "Total scan cpu time in millisecond of this 
query",
-    sql_hash varchar(48) comment "Hash value for this query",
-    sql_digest varchar(48) comment "Sql digest of this query, will be empty if 
not a slow query",
-    peak_memory_bytes bigint comment "Peak memory bytes used on all backends 
of this query",
-    stmt string comment "The original statement, trimed if longer than 2G"
-) engine=OLAP
-duplicate key(query_id, `time`, client_ip)
-partition by range(`time`) ()
-distributed by hash(query_id) buckets 1
-properties(
-    "dynamic_partition.time_unit" = "DAY",
-    "dynamic_partition.start" = "-30",
-    "dynamic_partition.end" = "3",
-    "dynamic_partition.prefix" = "p",
-    "dynamic_partition.buckets" = "1",
-    "dynamic_partition.enable" = "true",
-    "replication_num" = "3"
-);
-
-create table doris_audit_db__.doris_slow_log_tbl__
-(
-    query_id varchar(48) comment "Unique query id",
-    `time` datetime not null comment "Query start time",
-    client_ip varchar(32) comment "Client IP",
-    user varchar(64) comment "User name",
-    db varchar(96) comment "Database of this query",
-    state varchar(8) comment "Query result state. EOF, ERR, OK",
-    error_code int comment "Error code of failing query.",
-    error_message string comment "Error message of failing query.",
-    query_time bigint comment "Query execution time in millisecond",
-    scan_bytes bigint comment "Total scan bytes of this query",
-    scan_rows bigint comment "Total scan rows of this query",
-    return_rows bigint comment "Returned rows of this query",
-    stmt_id int comment "An incremental id of statement",
-    is_query tinyint comment "Is this statemt a query. 1 or 0",
-    frontend_ip varchar(32) comment "Frontend ip of executing this statement",
-    cpu_time_ms bigint comment "Total scan cpu time in millisecond of this 
query",
-    sql_hash varchar(48) comment "Hash value for this query",
-    sql_digest varchar(48) comment "Sql digest of a slow query",
-    peak_memory_bytes bigint comment "Peak memory bytes used on all backends 
of this query",
-    stmt string comment "The original statement, trimed if longer than 2G "
-) engine=OLAP
-duplicate key(query_id, `time`, client_ip)
-partition by range(`time`) ()
-distributed by hash(query_id) buckets 1
-properties(
-    "dynamic_partition.time_unit" = "DAY",
-    "dynamic_partition.start" = "-30",
-    "dynamic_partition.end" = "3",
-    "dynamic_partition.prefix" = "p",
-    "dynamic_partition.buckets" = "1",
-    "dynamic_partition.enable" = "true",
-    "replication_num" = "3"
-);
-```
-
->**Notice**
->
-> In the above table structure: stmt string, this can only be used in 0.15 and 
later versions, in previous versions, the field type used varchar
-
-### Deployment
-
-You can place the packaged auditloader.zip on an http server, or copy 
`auditloader.zip` to the same specified directory in all FEs.
-
-### Installation
-
-Install the audit loader plugin:
-
-```sql
-INSTALL PLUGIN FROM [source] [PROPERTIES ("key"="value", ...)]
-```
-
-Detailed command reference: 
[INSTALL-PLUGIN.md](../sql-manual/sql-statements/Database-Administration-Statements/INSTALL-PLUGIN)
-
-After successful installation, you can see the installed plug-ins through 
`SHOW PLUGINS`, and the status is `INSTALLED`.
-
-After completion, the plugin will continuously insert audit logs into this 
table at specified intervals.
+    After migration, you can remove the original plugin by using the 
`UNINSTALL PLUGIN AuditLoader;` command.
 
 ## FAQ
 
-1. There is no data in the audit log table, or no new data is imported after 
running for a period of time.
-
-     You can check by following these steps:
-
-     - Check whether the partition was created normally
-
-         The audit log table is a dynamic partition table, partitioned by day. 
By default, partitions for the next 3 days will be created and partitions for 
the past 30 days will be retained. Only after the partition is created 
correctly can the audit log be imported normally.
-
-         You can use `show dynamic partition tables from __internal_schema` to 
check the scheduling of dynamic partitions and troubleshoot according to the 
cause of the error. Possible reasons for the error include:
+1. No data in the audit log table, or no new data is being ingested after 
running for a period of time
 
-         - The number of BE nodes is less than the required number of 
replicas: the audit log table has 3 replicas by default, so at least 3 BE nodes 
are required. Or modify the number of replicas through the `alter table` 
statement, such as:
+    You can troubleshoot by following these steps:
+    
+    - Check if partitions are created correctly
 
-             `alter table __internal_schema.audit_log set 
("dynamic_partition.replication_num" = "2")`
+        The audit log table is a dynamic partition table partitioned by day. 
By default, it creates partitions for the next 3 days and retains partitions 
for the past 30 days. Audit logs can be imported normally only after the 
partitions are created correctly.
 
-         - No suitable storage medium: You can view the `storage_medium` 
attribute through `show create table __internal_schema.audit_log`. If BE does 
not have a corresponding storage medium, the partition creation may fail.
+        You can check the scheduling status of dynamic partitions by using 
`show dynamic partition tables from __internal_schema` and troubleshoot based 
on the reported error. Possible causes include:
 
-         - No suitable resource group: The audit log table defaults to the 
`default` resource group. You can use the `show backends` command to check 
whether the resource has sufficient node resources.
+        - The number of BE nodes is less than the required replica count: The 
audit log table defaults to 3 replicas, so at least 3 BE nodes are required. 
Alternatively, you can reduce the replica count using the `alter table` 
statement, for example:
+        
+            `alter table __internal_schema.audit_log set 
("dynamic_partition.replication_num" = "2")`
+        
+        - No suitable storage medium: You can check the `storage_medium` 
property by using `show create table __internal_schema.audit_log`. If there is 
no corresponding storage medium on the BE, partition creation may fail.
+        
+        - No suitable resource group: The audit log table defaults to the 
`default` resource group. You can check whether the resource group has enough 
node resources by using the `show backends` command.
 
-     - Search for the word `AuditLoad` in Master FE's `fe.log` to see if there 
are related error logs
+    - Search for `AuditLoad` in the `fe.log` on the Master FE to see if there 
are any related error logs
 
-         The audit log is imported into the table through the internal stream 
load operation. There may be problems with the import process. These problems 
will print error logs in `fe.log`.
+        The audit log is imported into the table through internal stream load 
operations. If there are issues with the import process, error logs will be 
printed in the `fe.log`.

