This is an automated email from the ASF dual-hosted git repository. jiafengzheng pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/doris-website.git
The following commit(s) were added to refs/heads/master by this push: new f1feb270964 Document correction, delete some old document content, add some explanatory information f1feb270964 is described below commit f1feb27096405850fb515f8543e972c3d2acccbc Author: jiafeng.zhang <zhang...@gmail.com> AuthorDate: Thu Aug 4 12:51:37 2022 +0800 Document correction, delete some old document content, add some explanatory information Document correction, delete some old document content, add some explanatory information --- docs/admin-manual/privilege-ldap/user-privilege.md | 2 +- docs/advanced/broker.md | 11 ++--------- docs/data-operate/export/export-manual.md | 2 +- .../import/import-scenes/external-storage-load.md | 4 ++-- docs/data-operate/import/import-way/binlog-load-manual.md | 4 ---- docs/data-operate/import/import-way/broker-load-manual.md | 2 ++ docs/data-operate/import/import-way/spark-load-manual.md | 4 ++++ docs/data-table/basic-usage.md | 14 +++++++++++++- docs/faq/install-faq.md | 2 +- docs/install/install-deploy.md | 2 +- .../Account-Management-Statements/CREATE-USER.md | 2 +- .../current/admin-manual/privilege-ldap/user-privilege.md | 2 +- .../current/advanced/broker.md | 8 ++------ .../current/data-operate/export/export-manual.md | 2 +- .../import/import-scenes/external-storage-load.md | 4 ++-- .../data-operate/import/import-way/binlog-load-manual.md | 4 ---- .../data-operate/import/import-way/broker-load-manual.md | 2 ++ .../data-operate/import/import-way/spark-load-manual.md | 4 ++++ .../current/data-table/basic-usage.md | 14 +++++++++++++- .../current/faq/install-faq.md | 2 +- .../current/install/install-deploy.md | 4 ++-- .../Account-Management-Statements/CREATE-USER.md | 2 +- 22 files changed, 57 insertions(+), 40 deletions(-) diff --git a/docs/admin-manual/privilege-ldap/user-privilege.md b/docs/admin-manual/privilege-ldap/user-privilege.md index 329b717ef5d..b5db18346c0 100644 --- a/docs/admin-manual/privilege-ldap/user-privilege.md +++ 
b/docs/admin-manual/privilege-ldap/user-privilege.md @@ -34,7 +34,7 @@ Doris's new privilege management system refers to Mysql's privilege management m In a permission system, a user is identified as a User Identity. User ID consists of two parts: username and userhost. Username is a user name, which is composed of English upper and lower case. Userhost represents the IP from which the user link comes. User_identity is presented as username@'userhost', representing the username from userhost. - Another expression of user_identity is username@['domain'], where domain is the domain name, which can be resolved into a set of IPS by DNS BNS (Baidu Name Service). The final expression is a set of username@'userhost', so we use username@'userhost'to represent it. + Another expression of user_identity is username@['domain'], where domain is the domain name, which can be resolved into a set of IPs by DNS. The final expression is a set of username@'userhost', so we use username@'userhost' to represent it. 2. Privilege diff --git a/docs/advanced/broker.md b/docs/advanced/broker.md index 2212839a3dd..7b463aaf020 100644 --- a/docs/advanced/broker.md +++ b/docs/advanced/broker.md @@ -63,14 +63,8 @@ Different types of brokers support different storage systems. * Support simple authentication access * Support kerberos authentication access * Support HDFS HA mode access - -2. Baidu HDFS / AFS (not supported by open source version) - - * Support UGI simple authentication access - -3. Baidu Object Storage BOS (not supported by open source version) - - * Support AK / SK authentication access +2. Object storage + * All object stores that support the S3 protocol ## Function provided by Broker @@ -200,4 +194,3 @@ Authentication information is usually provided as a Key-Value in the Property Ma ) ``` The configuration for accessing the HDFS cluster can be written to the hdfs-site.xml file.
When users use the Broker process to read data from the HDFS cluster, they only need to fill in the cluster file path and authentication information. - diff --git a/docs/data-operate/export/export-manual.md b/docs/data-operate/export/export-manual.md index dbe05e7e9f8..221f2e1eebf 100644 --- a/docs/data-operate/export/export-manual.md +++ b/docs/data-operate/export/export-manual.md @@ -191,7 +191,7 @@ Usually, a query plan for an Export job has only two parts `scan`- `export`, and * If the amount of table data is too large, it is recommended to export it by partition. * During the operation of the Export job, if FE restarts or cuts the master, the Export job will fail, requiring the user to resubmit. * If the Export job fails, the `__doris_export_tmp_xxx` temporary directory generated in the remote storage and the generated files will not be deleted, requiring the user to delete them manually. -* If the Export job runs successfully, the `__doris_export_tmp_xxx` directory generated in the remote storage may be retained or cleared according to the file system semantics of the remote storage. For example, in Baidu Object Storage (BOS), after removing the last file in a directory through rename operation, the directory will also be deleted. If the directory is not cleared, the user can clear it manually. +* If the Export job runs successfully, the `__doris_export_tmp_xxx` directory generated in the remote storage may be retained or cleared according to the file system semantics of the remote storage. For example, in object storage that supports the S3 protocol, after removing the last file in a directory through a rename operation, the directory will also be deleted. If the directory is not cleared, the user can clear it manually. * When the Export runs successfully or fails, the FE reboots or cuts, then some information of the jobs displayed by `SHOW EXPORT` will be lost and cannot be viewed. * Export jobs only export data from Base tables, not Rollup Index.
* Export jobs scan data and occupy IO resources, which may affect the query latency of the system. diff --git a/docs/data-operate/import/import-scenes/external-storage-load.md b/docs/data-operate/import/import-scenes/external-storage-load.md index 27eb0fb3cbe..f1e01469416 100644 --- a/docs/data-operate/import/import-scenes/external-storage-load.md +++ b/docs/data-operate/import/import-scenes/external-storage-load.md @@ -26,7 +26,7 @@ under the License. # External storage data import -The following mainly introduces how to import data stored in an external system. For example (HDFS, AWS S3, BOS of Baidu Cloud, OSS of Alibaba Cloud, COS of Tencent Cloud) +The following mainly introduces how to import data stored in external systems, for example HDFS and object stores that support the S3 protocol. ## HDFS LOAD ### Ready to work @@ -111,7 +111,7 @@ Hdfs load creates an import statement. The import method is basically the same a Starting from version 0.14, Doris supports the direct import of data from online storage systems that support the S3 protocol through the S3 protocol. -This document mainly introduces how to import data stored in AWS S3. It also supports the import of other object storage systems that support the S3 protocol, such as Baidu Cloud’s BOS, Alibaba Cloud’s OSS and Tencent Cloud’s COS, etc. +This document mainly introduces how to import data stored in AWS S3. Importing from other object storage systems that support the S3 protocol is also supported. ### Applicable scenarios * Source data in S3 protocol accessible storage systems, such as S3, BOS. diff --git a/docs/data-operate/import/import-way/binlog-load-manual.md b/docs/data-operate/import/import-way/binlog-load-manual.md index 5b65ef4c18f..4930907ee61 100644 --- a/docs/data-operate/import/import-way/binlog-load-manual.md +++ b/docs/data-operate/import/import-way/binlog-load-manual.md @@ -482,10 +482,6 @@ Users can control the status of jobs through `stop/pause/resume` commands.
You can use [STOP SYNC JOB](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STOP-SYNC-JOB.md) ; [PAUSE SYNC JOB](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/PAUSE-SYNC-JOB.md); And [RESUME SYNC JOB](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/RESUME-SYNC-JOB.md); commands to view help and examples. -## Case Combat - -[How to use Apache Doris Binlog Load and examples](https://doris.apache.org/zh-CN/article/articles/doris-binlog-load.md) - ## Related Parameters ### Canal configuration diff --git a/docs/data-operate/import/import-way/broker-load-manual.md b/docs/data-operate/import/import-way/broker-load-manual.md index cd2140bc3e5..946e70bf979 100644 --- a/docs/data-operate/import/import-way/broker-load-manual.md +++ b/docs/data-operate/import/import-way/broker-load-manual.md @@ -28,6 +28,8 @@ under the License. Broker load is an asynchronous import method, and the supported data sources depend on the data sources supported by the [Broker](../../../advanced/broker.md) process. +Because data in a Doris table is ordered, Broker load uses Doris cluster resources to sort the data during import. Compared with Spark load, migrating massive historical data this way places a relatively heavy load on Doris cluster resources, so this method is intended for users who do not have Spark computing resources. If Spark computing resources are available, it is recommended to use [Spark load](./SPARK-LOAD.md). + Users need to create [Broker load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md) import through MySQL protocol and import by viewing command to check the import result.
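The Broker load paragraph added above can be made concrete with a minimal statement sketch. This is illustrative only: the label, database, table, HDFS path, broker name, and credentials are placeholder values, not taken from the commit.

```sql
-- Hypothetical Broker load sketch: load one CSV file from HDFS.
-- All names, paths, and credentials below are placeholders.
LOAD LABEL example_db.label_20220804
(
    DATA INFILE("hdfs://hdfs_host:8020/user/doris/demo.csv")
    INTO TABLE example_tbl
    COLUMNS TERMINATED BY ","
)
WITH BROKER "broker_name"
(
    "username" = "hdfs_user",
    "password" = "hdfs_passwd"
)
PROPERTIES
(
    "timeout" = "3600"
);
```

Since the job is asynchronous, its progress would then be checked with `SHOW LOAD WHERE LABEL = "label_20220804";`.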
## Applicable scene diff --git a/docs/data-operate/import/import-way/spark-load-manual.md b/docs/data-operate/import/import-way/spark-load-manual.md index 76625addbde..d801d3af546 100644 --- a/docs/data-operate/import/import-way/spark-load-manual.md +++ b/docs/data-operate/import/import-way/spark-load-manual.md @@ -28,6 +28,10 @@ under the License. Spark load realizes the preprocessing of load data by spark, improves the performance of loading large amount of Doris data and saves the computing resources of Doris cluster. It is mainly used for the scene of initial migration and large amount of data imported into Doris. +Spark load uses the resources of the Spark cluster to sort the data to be imported, and the Doris BE writes the files directly. This can greatly reduce the resource usage of the Doris cluster and works very well for reducing the resource usage and load of the Doris cluster when migrating massive historical data. + +If users do not have Spark cluster resources but want to migrate historical data from external storage conveniently and quickly, they can use [Broker load](./BROKER-LOAD.md). Compared with Spark load, Broker load consumes more resources on the Doris cluster. + Spark load is an asynchronous load method. Users need to create spark type load job by MySQL protocol and view the load results by `show load`. ## Applicable scenarios diff --git a/docs/data-table/basic-usage.md b/docs/data-table/basic-usage.md index 1212b6213a1..e35ff93730c 100644 --- a/docs/data-table/basic-usage.md +++ b/docs/data-table/basic-usage.md @@ -33,7 +33,19 @@ Doris uses MySQL protocol to communicate. Users can connect to Doris cluster thr ### Root User Logon and Password Modification -Doris has built-in root and admin users, and the password is empty by default. After starting the Doris program, you can connect to the Doris cluster through root or admin users. +Doris has built-in root and admin users, and the password is empty by default.
+ +>Remarks: +> +>The default root and admin users provided by Doris are administrator users +> +>The root user has all the privileges of the cluster by default. Users who have both Grant_priv and Node_priv can grant this permission to other users and have node change permissions, including operations such as adding, deleting, and going offline of FE, BE, and BROKER nodes. +> +>The admin user has ADMIN_PRIV and GRANT_PRIV privileges +> +>For specific instructions on permissions, please refer to [Permission Management](/docs/admin-manual/privilege-ldap/user-privilege) + +After starting the Doris program, you can connect to the Doris cluster through root or admin users. Use the following command to log in to Doris: ```sql diff --git a/docs/faq/install-faq.md b/docs/faq/install-faq.md index 98019d7651c..dd3fb581ef6 100644 --- a/docs/faq/install-faq.md +++ b/docs/faq/install-faq.md @@ -57,7 +57,7 @@ A metadata log needs to be successfully written in most Follower nodes to be con The role of Observer is the same as the meaning of this word. It only acts as an observer to synchronize the metadata logs that have been successfully written, and provides metadata reading services. He will not be involved in the logic of the majority writing. -Typically, 1 Follower + 2 Observer or 3 Follower + N Observer can be deployed. The former is simple to operate and maintain, and there is almost no consistency agreement between followers to cause such complex error situations (most of Baidu's internal clusters use this method). The latter can ensure the high availability of metadata writing. If it is a high concurrent query scenario, Observer can be added appropriately. +Typically, 1 Follower + 2 Observer or 3 Follower + N Observer can be deployed. The former is simple to operate and maintain, and there is almost no consistency agreement between followers to cause such complex error situations (most companies use this method). The latter can ensure the high availability of metadata writing.
If it is a high concurrent query scenario, Observer can be added appropriately. ### Q4. A new disk is added to the node, why is the data not balanced to the new disk? diff --git a/docs/install/install-deploy.md b/docs/install/install-deploy.md index 45568374390..7eec424d112 100644 --- a/docs/install/install-deploy.md +++ b/docs/install/install-deploy.md @@ -246,7 +246,7 @@ See the section on `lower_case_table_names` variables in [Variables](../advanced #### (Optional) FS_Broker deployment -Broker is deployed as a plug-in, independent of Doris. If you need to import data from a third-party storage system, you need to deploy the corresponding Broker. By default, it provides fs_broker to read HDFS ,Baidu cloud BOS and Amazon S3. Fs_broker is stateless and it is recommended that each FE and BE node deploy a Broker. +Broker is deployed as a plug-in, independent of Doris. If you need to import data from a third-party storage system, you need to deploy the corresponding Broker. By default, it provides fs_broker to read HDFS and object storage (supporting the S3 protocol). Fs_broker is stateless and it is recommended that each FE and BE node deploy a Broker. * Copy the corresponding Broker directory in the output directory of the source fs_broker to all the nodes that need to be deployed. It is recommended to maintain the same level as the BE or FE directories. diff --git a/docs/sql-manual/sql-reference/Account-Management-Statements/CREATE-USER.md b/docs/sql-manual/sql-reference/Account-Management-Statements/CREATE-USER.md index acac27cf558..62db8a2f6be 100644 --- a/docs/sql-manual/sql-reference/Account-Management-Statements/CREATE-USER.md +++ b/docs/sql-manual/sql-reference/Account-Management-Statements/CREATE-USER.md @@ -43,7 +43,7 @@ CREATE USER user_identity [IDENTIFIED BY 'password'] [DEFAULT ROLE 'role_name'] In Doris, a user_identity uniquely identifies a user. user_identity consists of two parts, user_name and host, where username is the username.
host Identifies the host address where the client connects. The host part can use % for fuzzy matching. If no host is specified, it defaults to '%', which means the user can connect to Doris from any host. -The host part can also be specified as a domain, the syntax is: 'user_name'@['domain'], even if it is surrounded by square brackets, Doris will think this is a domain and try to resolve its ip address. Currently, only Baidu's internal BNS resolution is supported. +The host part can also be specified as a domain, the syntax is: 'user_name'@['domain'], i.e. it is surrounded by square brackets; Doris will treat this as a domain and try to resolve its IP address. If a role (ROLE) is specified, the newly created user will be automatically granted the permissions of the role. If not specified, the user has no permissions by default. The specified ROLE must already exist. diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/privilege-ldap/user-privilege.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/privilege-ldap/user-privilege.md index 976be837e12..c76f4384651 100644 --- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/privilege-ldap/user-privilege.md +++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/privilege-ldap/user-privilege.md @@ -34,7 +34,7 @@ Doris 新的权限管理系统参照了 Mysql 的权限管理机制,做到了 在权限系统中,一个用户被识别为一个 User Identity(用户标识)。用户标识由两部分组成:username 和 userhost。其中 username 为用户名,由英文大小写组成。userhost 表示该用户链接来自的 IP。user_identity 以 username@'userhost' 的方式呈现,表示来自 userhost 的 username。 - user_identity 的另一种表现方式为 username@['domain'],其中 domain 为域名,可以通过 DNS 或 BNS(百度名字服务)解析为一组 ip。最终表现为一组 username@'userhost',所以后面我们统一使用 username@'userhost' 来表示。 + user_identity 的另一种表现方式为 username@['domain'],其中 domain 为域名,可以通过 DNS 解析为一组 ip。最终表现为一组 username@'userhost',所以后面我们统一使用 username@'userhost' 来表示。 2.
权限 Privilege diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/advanced/broker.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/advanced/broker.md index b63432c5f4d..b7bd374efbc 100644 --- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/advanced/broker.md +++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/advanced/broker.md @@ -60,12 +60,8 @@ Broker 在 Doris 系统架构中的位置如下: - 支持简单认证访问 - 支持通过 kerberos 认证访问 - 支持 HDFS HA 模式访问 -2. 百度 HDFS/AFS(开源版本不支持) - - 支持通过 ugi 简单认证访问 -3. 百度对象存储 BOS(开源版本不支持) - - 支持通过 AK/SK 认证访问 - -## 需要 Broker 的操作 +2. 对象存储 + - 所有支持S3协议的对象存储 1. [Broker Load](../data-operate/import/import-way/broker-load-manual.md) 2. [数据导出(Export)](../data-operate/export/export-manual.md) diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/export/export-manual.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/export/export-manual.md index 94d7de39b8b..267061abd8e 100644 --- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/export/export-manual.md +++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/export/export-manual.md @@ -184,7 +184,7 @@ FinishTime: 2019-06-25 17:08:34 * 如果表数据量过大,建议按照分区导出。 * 在 Export 作业运行过程中,如果 FE 发生重启或切主,则 Export 作业会失败,需要用户重新提交。 * 如果 Export 作业运行失败,在远端存储中产生的 `__doris_export_tmp_xxx` 临时目录,以及已经生成的文件不会被删除,需要用户手动删除。 -* 如果 Export 作业运行成功,在远端存储中产生的 `__doris_export_tmp_xxx` 目录,根据远端存储的文件系统语义,可能会保留,也可能会被清除。比如在百度对象存储(BOS)中,通过 rename 操作将一个目录中的最后一个文件移走后,该目录也会被删除。如果该目录没有被清除,用户可以手动清除。 +* 如果 Export 作业运行成功,在远端存储中产生的 `__doris_export_tmp_xxx` 目录,根据远端存储的文件系统语义,可能会保留,也可能会被清除。比如对象存储(支持S3协议)中,通过 rename 操作将一个目录中的最后一个文件移走后,该目录也会被删除。如果该目录没有被清除,用户可以手动清除。 * 当 Export 运行完成后(成功或失败),FE 发生重启或切主,则 [SHOW EXPORT](../../sql-manual/sql-reference/Show-Statements/SHOW-EXPORT.md) 展示的作业的部分信息会丢失,无法查看。 * Export 作业只会导出 Base 表的数据,不会导出 Rollup Index 的数据。 * Export 作业会扫描数据,占用 IO 资源,可能会影响系统的查询延迟。 diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-scenes/external-storage-load.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-scenes/external-storage-load.md index 9887459ff9c..8496f1d56e8 100644 --- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-scenes/external-storage-load.md +++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-scenes/external-storage-load.md @@ -26,7 +26,7 @@ under the License. # 外部存储数据导入 -本文档主要介绍如何导入外部系统中存储的数据。例如(HDFS,AWS S3,百度云的BOS,阿里云的OSS和腾讯云的COS) +本文档主要介绍如何导入外部系统中存储的数据。例如(HDFS,所有支持S3协议的对象存储) ## HDFS LOAD @@ -116,7 +116,7 @@ Hdfs load 创建导入语句,导入方式和[Broker Load](../../../data-operat 从0.14 版本开始,Doris 支持通过S3协议直接从支持S3协议的在线存储系统导入数据。 -下面主要介绍如何导入 AWS S3 中存储的数据。也支持导入其他支持S3协议的对象存储系统导入,如果百度云的BOS,阿里云的OSS和腾讯云的COS等、 +下面主要介绍如何导入 AWS S3 中存储的数据。也支持从其他支持S3协议的对象存储系统导入。 ### 适用场景 diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/binlog-load-manual.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/binlog-load-manual.md index bd9ec5cc4e1..9d9b63a1a61 100644 --- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/binlog-load-manual.md +++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/binlog-load-manual.md @@ -464,10 +464,6 @@ binlog_desc 用户可以通过 STOP/PAUSE/RESUME 三个命令来控制作业的停止,暂停和恢复。可以通过 [STOP SYNC JOB](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/STOP-SYNC-JOB.md) ; [PAUSE SYNC JOB](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/PAUSE-SYNC-JOB.md); 以及 [RESUME SYNC JOB](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/RESUME-SYNC-JOB.md); -## 案例实战 - -[Apache Doris Binlog Load使用方法及示例](https://doris.apache.org/zh-CN/article/articles/doris-binlog-load.md) - ## 相关参数 ### CANAL配置 diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/broker-load-manual.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/broker-load-manual.md index feb30e39be6..3aade30ce11 100644 --- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/broker-load-manual.md +++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/broker-load-manual.md @@ -28,6 +28,8 @@ under the License. Broker load 是一个异步的导入方式,支持的数据源取决于 [Broker](../../../advanced/broker.md) 进程支持的数据源。 +因为 Doris 表里的数据是有序的,所以 Broker load 在导入数据时需要利用 Doris 集群资源对数据进行排序。相对于用 Spark load 完成海量历史数据迁移,这种方式对 Doris 集群资源占用比较大,适合在用户没有 Spark 计算资源的情况下使用;如果有 Spark 计算资源,建议使用 [Spark load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/SPARK-LOAD.md)。 + 用户需要通过 MySQL 协议 创建 [Broker load](../../../sql-manual/sql-reference/Data-Manipulation-Statements/Load/BROKER-LOAD.md) 导入,并通过查看导入命令检查导入结果。 ## 适用场景 diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/spark-load-manual.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/spark-load-manual.md index 1afc0970f8f..b6269ea176d 100644 --- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/spark-load-manual.md +++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/spark-load-manual.md @@ -28,6 +28,10 @@ under the License.
Spark load 通过外部的 Spark 资源实现对导入数据的预处理,提高 Doris 大数据量的导入性能并且节省 Doris 集群的计算资源。主要用于初次迁移,大数据量导入 Doris 的场景。 +Spark load 利用 Spark 集群的资源对要导入的数据进行排序,Doris BE 直接写文件,这样能大大降低 Doris 集群的资源使用,对于历史海量数据迁移降低 Doris 集群资源使用及负载有很好的效果。 + +如果用户在没有 Spark 集群这种资源的情况下,又想方便、快速地完成外部存储历史数据的迁移,可以使用 [Broker load](./BROKER-LOAD.md)。相对 Spark load 导入,Broker load 对 Doris 集群的资源占用会更高。 + Spark load 是一种异步导入方式,用户需要通过 MySQL 协议创建 Spark 类型导入任务,并通过 `SHOW LOAD` 查看导入结果。 ## 适用场景 diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-table/basic-usage.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-table/basic-usage.md index fe9ba6cd824..36a1632a3f3 100644 --- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-table/basic-usage.md +++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-table/basic-usage.md @@ -32,7 +32,19 @@ Doris 采用 MySQL 协议进行通信,用户可通过 MySQL client 或者 MySQ ### Root用户登录与密码修改 -Doris 内置 root 和 admin 用户,密码默认都为空。启动完 Doris 程序之后,可以通过 root 或 admin 用户连接到 Doris 集群。 使用下面命令即可登录 Doris,登录后进入到Doris对应的Mysql命令行操作界面: +Doris 内置 root 和 admin 用户,密码默认都为空。 + +>备注: +> +>Doris 提供的默认 root 和 admin 用户是管理员用户 +> +>root 用户默认拥有集群所有权限。同时拥有 Grant_priv 和 Node_priv 的用户,可以将该权限赋予其他用户,拥有节点变更权限,包括 FE、BE、BROKER 节点的添加、删除、下线等操作。 +> +>admin 用户拥有 ADMIN_PRIV 和 GRANT_PRIV 权限 +> +>关于权限这块的具体说明可以参照[权限管理](/docs/admin-manual/privilege-ldap/user-privilege) + +启动完 Doris 程序之后,可以通过 root 或 admin 用户连接到 Doris 集群。 使用下面命令即可登录 Doris,登录后进入到Doris对应的Mysql命令行操作界面: ```bash [root@doris ~]# mysql -h FE_HOST -P9030 -uroot diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/faq/install-faq.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/faq/install-faq.md index 13d37b342b7..bdace4f478b 100644 --- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/faq/install-faq.md +++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/faq/install-faq.md @@ -57,7 +57,7 @@ priorty_network 的值是 CIDR 格式表示的。分为两部分,第一部分 Observer 角色和这个单词的含义一样,仅仅作为观察者来同步已经成功写入的元数据日志,并且提供元数据读服务。他不会参与多数写的逻辑。 -通常情况下,可以部署 1 Follower + 2 Observer 或者 3
Follower + N Observer。前者运维简单,几乎不会出现 Follower 之间的一致性协议导致这种复杂错误情况(百度内部集群大多使用这种方式)。后者可以保证元数据写的高可用,如果是高并发查询场景,可以适当增加 Observer。 +通常情况下,可以部署 1 Follower + 2 Observer 或者 3 Follower + N Observer。前者运维简单,几乎不会出现 Follower 之间的一致性协议导致这种复杂错误情况(企业大多使用这种方式)。后者可以保证元数据写的高可用,如果是高并发查询场景,可以适当增加 Observer。 ### Q4. 节点新增加了新的磁盘,为什么数据没有均衡到新的磁盘上? diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/install/install-deploy.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/install/install-deploy.md index 93cb298687a..6c9f7d24d2e 100644 --- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/install/install-deploy.md +++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/install/install-deploy.md @@ -247,7 +247,7 @@ doris默认为表名大小写敏感,如有表名大小写不敏感的需求需 #### (可选)FS_Broker 部署 -Broker 以插件的形式,独立于 Doris 部署。如果需要从第三方存储系统导入数据,需要部署相应的 Broker,默认提供了读取 HDFS 、百度云 BOS 及 Amazon S3 的 fs_broker。fs_broker 是无状态的,建议每一个 FE 和 BE 节点都部署一个 Broker。 +Broker 以插件的形式,独立于 Doris 部署。如果需要从第三方存储系统导入数据,需要部署相应的 Broker,默认提供了读取 HDFS 、对象存储的 fs_broker。fs_broker 是无状态的,建议每一个 FE 和 BE 节点都部署一个 Broker。 * 拷贝源码 fs_broker 的 output 目录下的相应 Broker 目录到需要部署的所有节点上。建议和 BE 或者 FE 目录保持同级。 @@ -364,6 +364,6 @@ Broker 以插件的形式,独立于 Doris 部署。如果需要从第三方存 ```shell vim /etc/supervisord.conf - + minfds=65535 ; (min. 
avail startup file descriptors;default 1024) ``` diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-reference/Account-Management-Statements/CREATE-USER.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-reference/Account-Management-Statements/CREATE-USER.md index 983b566c6f4..fb771aa260f 100644 --- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-reference/Account-Management-Statements/CREATE-USER.md +++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-reference/Account-Management-Statements/CREATE-USER.md @@ -43,7 +43,7 @@ CREATE USER user_identity [IDENTIFIED BY 'password'] [DEFAULT ROLE 'role_name'] 在 Doris 中,一个 user_identity 唯一标识一个用户。user_identity 由两部分组成,user_name 和 host,其中 username 为用户名。host 标识用户端连接所在的主机地址。host 部分可以使用 % 进行模糊匹配。如果不指定 host,默认为 '%',即表示该用户可以从任意 host 连接到 Doris。 -host 部分也可指定为 domain,语法为:'user_name'@['domain'],即使用中括号包围,则 Doris 会认为这个是一个 domain,并尝试解析其 ip 地址。目前仅支持百度内部的 BNS 解析。 +host 部分也可指定为 domain,语法为:'user_name'@['domain'],即使用中括号包围,则 Doris 会认为这个是一个 domain,并尝试解析其 ip 地址。 如果指定了角色(ROLE),则会自动将该角色所拥有的权限赋予新创建的这个用户。如果不指定,则该用户默认没有任何权限。指定的 ROLE 必须已经存在。
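To make the two host forms discussed in the CREATE-USER changes above concrete, a hedged sketch follows; the user name, password, and domain are placeholder values, not taken from the commit.

```sql
-- Hypothetical examples; 'jack', the password, and the domain are placeholders.
-- Plain host form: '%' means the user may connect from any host.
CREATE USER 'jack'@'%' IDENTIFIED BY '123456';
-- Domain form: the square brackets make Doris treat the host part as a
-- domain name and resolve it to a set of IPs.
CREATE USER 'jack'@['example.domain.com'] IDENTIFIED BY '123456';
```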