morningman commented on code in PR #16160:
URL: https://github.com/apache/doris/pull/16160#discussion_r1089690101


##########
docs/en/docs/lakehouse/multi-catalog/multi-catalog.md:
##########
@@ -261,7 +261,7 @@ See [Hudi](./hudi)
 
 ### Connect to Elasticsearch
 
-See [Elasticsearch](./elasticsearch)
+See [Elasticsearch](./es)

Review Comment:
   Modify the Chinese version too



##########
docs/en/docs/lakehouse/multi-catalog/dlf.md:
##########
@@ -25,7 +25,79 @@ under the License.
 -->
 
 
-# Aliyun DLF
+# Alibaba Cloud DLF
+
+Data Lake Formation (DLF) is the unified metadata management service of Alibaba Cloud. It is compatible with the Hive Metastore protocol.
+
+> [What is DLF](https://www.alibabacloud.com/product/datalake-formation)
+
+Doris can access DLF the same way as it accesses Hive Metastore.
+
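+For illustration, once the `hive-site.xml` setup described below is in place, a DLF-backed catalog can be created just like a regular Hive Metastore catalog. This is a minimal sketch: the catalog name is arbitrary, and the `hive.metastore.uris` value is a placeholder, since the actual DLF endpoint and credentials are taken from `hive-site.xml`.
+
+```sql
+-- Minimal sketch: `dlf` is an arbitrary catalog name; the thrift
+-- address is a placeholder, as the DLF endpoint and credentials
+-- are read from the hive-site.xml configured below.
+CREATE CATALOG dlf PROPERTIES (
+    'type'='hms',
+    'hive.metastore.uris' = 'thrift://127.0.0.1:9083'
+);
+```
+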
+## Connect to DLF
+
+1. Create `hive-site.xml`
+
+   Create the `hive-site.xml` file and put it in the `fe/conf` directory.
+
+   ```xml
+   <?xml version="1.0"?>
+   <configuration>
+       <!--Set to use dlf client-->
+       <property>
+           <name>hive.metastore.type</name>
+           <value>dlf</value>
+       </property>
+       <property>
+           <name>dlf.catalog.endpoint</name>
+           <value>dlf-vpc.cn-beijing.aliyuncs.com</value>
+       </property>
+       <property>
+           <name>dlf.catalog.region</name>
+           <value>cn-beijing</value>
+       </property>
+       <property>
+           <name>dlf.catalog.proxyMode</name>
+           <value>DLF_ONLY</value>
+       </property>
+       <property>
+           <name>dlf.catalog.uid</name>
+           <value>20000000000000000</value>
+       </property>
+       <property>
+           <name>dlf.catalog.accessKeyId</name>
+           <value>XXXXXXXXXXXXXXX</value>
+       </property>
+       <property>
+           <name>dlf.catalog.accessKeySecret</name>
+           <value>XXXXXXXXXXXXXXXXX</value>
+       </property>
+   </configuration>
+   ```
+
+   * `dlf.catalog.endpoint`: DLF Endpoint. See [Regions and Endpoints of DLF](https://www.alibabacloud.com/help/en/data-lake-formation/latest/regions-and-endpoints).
+   * `dlf.catalog.region`: DLF Region. See [Regions and Endpoints of DLF](https://www.alibabacloud.com/help/en/data-lake-formation/latest/regions-and-endpoints).
+   * `dlf.catalog.uid`: Alibaba Cloud account ID. You can find the "Account ID" in the upper right corner of the Alibaba Cloud console.
+   * `dlf.catalog.accessKeyId`: AccessKey, which you can create and manage on the [Alibaba Cloud console](https://ram.console.aliyun.com/manage/ak).
+   * `dlf.catalog.accessKeySecret`: SecretKey, which you can create and manage on the [Alibaba Cloud console](https://ram.console.aliyun.com/manage/ak).
+
+   Other configuration items are fixed and require no  modifications.

Review Comment:
   ```suggestion
      Other configuration items are fixed and require no modifications.
   ```



##########
docs/en/docs/lakehouse/multi-catalog/hive.md:
##########
@@ -26,4 +26,149 @@ under the License.
 
 # Hive
 
-TODO: translate
+Once Doris is connected to Hive Metastore, or to a metadata service compatible with Hive Metastore, it can access the databases and tables in Hive and conduct queries.
+
+Besides Hive, many other systems, such as Iceberg and Hudi, use Hive Metastore to keep their metadata. Thus, Doris can also access these systems via Hive Catalog.
+
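+For example, once a Hive Catalog is created (see below), an Iceberg or Hudi table registered in the same Hive Metastore can be queried like any Hive table. A short sketch, with hypothetical catalog, database, and table names:
+
+```sql
+-- Hypothetical names: `hive` is a catalog created as in the section
+-- below, and `iceberg_db.events` is an Iceberg table whose metadata
+-- lives in the same Hive Metastore.
+SELECT * FROM hive.iceberg_db.events LIMIT 10;
+```
+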
+## Usage
+
+When connecting to Hive, Doris:
+
+1. Supports Hive version 1/2/3;
+2. Supports both Managed Table and External Table;
+3. Can identify metadata of Hive, Iceberg, and Hudi stored in Hive Metastore;
+4. Supports Hive tables with data stored in JuiceFS, which can be used the same way as normal Hive tables (put `juicefs-hadoop-x.x.x.jar` in `fe/lib/` and `apache_hdfs_broker/lib/`).
+
+## Create Catalog
+
+```sql
+CREATE CATALOG hive PROPERTIES (
+    'type'='hms',
+    'hive.metastore.uris' = 'thrift://172.21.0.1:7004'
+);
+```
+
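+Once the catalog is created, you can switch to it and query its tables. A short usage sketch (`SWITCH` and `catalog.database.table` qualification are standard Doris multi-catalog syntax; the database and table names below are hypothetical):
+
+```sql
+-- Switch the current session to the new catalog and browse it.
+SWITCH hive;
+SHOW DATABASES;
+-- Or address a table by its fully qualified name (hypothetical names).
+SELECT * FROM hive.tpch.orders LIMIT 10;
+```
+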
+In addition to `type` and `hive.metastore.uris`, which are required, you can specify other parameters regarding the connection.
+       
+For example, to specify HDFS HA:
+
+```sql
+CREATE CATALOG hive PROPERTIES (
+    'type'='hms',
+    'hive.metastore.uris' = 'thrift://172.21.0.1:7004',
+    'hadoop.username' = 'hive',
+    'dfs.nameservices'='your-nameservice',
+    'dfs.ha.namenodes.your-nameservice'='nn1,nn2',
+    'dfs.namenode.rpc-address.your-nameservice.nn1'='172.21.0.2:4007',
+    'dfs.namenode.rpc-address.your-nameservice.nn2'='172.21.0.3:4007',
+    'dfs.client.failover.proxy.provider.your-nameservice'='org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider'
+);
+```
+
+To specify HDFS HA and Kerberos authentication information:
+
+```sql
+CREATE CATALOG hive PROPERTIES (
+    'type'='hms',
+    'hive.metastore.uris' = 'thrift://172.21.0.1:7004',
+    'hive.metastore.sasl.enabled' = 'true',
+    'dfs.nameservices'='your-nameservice',
+    'dfs.namenode.rpc-address.your-nameservice.nn1'='172.21.0.2:4007',
+    'dfs.namenode.rpc-address.your-nameservice.nn2'='172.21.0.3:4007',
+    'dfs.client.failover.proxy.provider.your-nameservice'='org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider',
+    'hadoop.security.authentication' = 'kerberos',
+    'hadoop.kerberos.keytab' = '/your-keytab-filepath/your.keytab',   
+    'hadoop.kerberos.principal' = 'your-princi...@your.com',
+    'yarn.resourcemanager.address' = 'your-rm-address:your-rm-port',    
+    'yarn.resourcemanager.principal' = 'your-rm-principal/_h...@your.com'
+);
+```
+
+To provide Hadoop KMS information for encrypted transmission:
+
+```sql
+CREATE CATALOG hive PROPERTIES (
+    'type'='hms',
+    'hive.metastore.uris' = 'thrift://172.21.0.1:7004',
+    'dfs.encryption.key.provider.uri' = 'kms://http@kms_host:kms_port/kms'
+);
+```
+
+Or to connect to Hive data stored in JuiceFS:
+
+```sql
+CREATE CATALOG hive PROPERTIES (
+    'type'='hms',
+    'hive.metastore.uris' = 'thrift://172.21.0.1:7004',
+    'hadoop.username' = 'root',
+    'fs.jfs.impl' = 'io.juicefs.JuiceFileSystem',
+    'fs.AbstractFileSystem.jfs.impl' = 'io.juicefs.JuiceFS',
+    'juicefs.meta' = 'xxx'
+);
+```
+
+In Doris 1.2.1 and newer, you can create a Resource that contains all these parameters, and reuse the Resource when creating new Catalogs. Here is an example:
+
+```sql
+# 1. Create Resource
+CREATE RESOURCE hms_resource PROPERTIES (
+    'type'='hms',
+    'hive.metastore.uris' = 'thrift://172.21.0.1:7004',
+    'hadoop.username' = 'hive',
+    'dfs.nameservices'='your-nameservice',
+    'dfs.ha.namenodes.your-nameservice'='nn1,nn2',
+    'dfs.namenode.rpc-address.your-nameservice.nn1'='172.21.0.2:4007',
+    'dfs.namenode.rpc-address.your-nameservice.nn2'='172.21.0.3:4007',
+    'dfs.client.failover.proxy.provider.your-nameservice'='org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider'
+);
+       
+# 2. Create a Catalog using an existing Resource. The key-value pairs specified here will overwrite the corresponding information in the Resource.
+CREATE CATALOG hive WITH RESOURCE hms_resource PROPERTIES(
+       'key' = 'value'

Review Comment:
   ```suggestion
       'key' = 'value'
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

