morningman commented on code in PR #16160:
URL: https://github.com/apache/doris/pull/16160#discussion_r1089690156


##########
docs/en/docs/lakehouse/multi-catalog/hive.md:
##########
@@ -26,4 +26,149 @@ under the License.
 
 # Hive
 
-TODO: translate
+Once Doris is connected to Hive Metastore, or to a metadata service compatible with Hive Metastore, it can access databases and tables in Hive and run queries against them.
+
+Besides Hive, many other systems, such as Iceberg and Hudi, use Hive Metastore to keep their metadata. Thus, Doris can also access these systems via the Hive Catalog.
+
+## Usage
+
+When connecting to Hive, Doris:
+
+1. Supports Hive version 1/2/3;
+2. Supports both Managed Table and External Table;
+3. Can identify metadata of Hive, Iceberg, and Hudi stored in Hive Metastore;
+4. Supports Hive tables with data stored in JuiceFS, which can be used the same way as normal Hive tables (put `juicefs-hadoop-x.x.x.jar` in `fe/lib/` and `apache_hdfs_broker/lib/`).
+
+## Create Catalog
+
+```sql
+CREATE CATALOG hive PROPERTIES (
+    'type'='hms',
+    'hive.metastore.uris' = 'thrift://172.21.0.1:7004',
+    'hadoop.username' = 'hive',
+    'dfs.nameservices'='your-nameservice',
+    'dfs.ha.namenodes.your-nameservice'='nn1,nn2',
+    'dfs.namenode.rpc-address.your-nameservice.nn1'='172.21.0.2:4007',
+    'dfs.namenode.rpc-address.your-nameservice.nn2'='172.21.0.3:4007',
+    'dfs.client.failover.proxy.provider.your-nameservice'='org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider'
+);
+```
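+Once the Catalog is created, you can switch to it and query Hive tables much like local ones. A minimal sketch, assuming the Catalog above exists (the database and table names here are placeholders):
+
+```sql
+-- switch the current catalog to the one created above
+SWITCH hive;
+-- browse databases and tables synced from Hive Metastore
+SHOW DATABASES;
+-- query a Hive table directly (placeholder names)
+SELECT * FROM hive_db.hive_table LIMIT 10;
+```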
+
+In addition to `type` and `hive.metastore.uris`, which are required, you can specify other parameters for the connection.
+       
+For example, to specify HDFS HA:
+
+```sql
+CREATE CATALOG hive PROPERTIES (
+    'type'='hms',
+    'hive.metastore.uris' = 'thrift://172.21.0.1:7004',
+    'hadoop.username' = 'hive',
+    'dfs.nameservices'='your-nameservice',
+    'dfs.ha.namenodes.your-nameservice'='nn1,nn2',
+    'dfs.namenode.rpc-address.your-nameservice.nn1'='172.21.0.2:4007',
+    'dfs.namenode.rpc-address.your-nameservice.nn2'='172.21.0.3:4007',
+    'dfs.client.failover.proxy.provider.your-nameservice'='org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider'
+);
+```
+
+To specify HDFS HA and Kerberos authentication information:
+
+```sql
+CREATE CATALOG hive PROPERTIES (
+    'type'='hms',
+    'hive.metastore.uris' = 'thrift://172.21.0.1:7004',
+    'hive.metastore.sasl.enabled' = 'true',
+    'dfs.nameservices'='your-nameservice',
+    'dfs.namenode.rpc-address.your-nameservice.nn1'='172.21.0.2:4007',
+    'dfs.namenode.rpc-address.your-nameservice.nn2'='172.21.0.3:4007',
+    'dfs.client.failover.proxy.provider.your-nameservice'='org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider',
+    'hadoop.security.authentication' = 'kerberos',
+    'hadoop.kerberos.keytab' = '/your-keytab-filepath/your.keytab',
+    'hadoop.kerberos.principal' = 'your-princi...@your.com',
+    'yarn.resourcemanager.address' = 'your-rm-address:your-rm-port',
+    'yarn.resourcemanager.principal' = 'your-rm-principal/_h...@your.com'
+);
+```
+
+To provide Hadoop KMS encrypted transmission information:
+
+```sql
+CREATE CATALOG hive PROPERTIES (
+    'type'='hms',
+    'hive.metastore.uris' = 'thrift://172.21.0.1:7004',
+    'dfs.encryption.key.provider.uri' = 'kms://http@kms_host:kms_port/kms'
+);
+```
+
+Or to connect to Hive data stored in JuiceFS:
+
+```sql
+CREATE CATALOG hive PROPERTIES (
+    'type'='hms',
+    'hive.metastore.uris' = 'thrift://172.21.0.1:7004',
+    'hadoop.username' = 'root',
+    'fs.jfs.impl' = 'io.juicefs.JuiceFileSystem',
+    'fs.AbstractFileSystem.jfs.impl' = 'io.juicefs.JuiceFS',
+    'juicefs.meta' = 'xxx'
+);
+```
+
+In Doris 1.2.1 and newer, you can create a Resource that contains all these parameters, and reuse the Resource when creating new Catalogs. Here is an example:
+
+```sql
+# 1. Create Resource
+CREATE RESOURCE hms_resource PROPERTIES (
+    'type'='hms',
+    'hive.metastore.uris' = 'thrift://172.21.0.1:7004',
+    'hadoop.username' = 'hive',
+    'dfs.nameservices'='your-nameservice',
+    'dfs.ha.namenodes.your-nameservice'='nn1,nn2',
+    'dfs.namenode.rpc-address.your-nameservice.nn1'='172.21.0.2:4007',
+    'dfs.namenode.rpc-address.your-nameservice.nn2'='172.21.0.3:4007',
+    'dfs.client.failover.proxy.provider.your-nameservice'='org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider'
+);
+       
+# 2. Create Catalog using an existing Resource. The key/value pairs specified here will overwrite the corresponding information in the Resource.
+CREATE CATALOG hive WITH RESOURCE hms_resource PROPERTIES(
+       'key' = 'value'

Review Comment:
   Please use 4 spaces to replace all tabs




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

