kaijchen commented on code in PR #21966:
URL: https://github.com/apache/doris/pull/21966#discussion_r1270261862


##########
docs/en/docs/lakehouse/multi-catalog/paimon.md:
##########
@@ -30,31 +30,76 @@ under the License.
 <version since="dev">
 </version>
 
-## Usage
+## Usage Notes
 
-1. Currently, Doris only supports simple field types.
-2. Doris only supports Hive Metastore Catalogs currently. The usage is basically the same as that of Hive Catalogs. More types of Catalogs will be supported in future versions.
+1. When data is stored on HDFS, you need to put core-site.xml, hdfs-site.xml and hive-site.xml into the conf directory of both FE and BE. Doris first reads the hadoop configuration files in the conf directory, and then the configuration files under the path of the environment variable `HADOOP_CONF_DIR`.
+2. The currently supported Paimon version is 0.4.0.
 
 ## Create Catalog
 
-### Create Catalog Based on Paimon API
+Paimon Catalog currently supports two types of Metastore for creating catalogs:
+* filesystem (default): stores both metadata and data in the file system.
+* hive metastore: additionally stores metadata in Hive Metastore. Users can access these tables directly from Hive.
 
-Use the Paimon API to access metadata. Currently, only the Hive service is supported as Paimon's Catalog.
+### Creating a Catalog Based on FileSystem
 
-- Hive Metastore
+#### HDFS
+```sql
+ CREATE CATALOG `paimon_hdfs` PROPERTIES (
+    "type" = "paimon",
+    "warehouse" = "hdfs://HDFS8000871/user/paimon",
+    "dfs.nameservices"="HDFS8000871",
+    "dfs.ha.namenodes.HDFS8000871"="nn1,nn2",
+    "dfs.namenode.rpc-address.HDFS8000871.nn1"="172.21.0.1:4007",
+    "dfs.namenode.rpc-address.HDFS8000871.nn2"="172.21.0.2:4007",
+    "dfs.client.failover.proxy.provider.HDFS8000871"="org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
+    "hadoop.username"="hadoop"
+);
+
+```
+
+#### S3
 
 ```sql
-CREATE CATALOG `paimon` PROPERTIES (
+  CREATE CATALOG `paimon_s3` PROPERTIES (

Review Comment:
   ```suggestion
   CREATE CATALOG `paimon_s3` PROPERTIES (
   ```
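
   Once a catalog such as the `paimon_hdfs` example above is created, it can be browsed and queried like any other Doris multi-catalog. A minimal usage sketch (the database and table names below are hypothetical):

   ```sql
   -- switch into the Paimon catalog created above
   SWITCH paimon_hdfs;
   -- list its databases, then query a table (db1/example_tbl are hypothetical names)
   SHOW DATABASES;
   USE db1;
   SELECT * FROM example_tbl LIMIT 10;
   ```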



##########
docs/en/docs/lakehouse/multi-catalog/paimon.md:
##########
@@ -30,31 +30,76 @@ under the License.
 <version since="dev">
 </version>
 
-## Usage
+## Usage Notes
 
-1. Currently, Doris only supports simple field types.
-2. Doris only supports Hive Metastore Catalogs currently. The usage is basically the same as that of Hive Catalogs. More types of Catalogs will be supported in future versions.
+1. When data is stored on HDFS, you need to put core-site.xml, hdfs-site.xml and hive-site.xml into the conf directory of both FE and BE. Doris first reads the hadoop configuration files in the conf directory, and then the configuration files under the path of the environment variable `HADOOP_CONF_DIR`.
+2. The currently supported Paimon version is 0.4.0.
 
 ## Create Catalog
 
-### Create Catalog Based on Paimon API
+Paimon Catalog currently supports two types of Metastore for creating catalogs:
+* filesystem (default): stores both metadata and data in the file system.
+* hive metastore: additionally stores metadata in Hive Metastore. Users can access these tables directly from Hive.
 
-Use the Paimon API to access metadata. Currently, only the Hive service is supported as Paimon's Catalog.
+### Creating a Catalog Based on FileSystem
 
-- Hive Metastore
+#### HDFS
+```sql
+ CREATE CATALOG `paimon_hdfs` PROPERTIES (
+    "type" = "paimon",
+    "warehouse" = "hdfs://HDFS8000871/user/paimon",
+    "dfs.nameservices"="HDFS8000871",
+    "dfs.ha.namenodes.HDFS8000871"="nn1,nn2",
+    "dfs.namenode.rpc-address.HDFS8000871.nn1"="172.21.0.1:4007",
+    "dfs.namenode.rpc-address.HDFS8000871.nn2"="172.21.0.2:4007",
+    "dfs.client.failover.proxy.provider.HDFS8000871"="org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
+    "hadoop.username"="hadoop"
+);
+
+```
+
+#### S3
 
 ```sql
-CREATE CATALOG `paimon` PROPERTIES (
+  CREATE CATALOG `paimon_s3` PROPERTIES (
     "type" = "paimon",
-    "hive.metastore.uris" = "thrift://172.16.65.15:7004",
-    "dfs.ha.namenodes.HDFS1006531" = "nn2,nn1",
-    "dfs.namenode.rpc-address.HDFS1006531.nn2" = "172.16.65.115:4007",
-    "dfs.namenode.rpc-address.HDFS1006531.nn1" = "172.16.65.15:4007",
-    "dfs.nameservices" = "HDFS1006531",
-    "hadoop.username" = "hadoop",
-    "warehouse" = "hdfs://HDFS1006531/data/paimon",
-    "dfs.client.failover.proxy.provider.HDFS1006531" = "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
+    "warehouse" = "s3://paimon-1308700295.cos.ap-beijing.myqcloud.com/paimoncos",
+    "s3.endpoint"="cos.ap-beijing.myqcloud.com",
+    "s3.access_key"="ak",
+    "s3.secret_key"="sk"
 );
+
+```
+
+#### OSS
+
+```sql
+   CREATE CATALOG `paimon_oss` PROPERTIES (

Review Comment:
   ```suggestion
   CREATE CATALOG `paimon_oss` PROPERTIES (
   ```
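
   The added doc text describes a second metastore type (hive metastore), but the quoted hunks only show filesystem-based examples. A hedged sketch of the Hive Metastore variant, assuming the `paimon.catalog.type` and `hive.metastore.uris` properties apply here, with all addresses and paths hypothetical:

   ```sql
   CREATE CATALOG `paimon_hms` PROPERTIES (
       "type" = "paimon",
       -- assumed property selecting the Hive Metastore flavor of the Paimon catalog
       "paimon.catalog.type" = "hms",
       "warehouse" = "hdfs://HDFS8000871/user/paimon",
       -- hypothetical Hive Metastore thrift address
       "hive.metastore.uris" = "thrift://172.21.0.44:7004",
       "hadoop.username" = "hadoop"
   );
   ```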



##########
docs/zh-CN/docs/lakehouse/multi-catalog/paimon.md:
##########
@@ -30,31 +30,76 @@ under the License.
 <version since="dev">
 </version>
 
-## Usage Restrictions
+## Usage Notes
 
-1. Currently, only simple field types are supported.
-2. Currently, only Hive Metastore catalogs are supported, so the usage is basically the same as Hive Catalog. Other catalog types will be supported in later versions.
+1. When data is stored on HDFS, you need to put core-site.xml, hdfs-site.xml and hive-site.xml into the conf directory of both FE and BE. Doris first reads the hadoop configuration files in the conf directory, and then the configuration files under the path of the environment variable `HADOOP_CONF_DIR`.
+2. The currently supported Paimon version is 0.4.0.
 
 ## Create Catalog
 
-### Create Catalog Based on Paimon API
+Paimon Catalog currently supports two types of Metastore for creating catalogs:
+* filesystem (default): stores both metadata and data in the file system.
+* hive metastore: additionally stores metadata in Hive Metastore. Users can access these tables directly from Hive.
 
-Use the Paimon API to access metadata. Currently, only the Hive service is supported as Paimon's Catalog.
+### Create Catalog Based on FileSystem
 
-- Hive Metastore as the metadata service
+#### HDFS
+```sql
+ CREATE CATALOG `paimon_hdfs` PROPERTIES (
+    "type" = "paimon",
+    "warehouse" = "hdfs://HDFS8000871/user/paimon",
+    "dfs.nameservices"="HDFS8000871",
+    "dfs.ha.namenodes.HDFS8000871"="nn1,nn2",
+    "dfs.namenode.rpc-address.HDFS8000871.nn1"="172.21.0.1:4007",
+    "dfs.namenode.rpc-address.HDFS8000871.nn2"="172.21.0.2:4007",
+    "dfs.client.failover.proxy.provider.HDFS8000871"="org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
+    "hadoop.username"="hadoop"
+);
+
+```
+
+#### S3
 
 ```sql
-CREATE CATALOG `paimon` PROPERTIES (
+  CREATE CATALOG `paimon_s3` PROPERTIES (

Review Comment:
   ```suggestion
   CREATE CATALOG `paimon_s3` PROPERTIES (
   ```



##########
docs/zh-CN/docs/lakehouse/multi-catalog/paimon.md:
##########
@@ -30,31 +30,76 @@ under the License.
 <version since="dev">
 </version>
 
-## Usage Restrictions
+## Usage Notes
 
-1. Currently, only simple field types are supported.
-2. Currently, only Hive Metastore catalogs are supported, so the usage is basically the same as Hive Catalog. Other catalog types will be supported in later versions.
+1. When data is stored on HDFS, you need to put core-site.xml, hdfs-site.xml and hive-site.xml into the conf directory of both FE and BE. Doris first reads the hadoop configuration files in the conf directory, and then the configuration files under the path of the environment variable `HADOOP_CONF_DIR`.
+2. The currently supported Paimon version is 0.4.0.
 
 ## Create Catalog
 
-### Create Catalog Based on Paimon API
+Paimon Catalog currently supports two types of Metastore for creating catalogs:
+* filesystem (default): stores both metadata and data in the file system.
+* hive metastore: additionally stores metadata in Hive Metastore. Users can access these tables directly from Hive.
 
-Use the Paimon API to access metadata. Currently, only the Hive service is supported as Paimon's Catalog.
+### Create Catalog Based on FileSystem
 
-- Hive Metastore as the metadata service
+#### HDFS
+```sql
+ CREATE CATALOG `paimon_hdfs` PROPERTIES (

Review Comment:
   ```suggestion
   CREATE CATALOG `paimon_hdfs` PROPERTIES (
   ```



##########
docs/zh-CN/docs/lakehouse/multi-catalog/paimon.md:
##########
@@ -30,31 +30,76 @@ under the License.
 <version since="dev">
 </version>
 
-## Usage Restrictions
+## Usage Notes
 
-1. Currently, only simple field types are supported.
-2. Currently, only Hive Metastore catalogs are supported, so the usage is basically the same as Hive Catalog. Other catalog types will be supported in later versions.
+1. When data is stored on HDFS, you need to put core-site.xml, hdfs-site.xml and hive-site.xml into the conf directory of both FE and BE. Doris first reads the hadoop configuration files in the conf directory, and then the configuration files under the path of the environment variable `HADOOP_CONF_DIR`.
+2. The currently supported Paimon version is 0.4.0.
 
 ## Create Catalog
 
-### Create Catalog Based on Paimon API
+Paimon Catalog currently supports two types of Metastore for creating catalogs:
+* filesystem (default): stores both metadata and data in the file system.
+* hive metastore: additionally stores metadata in Hive Metastore. Users can access these tables directly from Hive.
 
-Use the Paimon API to access metadata. Currently, only the Hive service is supported as Paimon's Catalog.
+### Create Catalog Based on FileSystem
 
-- Hive Metastore as the metadata service
+#### HDFS
+```sql
+ CREATE CATALOG `paimon_hdfs` PROPERTIES (
+    "type" = "paimon",
+    "warehouse" = "hdfs://HDFS8000871/user/paimon",
+    "dfs.nameservices"="HDFS8000871",
+    "dfs.ha.namenodes.HDFS8000871"="nn1,nn2",
+    "dfs.namenode.rpc-address.HDFS8000871.nn1"="172.21.0.1:4007",
+    "dfs.namenode.rpc-address.HDFS8000871.nn2"="172.21.0.2:4007",
+    "dfs.client.failover.proxy.provider.HDFS8000871"="org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
+    "hadoop.username"="hadoop"
+);
+
+```
+
+#### S3
 
 ```sql
-CREATE CATALOG `paimon` PROPERTIES (
+  CREATE CATALOG `paimon_s3` PROPERTIES (
     "type" = "paimon",
-    "hive.metastore.uris" = "thrift://172.16.65.15:7004",
-    "dfs.ha.namenodes.HDFS1006531" = "nn2,nn1",
-    "dfs.namenode.rpc-address.HDFS1006531.nn2" = "172.16.65.115:4007",
-    "dfs.namenode.rpc-address.HDFS1006531.nn1" = "172.16.65.15:4007",
-    "dfs.nameservices" = "HDFS1006531",
-    "hadoop.username" = "hadoop",
-    "warehouse" = "hdfs://HDFS1006531/data/paimon",
-    "dfs.client.failover.proxy.provider.HDFS1006531" = "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
+    "warehouse" = "s3://paimon-1308700295.cos.ap-beijing.myqcloud.com/paimoncos",
+    "s3.endpoint"="cos.ap-beijing.myqcloud.com",
+    "s3.access_key"="ak",
+    "s3.secret_key"="sk"
 );
+
+```
+
+#### OSS
+
+```sql
+   CREATE CATALOG `paimon_oss` PROPERTIES (

Review Comment:
   ```suggestion
   CREATE CATALOG `paimon_oss` PROPERTIES (
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

