This is an automated email from the ASF dual-hosted git repository.
morningman pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git
The following commit(s) were added to refs/heads/master by this push:
new fc0d22a511e [opt](dlf) opt oss and dlf doc (#2805)
fc0d22a511e is described below
commit fc0d22a511ebbb2903d4d0dd4f16566820994ee2
Author: Mingyu Chen (Rayner) <[email protected]>
AuthorDate: Wed Aug 27 17:33:11 2025 -0700
[opt](dlf) opt oss and dlf doc (#2805)
## Versions
- [x] dev
- [x] 3.0
- [x] 2.1
- [ ] 2.0
## Languages
- [x] Chinese
- [x] English
## Docs Checklist
- [ ] Checked by AI
- [ ] Test Cases Built
---
docs/lakehouse/metastores/aliyun-dlf.md | 4 +-
docs/lakehouse/storages/aliyun-oss.md | 59 +++++++++++++++++++++
.../current/lakehouse/metastores/aliyun-dlf.md | 4 +-
.../current/lakehouse/storages/aliyun-oss.md | 60 ++++++++++++++++++++++
.../version-2.1/lakehouse/metastores/aliyun-dlf.md | 4 +-
.../version-2.1/lakehouse/storages/aliyun-oss.md | 60 ++++++++++++++++++++++
.../version-3.0/lakehouse/metastores/aliyun-dlf.md | 4 +-
.../version-3.0/lakehouse/storages/aliyun-oss.md | 60 ++++++++++++++++++++++
.../version-2.1/lakehouse/metastores/aliyun-dlf.md | 4 +-
.../version-2.1/lakehouse/storages/aliyun-oss.md | 59 +++++++++++++++++++++
.../version-3.0/lakehouse/metastores/aliyun-dlf.md | 4 +-
.../version-3.0/lakehouse/storages/aliyun-oss.md | 59 +++++++++++++++++++++
12 files changed, 369 insertions(+), 12 deletions(-)
diff --git a/docs/lakehouse/metastores/aliyun-dlf.md b/docs/lakehouse/metastores/aliyun-dlf.md
index 1daf3204bf4..4b23e883125 100644
--- a/docs/lakehouse/metastores/aliyun-dlf.md
+++ b/docs/lakehouse/metastores/aliyun-dlf.md
@@ -24,7 +24,7 @@ This document describes how to use the `CREATE CATALOG` statement to connect and
| `dlf.catalog_id` | `dlf.catalog.id` | Catalog ID. Used to specify the metadata catalog. If not set, the default catalog is used. | None | No |
| `warehouse` | - | Storage path of the Warehouse, only required for Paimon Catalog | None | No |
-### DLF Rest Catalog
+### DLF 2.5+ (Rest Catalog)
> Supported since version 3.1.0
@@ -72,7 +72,7 @@ CREATE CATALOG paimon_dlf PROPERTIES (
);
```
-### DLF Rest Catalog
+### DLF 2.5+ (Rest Catalog)
```sql
CREATE CATALOG paimon_dlf_test PROPERTIES (
diff --git a/docs/lakehouse/storages/aliyun-oss.md b/docs/lakehouse/storages/aliyun-oss.md
index dc58f542ede..e7c677a03e8 100644
--- a/docs/lakehouse/storages/aliyun-oss.md
+++ b/docs/lakehouse/storages/aliyun-oss.md
@@ -54,3 +54,62 @@ For versions before 3.1:
* For versions before 3.1, please use the legacy name `s3.` as the prefix.
* Configuring `oss.region` improves access accuracy and performance and is recommended.
* Connection pool parameters can be adjusted according to concurrency requirements to avoid connection blocking.
+
+## OSS-HDFS
+
+The OSS-HDFS service (JindoFS service) is Alibaba Cloud's cloud-native data lake storage capability. Built on unified metadata management, it is compatible with the HDFS file system interface and serves data lake computing scenarios in big data and AI.
+
+Accessing data stored on OSS-HDFS differs slightly from accessing the OSS service directly. Refer to this document for details.
+
+### Parameter Overview
+
+| Property Name | Legacy Name | Description | Default Value | Required |
+| --- | --- | --- | --- | --- |
+| oss.hdfs.endpoint | s3.endpoint | Alibaba Cloud OSS-HDFS service endpoint, e.g., `cn-hangzhou.oss-dls.aliyuncs.com`. | None | Yes |
+| oss.hdfs.access_key | s3.access_key | OSS Access Key for authentication | None | Yes |
+| oss.hdfs.secret_key | s3.secret_key | OSS Secret Key, used together with the Access Key | None | Yes |
+| oss.hdfs.region | s3.region | Region ID where the OSS bucket is located, e.g., `cn-beijing`. | None | Yes |
+| oss.hdfs.fs.defaultFS | | Supported since version 3.1. Specifies the file system access path for OSS, e.g., `oss://my-bucket/`. | None | No |
+| oss.hdfs.hadoop.config.resources | | Supported since version 3.1. Specifies the path containing OSS file system configuration files; a relative path is required. The default directory is `/plugins/hadoop_conf/` under the FE/BE deployment directory (it can be changed via `hadoop_config_dir` in fe.conf/be.conf). All FE and BE nodes must configure the same relative path. Example: `hadoop/conf/core-site.xml,hadoop/conf/hdfs-site.xml`. | None | No |
+
+> For versions before 3.1, please use the legacy names.
+
+### Endpoint Configuration
+
+`oss.hdfs.endpoint`: Specifies the OSS-HDFS service endpoint.
+
+The endpoint is the entry address for accessing Alibaba Cloud OSS, formatted as `<region>.oss-dls.aliyuncs.com`, e.g., `cn-hangzhou.oss-dls.aliyuncs.com`.
+
+Strict format validation is performed to ensure the endpoint conforms to the Alibaba Cloud OSS endpoint format.
+
+For backward compatibility, the endpoint may include an `https://` or `http://` prefix; the protocol part is automatically parsed and ignored during format validation.
+
+When legacy names are used, the system determines whether the target is an OSS-HDFS service by checking whether the `endpoint` contains `oss-dls`.
+
+### Configuration Files
+
+> Supported since version 3.1
+
+OSS-HDFS supports specifying the directory of HDFS-related configuration files via the `oss.hdfs.hadoop.config.resources` parameter.
+
+The configuration file directory must contain the `hdfs-site.xml` and `core-site.xml` files. The default directory is `/plugins/hadoop_conf/` under the FE/BE deployment directory. All FE and BE nodes must configure the same relative path.
+
+If the configuration files contain parameters described above in this document, the parameters explicitly configured by the user take precedence. Multiple configuration files can be specified, separated by commas, e.g., `hadoop/conf/core-site.xml,hadoop/conf/hdfs-site.xml`.
+
+### Example Configuration
+
+```properties
+"oss.hdfs.access_key" = "your-access-key",
+"oss.hdfs.secret_key" = "your-secret-key",
+"oss.hdfs.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
+"oss.hdfs.region" = "cn-hangzhou"
+```
+
+For versions before 3.1:
+
+```properties
+"s3.access_key" = "your-access-key",
+"s3.secret_key" = "your-secret-key",
+"s3.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
+"s3.region" = "cn-hangzhou"
+```
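For context, the properties above are supplied in a `CREATE CATALOG` statement. A minimal sketch for a Paimon catalog on OSS-HDFS (catalog name, bucket, and warehouse path are hypothetical placeholders):

```sql
CREATE CATALOG paimon_oss_hdfs PROPERTIES (
    "type" = "paimon",
    -- hypothetical bucket/warehouse path
    "warehouse" = "oss://my-bucket/warehouse",
    "oss.hdfs.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
    "oss.hdfs.region" = "cn-hangzhou",
    "oss.hdfs.access_key" = "your-access-key",
    "oss.hdfs.secret_key" = "your-secret-key"
);
```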
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/metastores/aliyun-dlf.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/metastores/aliyun-dlf.md
index 93f0f31984e..50a197ae403 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/metastores/aliyun-dlf.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/metastores/aliyun-dlf.md
@@ -24,7 +24,7 @@
| `dlf.catalog_id` | `dlf.catalog.id` | Catalog ID。用于指定元数据目录,如果不设置则使用默认目录。 | 无 | 否 |
| `warehouse` | - | Warehouse 的存储路径,仅在 Paimon Catalog 中需要填写 | 无 | 否 |
-### DLF Rest Catalog
+### DLF 2.5+ (Rest Catalog)
> 自 3.1.0 版本支持
@@ -72,7 +72,7 @@ CREATE CATALOG paimon_dlf PROPERTIES (
);
```
-### DLF Rest Catalog
+### DLF 2.5+ (Rest Catalog)
```sql
CREATE CATALOG paimon_dlf_test PROPERTIES (
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/storages/aliyun-oss.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/storages/aliyun-oss.md
index 11c5dbb5da4..05c5f0e8aa7 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/storages/aliyun-oss.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/storages/aliyun-oss.md
@@ -54,3 +54,63 @@
* 3.1 之前的版本,请使用曾用名 `s3.` 作为前缀。
* 配置 `oss.region` 能提升访问的准确性和性能,建议设置。
* 连接池参数可根据并发需求调整,避免连接阻塞。
+
+## OSS-HDFS
+
+OSS-HDFS 服务(JindoFS 服务)是阿里云云原生数据湖存储功能。基于统一的元数据管理能力,兼容 HDFS 文件系统接口,满足大数据和 AI 等领域的数据湖计算场景。
+
+访问 OSS-HDFS 上存储的数据,和直接访问 OSS 服务稍有区别,详见本文档。
+
+### 参数总览
+
+| 属性名称 | 曾用名 | 描述 | 默认值 | 是否必须 |
+| --- | --- | --- | --- | --- |
+| oss.hdfs.endpoint | s3.endpoint | 阿里云 OSS-HDFS 服务的 Endpoint,例如 `cn-hangzhou.oss-dls.aliyuncs.com`。 | 无 | 是 |
+| oss.hdfs.access_key | s3.access_key | OSS Access Key,用于身份验证 | 无 | 是 |
+| oss.hdfs.secret_key | s3.secret_key | OSS Secret Key,与 Access Key 配合使用 | 无 | 是 |
+| oss.hdfs.region | s3.region | OSS bucket 所在的地域 ID,例如 `cn-beijing`。 | 无 | 是 |
+| oss.hdfs.fs.defaultFS | | 3.1 版本支持。指定 OSS 的文件系统访问路径,例如 `oss://my-bucket/`。 | 无 | 否 |
+| oss.hdfs.hadoop.config.resources | | 3.1 版本支持。指定包含 OSS 文件系统配置的路径,需使用相对路径。默认目录为 FE/BE 部署目录下的 `/plugins/hadoop_conf/`(可修改 fe.conf/be.conf 中的 hadoop_config_dir 更改默认路径)。所有 FE 和 BE 节点需配置相同的相对路径。示例:`hadoop/conf/core-site.xml,hadoop/conf/hdfs-site.xml`。 | 无 | 否 |
+
+> 3.1 版本之前,请使用曾用名。
+
+### Endpoint 配置
+
+`oss.hdfs.endpoint`:用于指定 OSS-HDFS 服务的 Endpoint。
+
+Endpoint 是访问阿里云 OSS 的入口地址,格式为 `<region>.oss-dls.aliyuncs.com`,例如 `cn-hangzhou.oss-dls.aliyuncs.com`。
+
+我们会对格式进行强校验,确保 Endpoint 符合阿里云 OSS Endpoint 格式。
+
+为保证向后兼容,Endpoint 配置项允许包含 `https://` 或 `http://` 前缀,系统在格式校验时会自动解析并忽略协议部分。
+
+如使用曾用名,则系统会根据 `endpoint` 中是否包含 `oss-dls` 判断是否是 OSS-HDFS 服务。
+
+### 配置文件
+
+> 3.1 版本支持
+
+OSS-HDFS 支持通过 `oss.hdfs.hadoop.config.resources` 参数来指定 HDFS 相关配置文件目录。
+
+配置文件目录需包含 `hdfs-site.xml` 和 `core-site.xml` 文件,默认目录为 FE/BE 部署目录下的 `/plugins/hadoop_conf/`。所有 FE 和 BE 节点需配置相同的相对路径。
+
+如果配置文件包含本文档上述参数,则优先使用用户显式配置的参数。配置文件可以指定多个文件,多个文件以逗号分隔,如 `hadoop/conf/core-site.xml,hadoop/conf/hdfs-site.xml`。
+
+### 示例配置
+
+```properties
+"oss.hdfs.access_key" = "your-access-key",
+"oss.hdfs.secret_key" = "your-secret-key",
+"oss.hdfs.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
+"oss.hdfs.region" = "cn-hangzhou"
+```
+
+3.1 之前的版本:
+
+```properties
+"s3.access_key" = "your-access-key",
+"s3.secret_key" = "your-secret-key",
+"s3.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
+"s3.region" = "cn-hangzhou"
+```
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/metastores/aliyun-dlf.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/metastores/aliyun-dlf.md
index 93f0f31984e..50a197ae403 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/metastores/aliyun-dlf.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/metastores/aliyun-dlf.md
@@ -24,7 +24,7 @@
| `dlf.catalog_id` | `dlf.catalog.id` | Catalog ID。用于指定元数据目录,如果不设置则使用默认目录。 | 无 | 否 |
| `warehouse` | - | Warehouse 的存储路径,仅在 Paimon Catalog 中需要填写 | 无 | 否 |
-### DLF Rest Catalog
+### DLF 2.5+ (Rest Catalog)
> 自 3.1.0 版本支持
@@ -72,7 +72,7 @@ CREATE CATALOG paimon_dlf PROPERTIES (
);
```
-### DLF Rest Catalog
+### DLF 2.5+ (Rest Catalog)
```sql
CREATE CATALOG paimon_dlf_test PROPERTIES (
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/storages/aliyun-oss.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/storages/aliyun-oss.md
index 11c5dbb5da4..05c5f0e8aa7 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/storages/aliyun-oss.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/storages/aliyun-oss.md
@@ -54,3 +54,63 @@
* 3.1 之前的版本,请使用曾用名 `s3.` 作为前缀。
* 配置 `oss.region` 能提升访问的准确性和性能,建议设置。
* 连接池参数可根据并发需求调整,避免连接阻塞。
+
+## OSS-HDFS
+
+OSS-HDFS 服务(JindoFS 服务)是阿里云云原生数据湖存储功能。基于统一的元数据管理能力,兼容 HDFS 文件系统接口,满足大数据和 AI 等领域的数据湖计算场景。
+
+访问 OSS-HDFS 上存储的数据,和直接访问 OSS 服务稍有区别,详见本文档。
+
+### 参数总览
+
+| 属性名称 | 曾用名 | 描述 | 默认值 | 是否必须 |
+| --- | --- | --- | --- | --- |
+| oss.hdfs.endpoint | s3.endpoint | 阿里云 OSS-HDFS 服务的 Endpoint,例如 `cn-hangzhou.oss-dls.aliyuncs.com`。 | 无 | 是 |
+| oss.hdfs.access_key | s3.access_key | OSS Access Key,用于身份验证 | 无 | 是 |
+| oss.hdfs.secret_key | s3.secret_key | OSS Secret Key,与 Access Key 配合使用 | 无 | 是 |
+| oss.hdfs.region | s3.region | OSS bucket 所在的地域 ID,例如 `cn-beijing`。 | 无 | 是 |
+| oss.hdfs.fs.defaultFS | | 3.1 版本支持。指定 OSS 的文件系统访问路径,例如 `oss://my-bucket/`。 | 无 | 否 |
+| oss.hdfs.hadoop.config.resources | | 3.1 版本支持。指定包含 OSS 文件系统配置的路径,需使用相对路径。默认目录为 FE/BE 部署目录下的 `/plugins/hadoop_conf/`(可修改 fe.conf/be.conf 中的 hadoop_config_dir 更改默认路径)。所有 FE 和 BE 节点需配置相同的相对路径。示例:`hadoop/conf/core-site.xml,hadoop/conf/hdfs-site.xml`。 | 无 | 否 |
+
+> 3.1 版本之前,请使用曾用名。
+
+### Endpoint 配置
+
+`oss.hdfs.endpoint`:用于指定 OSS-HDFS 服务的 Endpoint。
+
+Endpoint 是访问阿里云 OSS 的入口地址,格式为 `<region>.oss-dls.aliyuncs.com`,例如 `cn-hangzhou.oss-dls.aliyuncs.com`。
+
+我们会对格式进行强校验,确保 Endpoint 符合阿里云 OSS Endpoint 格式。
+
+为保证向后兼容,Endpoint 配置项允许包含 `https://` 或 `http://` 前缀,系统在格式校验时会自动解析并忽略协议部分。
+
+如使用曾用名,则系统会根据 `endpoint` 中是否包含 `oss-dls` 判断是否是 OSS-HDFS 服务。
+
+### 配置文件
+
+> 3.1 版本支持
+
+OSS-HDFS 支持通过 `oss.hdfs.hadoop.config.resources` 参数来指定 HDFS 相关配置文件目录。
+
+配置文件目录需包含 `hdfs-site.xml` 和 `core-site.xml` 文件,默认目录为 FE/BE 部署目录下的 `/plugins/hadoop_conf/`。所有 FE 和 BE 节点需配置相同的相对路径。
+
+如果配置文件包含本文档上述参数,则优先使用用户显式配置的参数。配置文件可以指定多个文件,多个文件以逗号分隔,如 `hadoop/conf/core-site.xml,hadoop/conf/hdfs-site.xml`。
+
+### 示例配置
+
+```properties
+"oss.hdfs.access_key" = "your-access-key",
+"oss.hdfs.secret_key" = "your-secret-key",
+"oss.hdfs.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
+"oss.hdfs.region" = "cn-hangzhou"
+```
+
+3.1 之前的版本:
+
+```properties
+"s3.access_key" = "your-access-key",
+"s3.secret_key" = "your-secret-key",
+"s3.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
+"s3.region" = "cn-hangzhou"
+```
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/lakehouse/metastores/aliyun-dlf.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/lakehouse/metastores/aliyun-dlf.md
index 93f0f31984e..50a197ae403 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/lakehouse/metastores/aliyun-dlf.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/lakehouse/metastores/aliyun-dlf.md
@@ -24,7 +24,7 @@
| `dlf.catalog_id` | `dlf.catalog.id` | Catalog ID。用于指定元数据目录,如果不设置则使用默认目录。 | 无 | 否 |
| `warehouse` | - | Warehouse 的存储路径,仅在 Paimon Catalog 中需要填写 | 无 | 否 |
-### DLF Rest Catalog
+### DLF 2.5+ (Rest Catalog)
> 自 3.1.0 版本支持
@@ -72,7 +72,7 @@ CREATE CATALOG paimon_dlf PROPERTIES (
);
```
-### DLF Rest Catalog
+### DLF 2.5+ (Rest Catalog)
```sql
CREATE CATALOG paimon_dlf_test PROPERTIES (
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/lakehouse/storages/aliyun-oss.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/lakehouse/storages/aliyun-oss.md
index 11c5dbb5da4..05c5f0e8aa7 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/lakehouse/storages/aliyun-oss.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/lakehouse/storages/aliyun-oss.md
@@ -54,3 +54,63 @@
* 3.1 之前的版本,请使用曾用名 `s3.` 作为前缀。
* 配置 `oss.region` 能提升访问的准确性和性能,建议设置。
* 连接池参数可根据并发需求调整,避免连接阻塞。
+
+## OSS-HDFS
+
+OSS-HDFS 服务(JindoFS 服务)是阿里云云原生数据湖存储功能。基于统一的元数据管理能力,兼容 HDFS 文件系统接口,满足大数据和 AI 等领域的数据湖计算场景。
+
+访问 OSS-HDFS 上存储的数据,和直接访问 OSS 服务稍有区别,详见本文档。
+
+### 参数总览
+
+| 属性名称 | 曾用名 | 描述 | 默认值 | 是否必须 |
+| --- | --- | --- | --- | --- |
+| oss.hdfs.endpoint | s3.endpoint | 阿里云 OSS-HDFS 服务的 Endpoint,例如 `cn-hangzhou.oss-dls.aliyuncs.com`。 | 无 | 是 |
+| oss.hdfs.access_key | s3.access_key | OSS Access Key,用于身份验证 | 无 | 是 |
+| oss.hdfs.secret_key | s3.secret_key | OSS Secret Key,与 Access Key 配合使用 | 无 | 是 |
+| oss.hdfs.region | s3.region | OSS bucket 所在的地域 ID,例如 `cn-beijing`。 | 无 | 是 |
+| oss.hdfs.fs.defaultFS | | 3.1 版本支持。指定 OSS 的文件系统访问路径,例如 `oss://my-bucket/`。 | 无 | 否 |
+| oss.hdfs.hadoop.config.resources | | 3.1 版本支持。指定包含 OSS 文件系统配置的路径,需使用相对路径。默认目录为 FE/BE 部署目录下的 `/plugins/hadoop_conf/`(可修改 fe.conf/be.conf 中的 hadoop_config_dir 更改默认路径)。所有 FE 和 BE 节点需配置相同的相对路径。示例:`hadoop/conf/core-site.xml,hadoop/conf/hdfs-site.xml`。 | 无 | 否 |
+
+> 3.1 版本之前,请使用曾用名。
+
+### Endpoint 配置
+
+`oss.hdfs.endpoint`:用于指定 OSS-HDFS 服务的 Endpoint。
+
+Endpoint 是访问阿里云 OSS 的入口地址,格式为 `<region>.oss-dls.aliyuncs.com`,例如 `cn-hangzhou.oss-dls.aliyuncs.com`。
+
+我们会对格式进行强校验,确保 Endpoint 符合阿里云 OSS Endpoint 格式。
+
+为保证向后兼容,Endpoint 配置项允许包含 `https://` 或 `http://` 前缀,系统在格式校验时会自动解析并忽略协议部分。
+
+如使用曾用名,则系统会根据 `endpoint` 中是否包含 `oss-dls` 判断是否是 OSS-HDFS 服务。
+
+### 配置文件
+
+> 3.1 版本支持
+
+OSS-HDFS 支持通过 `oss.hdfs.hadoop.config.resources` 参数来指定 HDFS 相关配置文件目录。
+
+配置文件目录需包含 `hdfs-site.xml` 和 `core-site.xml` 文件,默认目录为 FE/BE 部署目录下的 `/plugins/hadoop_conf/`。所有 FE 和 BE 节点需配置相同的相对路径。
+
+如果配置文件包含本文档上述参数,则优先使用用户显式配置的参数。配置文件可以指定多个文件,多个文件以逗号分隔,如 `hadoop/conf/core-site.xml,hadoop/conf/hdfs-site.xml`。
+
+### 示例配置
+
+```properties
+"oss.hdfs.access_key" = "your-access-key",
+"oss.hdfs.secret_key" = "your-secret-key",
+"oss.hdfs.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
+"oss.hdfs.region" = "cn-hangzhou"
+```
+
+3.1 之前的版本:
+
+```properties
+"s3.access_key" = "your-access-key",
+"s3.secret_key" = "your-secret-key",
+"s3.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
+"s3.region" = "cn-hangzhou"
+```
\ No newline at end of file
diff --git a/versioned_docs/version-2.1/lakehouse/metastores/aliyun-dlf.md b/versioned_docs/version-2.1/lakehouse/metastores/aliyun-dlf.md
index 1daf3204bf4..4b23e883125 100644
--- a/versioned_docs/version-2.1/lakehouse/metastores/aliyun-dlf.md
+++ b/versioned_docs/version-2.1/lakehouse/metastores/aliyun-dlf.md
@@ -24,7 +24,7 @@ This document describes how to use the `CREATE CATALOG` statement to connect and
| `dlf.catalog_id` | `dlf.catalog.id` | Catalog ID. Used to specify the metadata catalog. If not set, the default catalog is used. | None | No |
| `warehouse` | - | Storage path of the Warehouse, only required for Paimon Catalog | None | No |
-### DLF Rest Catalog
+### DLF 2.5+ (Rest Catalog)
> Supported since version 3.1.0
@@ -72,7 +72,7 @@ CREATE CATALOG paimon_dlf PROPERTIES (
);
```
-### DLF Rest Catalog
+### DLF 2.5+ (Rest Catalog)
```sql
CREATE CATALOG paimon_dlf_test PROPERTIES (
diff --git a/versioned_docs/version-2.1/lakehouse/storages/aliyun-oss.md b/versioned_docs/version-2.1/lakehouse/storages/aliyun-oss.md
index dc58f542ede..e7c677a03e8 100644
--- a/versioned_docs/version-2.1/lakehouse/storages/aliyun-oss.md
+++ b/versioned_docs/version-2.1/lakehouse/storages/aliyun-oss.md
@@ -54,3 +54,62 @@ For versions before 3.1:
* For versions before 3.1, please use the legacy name `s3.` as the prefix.
* Configuring `oss.region` improves access accuracy and performance and is recommended.
* Connection pool parameters can be adjusted according to concurrency requirements to avoid connection blocking.
+
+## OSS-HDFS
+
+The OSS-HDFS service (JindoFS service) is Alibaba Cloud's cloud-native data lake storage capability. Built on unified metadata management, it is compatible with the HDFS file system interface and serves data lake computing scenarios in big data and AI.
+
+Accessing data stored on OSS-HDFS differs slightly from accessing the OSS service directly. Refer to this document for details.
+
+### Parameter Overview
+
+| Property Name | Legacy Name | Description | Default Value | Required |
+| --- | --- | --- | --- | --- |
+| oss.hdfs.endpoint | s3.endpoint | Alibaba Cloud OSS-HDFS service endpoint, e.g., `cn-hangzhou.oss-dls.aliyuncs.com`. | None | Yes |
+| oss.hdfs.access_key | s3.access_key | OSS Access Key for authentication | None | Yes |
+| oss.hdfs.secret_key | s3.secret_key | OSS Secret Key, used together with the Access Key | None | Yes |
+| oss.hdfs.region | s3.region | Region ID where the OSS bucket is located, e.g., `cn-beijing`. | None | Yes |
+| oss.hdfs.fs.defaultFS | | Supported since version 3.1. Specifies the file system access path for OSS, e.g., `oss://my-bucket/`. | None | No |
+| oss.hdfs.hadoop.config.resources | | Supported since version 3.1. Specifies the path containing OSS file system configuration files; a relative path is required. The default directory is `/plugins/hadoop_conf/` under the FE/BE deployment directory (it can be changed via `hadoop_config_dir` in fe.conf/be.conf). All FE and BE nodes must configure the same relative path. Example: `hadoop/conf/core-site.xml,hadoop/conf/hdfs-site.xml`. | None | No |
+
+> For versions before 3.1, please use the legacy names.
+
+### Endpoint Configuration
+
+`oss.hdfs.endpoint`: Specifies the OSS-HDFS service endpoint.
+
+The endpoint is the entry address for accessing Alibaba Cloud OSS, formatted as `<region>.oss-dls.aliyuncs.com`, e.g., `cn-hangzhou.oss-dls.aliyuncs.com`.
+
+Strict format validation is performed to ensure the endpoint conforms to the Alibaba Cloud OSS endpoint format.
+
+For backward compatibility, the endpoint may include an `https://` or `http://` prefix; the protocol part is automatically parsed and ignored during format validation.
+
+When legacy names are used, the system determines whether the target is an OSS-HDFS service by checking whether the `endpoint` contains `oss-dls`.
+
+### Configuration Files
+
+> Supported since version 3.1
+
+OSS-HDFS supports specifying the directory of HDFS-related configuration files via the `oss.hdfs.hadoop.config.resources` parameter.
+
+The configuration file directory must contain the `hdfs-site.xml` and `core-site.xml` files. The default directory is `/plugins/hadoop_conf/` under the FE/BE deployment directory. All FE and BE nodes must configure the same relative path.
+
+If the configuration files contain parameters described above in this document, the parameters explicitly configured by the user take precedence. Multiple configuration files can be specified, separated by commas, e.g., `hadoop/conf/core-site.xml,hadoop/conf/hdfs-site.xml`.
+
+### Example Configuration
+
+```properties
+"oss.hdfs.access_key" = "your-access-key",
+"oss.hdfs.secret_key" = "your-secret-key",
+"oss.hdfs.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
+"oss.hdfs.region" = "cn-hangzhou"
+```
+
+For versions before 3.1:
+
+```properties
+"s3.access_key" = "your-access-key",
+"s3.secret_key" = "your-secret-key",
+"s3.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
+"s3.region" = "cn-hangzhou"
+```
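For versions before 3.1, the same catalog would use the legacy `s3.` prefix; because the endpoint contains `oss-dls`, the system treats it as an OSS-HDFS service. A hypothetical sketch (catalog name and paths are placeholders):

```sql
CREATE CATALOG paimon_oss_hdfs PROPERTIES (
    "type" = "paimon",
    -- hypothetical bucket/warehouse path
    "warehouse" = "oss://my-bucket/warehouse",
    -- endpoint containing "oss-dls" marks this as OSS-HDFS pre-3.1
    "s3.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
    "s3.region" = "cn-hangzhou",
    "s3.access_key" = "your-access-key",
    "s3.secret_key" = "your-secret-key"
);
```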
diff --git a/versioned_docs/version-3.0/lakehouse/metastores/aliyun-dlf.md b/versioned_docs/version-3.0/lakehouse/metastores/aliyun-dlf.md
index 1daf3204bf4..4b23e883125 100644
--- a/versioned_docs/version-3.0/lakehouse/metastores/aliyun-dlf.md
+++ b/versioned_docs/version-3.0/lakehouse/metastores/aliyun-dlf.md
@@ -24,7 +24,7 @@ This document describes how to use the `CREATE CATALOG` statement to connect and
| `dlf.catalog_id` | `dlf.catalog.id` | Catalog ID. Used to specify the metadata catalog. If not set, the default catalog is used. | None | No |
| `warehouse` | - | Storage path of the Warehouse, only required for Paimon Catalog | None | No |
-### DLF Rest Catalog
+### DLF 2.5+ (Rest Catalog)
> Supported since version 3.1.0
@@ -72,7 +72,7 @@ CREATE CATALOG paimon_dlf PROPERTIES (
);
```
-### DLF Rest Catalog
+### DLF 2.5+ (Rest Catalog)
```sql
CREATE CATALOG paimon_dlf_test PROPERTIES (
diff --git a/versioned_docs/version-3.0/lakehouse/storages/aliyun-oss.md b/versioned_docs/version-3.0/lakehouse/storages/aliyun-oss.md
index dc58f542ede..e7c677a03e8 100644
--- a/versioned_docs/version-3.0/lakehouse/storages/aliyun-oss.md
+++ b/versioned_docs/version-3.0/lakehouse/storages/aliyun-oss.md
@@ -54,3 +54,62 @@ For versions before 3.1:
* For versions before 3.1, please use the legacy name `s3.` as the prefix.
* Configuring `oss.region` improves access accuracy and performance and is recommended.
* Connection pool parameters can be adjusted according to concurrency requirements to avoid connection blocking.
+
+## OSS-HDFS
+
+The OSS-HDFS service (JindoFS service) is Alibaba Cloud's cloud-native data lake storage capability. Built on unified metadata management, it is compatible with the HDFS file system interface and serves data lake computing scenarios in big data and AI.
+
+Accessing data stored on OSS-HDFS differs slightly from accessing the OSS service directly. Refer to this document for details.
+
+### Parameter Overview
+
+| Property Name | Legacy Name | Description | Default Value | Required |
+| --- | --- | --- | --- | --- |
+| oss.hdfs.endpoint | s3.endpoint | Alibaba Cloud OSS-HDFS service endpoint, e.g., `cn-hangzhou.oss-dls.aliyuncs.com`. | None | Yes |
+| oss.hdfs.access_key | s3.access_key | OSS Access Key for authentication | None | Yes |
+| oss.hdfs.secret_key | s3.secret_key | OSS Secret Key, used together with the Access Key | None | Yes |
+| oss.hdfs.region | s3.region | Region ID where the OSS bucket is located, e.g., `cn-beijing`. | None | Yes |
+| oss.hdfs.fs.defaultFS | | Supported since version 3.1. Specifies the file system access path for OSS, e.g., `oss://my-bucket/`. | None | No |
+| oss.hdfs.hadoop.config.resources | | Supported since version 3.1. Specifies the path containing OSS file system configuration files; a relative path is required. The default directory is `/plugins/hadoop_conf/` under the FE/BE deployment directory (it can be changed via `hadoop_config_dir` in fe.conf/be.conf). All FE and BE nodes must configure the same relative path. Example: `hadoop/conf/core-site.xml,hadoop/conf/hdfs-site.xml`. | None | No |
+
+> For versions before 3.1, please use the legacy names.
+
+### Endpoint Configuration
+
+`oss.hdfs.endpoint`: Specifies the OSS-HDFS service endpoint.
+
+The endpoint is the entry address for accessing Alibaba Cloud OSS, formatted as `<region>.oss-dls.aliyuncs.com`, e.g., `cn-hangzhou.oss-dls.aliyuncs.com`.
+
+Strict format validation is performed to ensure the endpoint conforms to the Alibaba Cloud OSS endpoint format.
+
+For backward compatibility, the endpoint may include an `https://` or `http://` prefix; the protocol part is automatically parsed and ignored during format validation.
+
+When legacy names are used, the system determines whether the target is an OSS-HDFS service by checking whether the `endpoint` contains `oss-dls`.
+
+### Configuration Files
+
+> Supported since version 3.1
+
+OSS-HDFS supports specifying the directory of HDFS-related configuration files via the `oss.hdfs.hadoop.config.resources` parameter.
+
+The configuration file directory must contain the `hdfs-site.xml` and `core-site.xml` files. The default directory is `/plugins/hadoop_conf/` under the FE/BE deployment directory. All FE and BE nodes must configure the same relative path.
+
+If the configuration files contain parameters described above in this document, the parameters explicitly configured by the user take precedence. Multiple configuration files can be specified, separated by commas, e.g., `hadoop/conf/core-site.xml,hadoop/conf/hdfs-site.xml`.
+
+### Example Configuration
+
+```properties
+"oss.hdfs.access_key" = "your-access-key",
+"oss.hdfs.secret_key" = "your-secret-key",
+"oss.hdfs.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
+"oss.hdfs.region" = "cn-hangzhou"
+```
+
+For versions before 3.1:
+
+```properties
+"s3.access_key" = "your-access-key",
+"s3.secret_key" = "your-secret-key",
+"s3.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
+"s3.region" = "cn-hangzhou"
+```
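To illustrate the `oss.hdfs.hadoop.config.resources` parameter described above, a hypothetical catalog that also loads Hadoop configuration files from paths relative to the default `/plugins/hadoop_conf/` directory (catalog name and paths are placeholders):

```sql
CREATE CATALOG paimon_oss_hdfs PROPERTIES (
    "type" = "paimon",
    -- hypothetical bucket/warehouse path
    "warehouse" = "oss://my-bucket/warehouse",
    "oss.hdfs.endpoint" = "cn-hangzhou.oss-dls.aliyuncs.com",
    "oss.hdfs.region" = "cn-hangzhou",
    "oss.hdfs.access_key" = "your-access-key",
    "oss.hdfs.secret_key" = "your-secret-key",
    -- relative paths under /plugins/hadoop_conf/; must match on all FE/BE nodes
    "oss.hdfs.hadoop.config.resources" = "hadoop/conf/core-site.xml,hadoop/conf/hdfs-site.xml"
);
```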
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]