This is an automated email from the ASF dual-hosted git repository.
zykkk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git
The following commit(s) were added to refs/heads/master by this push:
new 447d5356e02 [feat](catalog) Support catalog attribute connectivity tester (#57004)
447d5356e02 is described below
commit 447d5356e0223907cc33b8fcb0c04af6ec8697a5
Author: zy-kkk <[email protected]>
AuthorDate: Thu Oct 30 16:09:44 2025 +0800
[feat](catalog) Support catalog attribute connectivity tester (#57004)
## PR Summary
This PR implements a **catalog connectivity testing framework** that
validates both metadata and storage connectivity before catalog
creation, helping users identify configuration issues early.
## Implementation Status
### Metadata Connectivity Testers
| Catalog Type | Implementation Status | Notes |
|-------------|----------------------|-------|
| **Hive HMS** | ✅ Fully Implemented | Tests HMS connection using `getAllDatabases()` API |
| **Iceberg HMS** | ✅ Fully Implemented | Delegates to HMS tester + validates warehouse |
| **Iceberg REST** | ✅ Fully Implemented | Tests REST API and retrieves warehouse location |
| **AWS Glue (Hive)** | ✅ Fully Implemented | Implemented using AWS Glue `getDatabases()` API |
| **AWS Glue (Iceberg)** | ✅ Fully Implemented | Implemented using AWS Glue `getDatabases()` API |
| **Iceberg S3 Tables (Glue REST)** | ✅ Fully Implemented | Same as Iceberg REST |
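For reference, a minimal standalone probe in the same spirit as the HMS tester (illustrative only, not the `HMSBaseConnectivityTester` added in this PR; the thrift URI below is a placeholder):
```java
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;

public class HmsConnectivityProbe {
    public static void main(String[] args) throws Exception {
        HiveConf conf = new HiveConf();
        // Placeholder metastore URI; replace with the real HMS endpoint.
        conf.set("hive.metastore.uris", "thrift://localhost:9083");
        HiveMetaStoreClient client = new HiveMetaStoreClient(conf);
        try {
            // getAllDatabases() is a lightweight call that fails fast
            // if the metastore is unreachable or authentication fails.
            System.out.println("Visible databases: " + client.getAllDatabases().size());
        } finally {
            client.close();
        }
    }
}
```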
### Storage Connectivity Testers
| Storage Type | Implementation Status | Notes |
|-------------|----------------------|-------|
| **S3** | ✅ Fully Implemented | Tests S3 connectivity using `headBucket()` operation |
| **MinIO** | ✅ Fully Implemented | S3-compatible implementation |
| **HDFS** | ⚠️ No-op Implementation | Skips test (always returns success) - TODO for future |
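A minimal sketch of the `headBucket()` idea using the AWS SDK for Java v2 (illustrative only; the bucket, region, and endpoint below are placeholders, and credentials come from the SDK's default provider chain):
```java
import java.net.URI;

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.HeadBucketRequest;

public class S3ConnectivityProbe {
    public static void main(String[] args) {
        try (S3Client s3 = S3Client.builder()
                .region(Region.US_EAST_1)                                  // placeholder region
                .endpointOverride(URI.create("https://s3.amazonaws.com"))  // placeholder endpoint
                .build()) {
            // headBucket() only succeeds if the endpoint is reachable, the bucket
            // exists, and the credentials are allowed to access it.
            s3.headBucket(HeadBucketRequest.builder().bucket("my-bucket").build());
            System.out.println("S3 connectivity OK");
        }
    }
}
```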
## Architecture
### Core Design Patterns
1. **Template Method Pattern**: Parent classes handle validation logic, subclasses implement specific tests (see the sketch after this list)
   - `AbstractS3CompatibleConnectivityTester` enforces validation before testing
   - `doTestFeConnection()` hook method for subclass implementation
2. **Separate Meta & Storage Testing**:
   - **Meta Tester**: Validates catalog metadata service (HMS, REST, Glue)
   - **Storage Tester**: Validates underlying storage (S3, HDFS, MinIO)
3. **Smart Validation**:
   - Skips tests when credentials are missing (IAM role scenarios)
   - Validates the warehouse location scheme before testing
   - Only tests compatible storage types (e.g., S3 tester requires `s3://` or `s3a://` URIs)
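A minimal sketch of the template-method split from point 1, with simplified names (the real classes are `AbstractS3CompatibleConnectivityTester` and its concrete subclasses):
```java
// Simplified illustration: the parent class owns the validation step,
// subclasses only provide the storage-specific check.
public abstract class StorageTesterTemplate {
    private final String testLocation;

    protected StorageTesterTemplate(String testLocation) {
        this.testLocation = testLocation;
    }

    /** Template method: validate first, then delegate to the subclass hook. */
    public final void test() throws Exception {
        if (testLocation == null
                || !(testLocation.startsWith("s3://") || testLocation.startsWith("s3a://"))) {
            return; // incompatible or missing location: skip instead of failing
        }
        doTestFeConnection();
    }

    /** Hook implemented by concrete testers (e.g. S3, MinIO). */
    protected abstract void doTestFeConnection() throws Exception;
}
```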
## Testing
### Test Coverage
| Test Suite | Location | Coverage |
|------------|----------|----------|
| **Iceberg REST + MinIO** | `external_table_p0/iceberg/test_iceberg_rest_minio_connectivity.groovy` | Meta failure, Storage failure, Success |
| **Hive/Iceberg + S3** | `external_table_p2/test_connection/test_connectivity.groovy` | HMS+S3, Iceberg HMS+S3, Iceberg REST+S3 |
### Test Scenarios
Each test validates:
- ✅ Metadata connectivity failure (invalid HMS/REST URI)
- ✅ Storage connectivity failure (invalid credentials)
- ✅ Successful connectivity (valid configuration)
## Usage
Enable connectivity testing by adding `'test_connection' = 'true'` to
catalog properties:
```sql
CREATE CATALOG iceberg_catalog PROPERTIES (
    'type' = 'iceberg',
    'iceberg.catalog.type' = 'rest',
    'iceberg.rest.uri' = 'http://localhost:8181',
    'warehouse' = 's3a://my-bucket/warehouse',
    's3.access_key' = 'AKIAXXXXX',
    's3.secret_key' = 'secret',
    's3.endpoint' = 'https://s3.amazonaws.com',
    's3.region' = 'us-east-1',
    'test_connection' = 'true' -- Enable connectivity test
);
```
Error messages include specific tester types for easier debugging:
- `HMS connectivity test failed: ...`
- `Iceberg REST connectivity test failed: ...`
- `S3 connectivity test failed: ...`
- `Minio connectivity test failed: ...`
## Future Work
1. **HDFS Testing**: Implement actual HDFS connectivity validation
---
.../io/fs/connectivity/s3_connectivity_tester.cpp | 53 ++++
be/src/io/fs/connectivity/s3_connectivity_tester.h | 34 +++
.../connectivity/storage_connectivity_tester.cpp | 37 +++
.../fs/connectivity/storage_connectivity_tester.h | 35 +++
be/src/service/backend_service.cpp | 8 +
be/src/service/backend_service.h | 3 +
.../apache/doris/datasource/ExternalCatalog.java | 15 +
.../AWSGlueMetaStoreBaseConnectivityTester.java | 70 +++++
.../AbstractHiveConnectivityTester.java | 32 +++
.../AbstractIcebergConnectivityTester.java | 36 +++
.../AbstractS3CompatibleConnectivityTester.java | 69 +++++
.../CatalogConnectivityTestCoordinator.java | 320 +++++++++++++++++++++
.../connectivity/HMSBaseConnectivityTester.java | 62 ++++
.../HdfsCompatibleConnectivityTester.java | 51 ++++
.../connectivity/HdfsConnectivityTester.java | 26 ++
.../HiveGlueMetaStoreConnectivityTester.java | 41 +++
.../connectivity/HiveHMSConnectivityTester.java | 40 +++
.../IcebergGlueMetaStoreConnectivityTester.java | 41 +++
.../connectivity/IcebergHMSConnectivityTester.java | 40 +++
.../IcebergRestConnectivityTester.java | 85 ++++++
...IcebergS3TablesMetaStoreConnectivityTester.java | 32 +++
.../connectivity/MetaConnectivityTester.java | 37 +++
.../connectivity/MinioConnectivityTester.java | 32 +++
.../connectivity/S3ConnectivityTester.java | 32 +++
.../connectivity/StorageConnectivityTester.java | 99 +++++++
.../metastore/AWSGlueMetaStoreBaseProperties.java | 65 +++++
.../metastore/AbstractIcebergProperties.java | 1 +
.../metastore/HiveGlueMetaStoreProperties.java | 2 +
.../property/metastore/HiveHMSProperties.java | 3 +-
.../metastore/IcebergGlueMetaStoreProperties.java | 2 +
.../metastore/IcebergHMSMetaStoreProperties.java | 2 +
.../property/metastore/IcebergRestProperties.java | 2 +
.../property/storage/HdfsProperties.java | 17 ++
.../datasource/property/storage/S3Properties.java | 1 +
.../property/storage/StorageProperties.java | 3 +-
.../org/apache/doris/common/GenericPoolTest.java | 6 +
.../apache/doris/utframe/MockedBackendFactory.java | 9 +
gensrc/thrift/BackendService.thrift | 12 +
.../test_iceberg_rest_minio_connectivity.groovy | 105 +++++++
.../test_connection/test_connectivity.groovy | 207 +++++++++++++
40 files changed, 1765 insertions(+), 2 deletions(-)
diff --git a/be/src/io/fs/connectivity/s3_connectivity_tester.cpp
b/be/src/io/fs/connectivity/s3_connectivity_tester.cpp
new file mode 100644
index 00000000000..661e251b3eb
--- /dev/null
+++ b/be/src/io/fs/connectivity/s3_connectivity_tester.cpp
@@ -0,0 +1,53 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+#include "io/fs/connectivity/s3_connectivity_tester.h"
+
+#include "io/fs/s3_file_system.h"
+#include "util/s3_uri.h"
+
+namespace doris::io {
+#include "common/compile_check_begin.h"
+
+Status S3ConnectivityTester::test(const std::map<std::string, std::string>&
properties) {
+ auto it = properties.find(TEST_LOCATION);
+ S3URI s3_uri(it->second);
+ RETURN_IF_ERROR(s3_uri.parse());
+
+ std::string bucket = s3_uri.get_bucket();
+ if (bucket.empty()) {
+ return Status::InvalidArgument("Failed to extract bucket from
location: {}", it->second);
+ }
+
+ S3Conf s3_conf;
+ RETURN_IF_ERROR(S3ClientFactory::convert_properties_to_s3_conf(properties,
s3_uri, &s3_conf));
+
+ auto obj_client = S3ClientFactory::instance().create(s3_conf.client_conf);
+ if (!obj_client) {
+ return Status::InternalError("Failed to create S3 client");
+ }
+
+ auto resp = obj_client->head_object({.bucket = bucket, .key = ""});
+ if (resp.resp.status.code != ErrorCode::OK && resp.resp.status.code !=
ErrorCode::NOT_FOUND) {
+ return Status::IOError("S3 connectivity test failed for bucket '{}':
{}", bucket,
+ resp.resp.status.msg);
+ }
+
+ return Status::OK();
+}
+#include "common/compile_check_end.h"
+} // namespace doris::io
diff --git a/be/src/io/fs/connectivity/s3_connectivity_tester.h
b/be/src/io/fs/connectivity/s3_connectivity_tester.h
new file mode 100644
index 00000000000..010f5c22440
--- /dev/null
+++ b/be/src/io/fs/connectivity/s3_connectivity_tester.h
@@ -0,0 +1,34 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+#pragma once
+
+#include <map>
+#include <string>
+
+#include "common/status.h"
+
+namespace doris::io {
+
+class S3ConnectivityTester {
+public:
+ static constexpr const char* TEST_LOCATION = "test_location";
+
+ static Status test(const std::map<std::string, std::string>& properties);
+};
+
+} // namespace doris::io
diff --git a/be/src/io/fs/connectivity/storage_connectivity_tester.cpp
b/be/src/io/fs/connectivity/storage_connectivity_tester.cpp
new file mode 100644
index 00000000000..8ccf9b1ce02
--- /dev/null
+++ b/be/src/io/fs/connectivity/storage_connectivity_tester.cpp
@@ -0,0 +1,37 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+#include "io/fs/connectivity/storage_connectivity_tester.h"
+
+#include "io/fs/connectivity/s3_connectivity_tester.h"
+
+namespace doris::io {
+#include "common/compile_check_begin.h"
+
+Status StorageConnectivityTester::test(TStorageBackendType::type type,
+ const std::map<std::string,
std::string>& properties) {
+ switch (type) {
+ case TStorageBackendType::S3:
+ return S3ConnectivityTester::test(properties);
+ case TStorageBackendType::HDFS:
+ case TStorageBackendType::AZURE:
+ default:
+ return Status::OK();
+ }
+}
+#include "common/compile_check_end.h"
+} // namespace doris::io
diff --git a/be/src/io/fs/connectivity/storage_connectivity_tester.h
b/be/src/io/fs/connectivity/storage_connectivity_tester.h
new file mode 100644
index 00000000000..3cec34475cc
--- /dev/null
+++ b/be/src/io/fs/connectivity/storage_connectivity_tester.h
@@ -0,0 +1,35 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+#pragma once
+
+#include <gen_cpp/Types_types.h>
+
+#include <map>
+#include <string>
+
+#include "common/status.h"
+
+namespace doris::io {
+
+class StorageConnectivityTester {
+public:
+ static Status test(TStorageBackendType::type type,
+ const std::map<std::string, std::string>& properties);
+};
+
+} // namespace doris::io
diff --git a/be/src/service/backend_service.cpp
b/be/src/service/backend_service.cpp
index 12c091d2c69..da87a73ee2c 100644
--- a/be/src/service/backend_service.cpp
+++ b/be/src/service/backend_service.cpp
@@ -50,6 +50,7 @@
#include "common/logging.h"
#include "common/status.h"
#include "http/http_client.h"
+#include "io/fs/connectivity/storage_connectivity_tester.h"
#include "io/fs/local_file_system.h"
#include "olap/olap_common.h"
#include "olap/olap_define.h"
@@ -1290,5 +1291,12 @@ void
BaseBackendService::get_dictionary_status(TDictionaryStatusList& result,
LOG(INFO) << "query for dictionary status, return " <<
result.dictionary_status_list.size()
<< " rows";
}
+
+void
BaseBackendService::test_storage_connectivity(TTestStorageConnectivityResponse&
response,
+ const
TTestStorageConnectivityRequest& request) {
+ Status status = io::StorageConnectivityTester::test(request.type,
request.properties);
+ response.__set_status(status.to_thrift());
+}
+
#include "common/compile_check_end.h"
} // namespace doris
diff --git a/be/src/service/backend_service.h b/be/src/service/backend_service.h
index 75aff2d34eb..6c43f69170d 100644
--- a/be/src/service/backend_service.h
+++ b/be/src/service/backend_service.h
@@ -119,6 +119,9 @@ public:
void get_dictionary_status(TDictionaryStatusList& result,
const std::vector<int64_t>& dictionary_id)
override;
+ void test_storage_connectivity(TTestStorageConnectivityResponse& response,
+ const TTestStorageConnectivityRequest&
request) override;
+
////////////////////////////////////////////////////////////////////////////
// begin cloud backend functions
////////////////////////////////////////////////////////////////////////////
diff --git
a/fe/fe-core/src/main/java/org/apache/doris/datasource/ExternalCatalog.java
b/fe/fe-core/src/main/java/org/apache/doris/datasource/ExternalCatalog.java
index 1c9bb438ddf..ad92cd4070a 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/datasource/ExternalCatalog.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/datasource/ExternalCatalog.java
@@ -36,6 +36,7 @@ import org.apache.doris.common.Version;
import org.apache.doris.common.security.authentication.ExecutionAuthenticator;
import org.apache.doris.common.util.Util;
import org.apache.doris.datasource.ExternalSchemaCache.SchemaCacheKey;
+import
org.apache.doris.datasource.connectivity.CatalogConnectivityTestCoordinator;
import org.apache.doris.datasource.es.EsExternalDatabase;
import org.apache.doris.datasource.hive.HMSExternalCatalog;
import org.apache.doris.datasource.hive.HMSExternalDatabase;
@@ -128,6 +129,9 @@ public abstract class ExternalCatalog
protected static final int ICEBERG_CATALOG_EXECUTOR_THREAD_NUM =
Runtime.getRuntime().availableProcessors();
+ public static final String TEST_CONNECTION = "test_connection";
+ public static final boolean DEFAULT_TEST_CONNECTION = false;
+
// Unique id of this catalog, will be assigned after catalog is loaded.
@SerializedName(value = "id")
protected long id;
@@ -247,6 +251,17 @@ public abstract class ExternalCatalog
// Will be called when creating catalog(not replaying).
// Subclass can override this method to do some check when creating
catalog.
public void checkWhenCreating() throws DdlException {
+ boolean testConnection = Boolean.parseBoolean(
+ catalogProperty.getOrDefault(TEST_CONNECTION,
String.valueOf(DEFAULT_TEST_CONNECTION)));
+
+ if (testConnection) {
+ CatalogConnectivityTestCoordinator testCoordinator = new
CatalogConnectivityTestCoordinator(
+ name,
+ catalogProperty.getMetastoreProperties(),
+ catalogProperty.getStoragePropertiesMap()
+ );
+ testCoordinator.runTests();
+ }
}
/**
diff --git
a/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/AWSGlueMetaStoreBaseConnectivityTester.java
b/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/AWSGlueMetaStoreBaseConnectivityTester.java
new file mode 100644
index 00000000000..fc599c2e2bc
--- /dev/null
+++
b/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/AWSGlueMetaStoreBaseConnectivityTester.java
@@ -0,0 +1,70 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.datasource.connectivity;
+
+import
org.apache.doris.datasource.property.metastore.AWSGlueMetaStoreBaseProperties;
+
+import org.apache.commons.lang3.StringUtils;
+import software.amazon.awssdk.regions.Region;
+import software.amazon.awssdk.services.glue.GlueClient;
+import software.amazon.awssdk.services.glue.GlueClientBuilder;
+import software.amazon.awssdk.services.glue.model.GetDatabasesRequest;
+
+import java.net.URI;
+
+public class AWSGlueMetaStoreBaseConnectivityTester implements
MetaConnectivityTester {
+ private final AWSGlueMetaStoreBaseProperties properties;
+
+ public
AWSGlueMetaStoreBaseConnectivityTester(AWSGlueMetaStoreBaseProperties
properties) {
+ this.properties = properties;
+ }
+
+ @Override
+ public String getTestType() {
+ return "AWS Glue";
+ }
+
+ @Override
+ public void testConnection() throws Exception {
+ GlueClientBuilder clientBuilder = GlueClient.builder();
+
+ String glueRegion = properties.getGlueRegion();
+ String glueEndpoint = properties.getGlueEndpoint();
+
+ // Set region
+ if (StringUtils.isNotBlank(glueRegion)) {
+ clientBuilder.region(Region.of(glueRegion));
+ }
+
+ // Set endpoint if specified
+ if (StringUtils.isNotBlank(glueEndpoint)) {
+ clientBuilder.endpointOverride(URI.create(glueEndpoint));
+ }
+
+ // Set credentials using properties method
+
clientBuilder.credentialsProvider(properties.getAwsCredentialsProvider());
+
+ // Test connection by listing databases (lightweight operation)
+ try (GlueClient glueClient = clientBuilder.build()) {
+ GetDatabasesRequest request = GetDatabasesRequest.builder()
+ .maxResults(1)
+ .build();
+ glueClient.getDatabases(request);
+ }
+ }
+}
diff --git
a/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/AbstractHiveConnectivityTester.java
b/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/AbstractHiveConnectivityTester.java
new file mode 100644
index 00000000000..468730b5b9b
--- /dev/null
+++
b/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/AbstractHiveConnectivityTester.java
@@ -0,0 +1,32 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.datasource.connectivity;
+
+import org.apache.doris.datasource.property.metastore.AbstractHiveProperties;
+
+public abstract class AbstractHiveConnectivityTester implements
MetaConnectivityTester {
+ protected final AbstractHiveProperties properties;
+
+ protected AbstractHiveConnectivityTester(AbstractHiveProperties
properties) {
+ this.properties = properties;
+ }
+
+ @Override
+ public abstract void testConnection() throws Exception;
+
+}
diff --git
a/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/AbstractIcebergConnectivityTester.java
b/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/AbstractIcebergConnectivityTester.java
new file mode 100644
index 00000000000..9fd5077c82b
--- /dev/null
+++
b/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/AbstractIcebergConnectivityTester.java
@@ -0,0 +1,36 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.datasource.connectivity;
+
+import
org.apache.doris.datasource.property.metastore.AbstractIcebergProperties;
+
+public abstract class AbstractIcebergConnectivityTester implements
MetaConnectivityTester {
+ protected final AbstractIcebergProperties properties;
+
+ protected AbstractIcebergConnectivityTester(AbstractIcebergProperties
properties) {
+ this.properties = properties;
+ }
+
+ @Override
+ public abstract void testConnection() throws Exception;
+
+ @Override
+ public String getTestLocation() {
+ return properties.getWarehouse();
+ }
+}
diff --git
a/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/AbstractS3CompatibleConnectivityTester.java
b/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/AbstractS3CompatibleConnectivityTester.java
new file mode 100644
index 00000000000..23a5d407dcd
--- /dev/null
+++
b/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/AbstractS3CompatibleConnectivityTester.java
@@ -0,0 +1,69 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.datasource.connectivity;
+
+import org.apache.doris.common.util.S3URI;
+import org.apache.doris.common.util.S3Util;
+import
org.apache.doris.datasource.property.storage.AbstractS3CompatibleProperties;
+import org.apache.doris.thrift.TStorageBackendType;
+
+import software.amazon.awssdk.services.s3.S3Client;
+
+import java.net.URI;
+import java.util.HashMap;
+import java.util.Map;
+
+public abstract class AbstractS3CompatibleConnectivityTester implements
StorageConnectivityTester {
+ private static final String TEST_LOCATION = "test_location";
+ protected final AbstractS3CompatibleProperties properties;
+ protected final String testLocation;
+
+ public
AbstractS3CompatibleConnectivityTester(AbstractS3CompatibleProperties
properties, String testLocation) {
+ this.properties = properties;
+ this.testLocation = testLocation;
+ }
+
+ @Override
+ public TStorageBackendType getStorageType() {
+ return TStorageBackendType.S3;
+ }
+
+ @Override
+ public Map<String, String> getBackendProperties() {
+ Map<String, String> props = new
HashMap<>(properties.getBackendConfigProperties());
+ props.put(TEST_LOCATION, testLocation);
+ return props;
+ }
+
+ @Override
+ public void testFeConnection() throws Exception {
+ S3URI s3Uri = S3URI.create(testLocation,
+ Boolean.parseBoolean(properties.getUsePathStyle()),
+
Boolean.parseBoolean(properties.getForceParsingByStandardUrl()));
+
+ String endpoint = properties.getEndpoint();
+
+ try (S3Client client = S3Util.buildS3Client(
+ URI.create(endpoint),
+ properties.getRegion(),
+ Boolean.parseBoolean(properties.getUsePathStyle()),
+ properties.getAwsCredentialsProvider())) {
+ client.headBucket(b -> b.bucket(s3Uri.getBucket()));
+ }
+ }
+}
diff --git
a/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/CatalogConnectivityTestCoordinator.java
b/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/CatalogConnectivityTestCoordinator.java
new file mode 100644
index 00000000000..35b3ba744c3
--- /dev/null
+++
b/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/CatalogConnectivityTestCoordinator.java
@@ -0,0 +1,320 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.datasource.connectivity;
+
+import org.apache.doris.common.DdlException;
+import
org.apache.doris.datasource.property.metastore.HiveGlueMetaStoreProperties;
+import org.apache.doris.datasource.property.metastore.HiveHMSProperties;
+import
org.apache.doris.datasource.property.metastore.IcebergGlueMetaStoreProperties;
+import
org.apache.doris.datasource.property.metastore.IcebergHMSMetaStoreProperties;
+import org.apache.doris.datasource.property.metastore.IcebergRestProperties;
+import
org.apache.doris.datasource.property.metastore.IcebergS3TablesMetaStoreProperties;
+import org.apache.doris.datasource.property.metastore.MetastoreProperties;
+import org.apache.doris.datasource.property.storage.HdfsProperties;
+import org.apache.doris.datasource.property.storage.MinioProperties;
+import org.apache.doris.datasource.property.storage.S3Properties;
+import org.apache.doris.datasource.property.storage.StorageProperties;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.util.Map;
+
+/**
+ * Coordinator for catalog connectivity testing.
+ * This class orchestrates the testing of metadata services and storage systems
+ * when creating external catalogs with test_connection=true.
+ */
+public class CatalogConnectivityTestCoordinator {
+ private static final Logger LOG =
LogManager.getLogger(CatalogConnectivityTestCoordinator.class);
+
+ private final String catalogName;
+ private final MetastoreProperties metastoreProperties;
+ private final Map<StorageProperties.Type, StorageProperties>
storagePropertiesMap;
+
+ private String warehouseLocation;
+
+ public CatalogConnectivityTestCoordinator(
+ String catalogName,
+ MetastoreProperties metastoreProperties,
+ Map<StorageProperties.Type, StorageProperties>
storagePropertiesMap) {
+ this.catalogName = catalogName;
+ this.metastoreProperties = metastoreProperties;
+ this.storagePropertiesMap = storagePropertiesMap;
+ }
+
+ /**
+ * Run all connectivity tests for the catalog.
+ *
+ * @throws DdlException if any test fails
+ */
+ public void runTests() throws DdlException {
+ // 1. Test metadata service
+ testMetadataService();
+
+ // 2. Test object storage for warehouse (if applicable)
+ StorageProperties testObjectStorageProperties =
getTestObjectStorageProperties();
+ if (testObjectStorageProperties != null) {
+ testObjectStorageForWarehouse(testObjectStorageProperties);
+ }
+
+ // 3. Test explicitly configured HDFS (if applicable)
+ if (shouldTestHdfs()) {
+ testExplicitlyConfiguredHdfs();
+ }
+ }
+
+ /**
+ * Test metadata service connectivity (HMS, Glue, REST).
+ * Also stores the warehouse location to class variable for later use.
+ *
+ * @throws DdlException if test fails
+ */
+ private void testMetadataService() throws DdlException {
+ MetaConnectivityTester metaTester =
createMetaTester(metastoreProperties);
+
+ LOG.info("Testing {} connectivity for catalog '{}'",
metaTester.getTestType(), catalogName);
+
+ try {
+ metaTester.testConnection();
+ } catch (Exception e) {
+ throw new DdlException(metaTester.getTestType() + " connectivity
test failed: "
+ + e.getMessage());
+ }
+
+ // Store warehouse location for later use
+ this.warehouseLocation = metaTester.getTestLocation();
+ if (StringUtils.isNotBlank(this.warehouseLocation)) {
+ LOG.debug("Got warehouse location from metadata service: {}",
this.warehouseLocation);
+ }
+ }
+
+ /**
+ * Check if object storage test should be performed.
+ * Also caches the matched storage for later use in
testObjectStorageForWarehouse().
+ */
+ private StorageProperties getTestObjectStorageProperties() {
+ if (StringUtils.isBlank(this.warehouseLocation)) {
+ LOG.debug("Skip object storage test: no warehouse location from
metadata service for catalog '{}'",
+ catalogName);
+ return null;
+ }
+
+ StorageProperties matchedObjectStorage =
findMatchingObjectStorage(this.warehouseLocation);
+ if (matchedObjectStorage == null) {
+ LOG.debug("Skip object storage test: no storage configured for
warehouse '{}' in catalog '{}'",
+ this.warehouseLocation, catalogName);
+ return null;
+ }
+
+ return matchedObjectStorage;
+ }
+
+ /**
+ * Test object storage that matches the warehouse location from metadata
service.
+ * Uses the cached matchedObjectStorage from shouldTestObjectStorage().
+ *
+ * @throws DdlException if test fails
+ */
+ private void testObjectStorageForWarehouse(StorageProperties
testObjectStorageProperties) throws DdlException {
+ LOG.info("Testing {} connectivity for warehouse '{}' in catalog '{}'",
+ testObjectStorageProperties.getStorageName(),
this.warehouseLocation, catalogName);
+
+ StorageConnectivityTester tester =
createStorageTester(testObjectStorageProperties, this.warehouseLocation);
+
+ // Test FE connection
+ try {
+ tester.testFeConnection();
+ } catch (Exception e) {
+ throw new DdlException(tester.getTestType() + " connectivity test
failed: " + e.getMessage());
+ }
+
+ // Test BE connection
+ try {
+ tester.testBeConnection();
+ } catch (Exception e) {
+ throw new DdlException(tester.getTestType()
+ + " connectivity test failed (compute node): " +
e.getMessage());
+ }
+ }
+
+ /**
+ * Find object storage that can handle the given warehouse location.
+ *
+ * @param warehouse warehouse location
+ * @return matching storage properties, or null if not found
+ */
+ private StorageProperties findMatchingObjectStorage(String warehouse) {
+ // Check S3/Minio
+ if (warehouse.startsWith("s3://") || warehouse.startsWith("s3a://")) {
+ // Priority: Minio > S3 (if Minio is configured, use it for s3://)
+ StorageProperties minio =
storagePropertiesMap.get(StorageProperties.Type.MINIO);
+ if (minio != null && isConfiguredStorage(minio)) {
+ return minio;
+ }
+
+ StorageProperties s3 =
storagePropertiesMap.get(StorageProperties.Type.S3);
+ if (s3 != null && isConfiguredStorage(s3)) {
+ return s3;
+ }
+ }
+ return null;
+ }
+
+ /**
+ * Check if storage has credentials configured.
+ * Check for access key, IAM role, or other authentication methods.
+ */
+ private boolean isConfiguredStorage(StorageProperties storage) {
+ // For S3: check access key or IAM role
+ if (storage instanceof S3Properties) {
+ S3Properties s3 = (S3Properties) storage;
+ return StringUtils.isNotBlank(s3.getAccessKey())
+ || StringUtils.isNotBlank(s3.getS3IAMRole());
+ }
+
+ // For Minio: check access key
+ if (storage instanceof MinioProperties) {
+ MinioProperties minio = (MinioProperties) storage;
+ return StringUtils.isNotBlank(minio.getAccessKey());
+ }
+
+ // For other storage types, assume configured if exists
+ return true;
+ }
+
+ /**
+ * Check if HDFS test should be performed.
+ */
+ private boolean shouldTestHdfs() {
+ StorageProperties hdfsStorage =
storagePropertiesMap.get(StorageProperties.Type.HDFS);
+ if (!(hdfsStorage instanceof HdfsProperties)) {
+ return false;
+ }
+
+ HdfsProperties hdfs = (HdfsProperties) hdfsStorage;
+
+ if (!hdfs.isExplicitlyConfigured()) {
+ LOG.debug("Skip HDFS test: not explicitly configured by user for
catalog '{}'", catalogName);
+ return false;
+ }
+
+ if (StringUtils.isBlank(hdfs.getDefaultFS())) {
+ LOG.debug("Skip HDFS test: fs.defaultFS not configured for catalog
'{}'", catalogName);
+ return false;
+ }
+
+ return true;
+ }
+
+ /**
+ * Test explicitly configured HDFS.
+ *
+ * @throws DdlException if test fails
+ */
+ private void testExplicitlyConfiguredHdfs() throws DdlException {
+ HdfsProperties hdfs = (HdfsProperties)
storagePropertiesMap.get(StorageProperties.Type.HDFS);
+ String defaultFS = hdfs.getDefaultFS();
+
+ LOG.info("Testing HDFS connectivity for '{}' in catalog '{}'",
defaultFS, catalogName);
+
+ StorageConnectivityTester tester = createStorageTester(hdfs,
defaultFS);
+
+ // Test FE connection
+ try {
+ tester.testFeConnection();
+ } catch (Exception e) {
+ throw new DdlException("HDFS connectivity test failed: " +
e.getMessage());
+ }
+
+ // Test BE connection
+ try {
+ tester.testBeConnection();
+ } catch (Exception e) {
+ throw new DdlException("HDFS connectivity test failed (compute
node): " + e.getMessage());
+ }
+ }
+
+ /**
+ * Create metadata connectivity tester based on properties type.
+ */
+ private MetaConnectivityTester createMetaTester(MetastoreProperties props)
{
+ // Hive HMS
+ if (props instanceof HiveHMSProperties) {
+ HiveHMSProperties hiveProps = (HiveHMSProperties) props;
+ return new HiveHMSConnectivityTester(hiveProps,
hiveProps.getHmsBaseProperties());
+ }
+
+ // Hive Glue
+ if (props instanceof HiveGlueMetaStoreProperties) {
+ HiveGlueMetaStoreProperties glueProps =
(HiveGlueMetaStoreProperties) props;
+ return new HiveGlueMetaStoreConnectivityTester(glueProps,
glueProps.getBaseProperties());
+ }
+
+ // Iceberg HMS
+ if (props instanceof IcebergHMSMetaStoreProperties) {
+ IcebergHMSMetaStoreProperties icebergHms =
(IcebergHMSMetaStoreProperties) props;
+ return new IcebergHMSConnectivityTester(icebergHms,
icebergHms.getHmsBaseProperties());
+ }
+
+ // Iceberg Glue
+ if (props instanceof IcebergGlueMetaStoreProperties) {
+ IcebergGlueMetaStoreProperties icebergGlue =
(IcebergGlueMetaStoreProperties) props;
+ return new IcebergGlueMetaStoreConnectivityTester(icebergGlue,
icebergGlue.getGlueProperties());
+ }
+
+ // Iceberg REST
+ if (props instanceof IcebergRestProperties) {
+ return new IcebergRestConnectivityTester((IcebergRestProperties)
props);
+ }
+
+ // Iceberg S3Table
+ if (props instanceof IcebergS3TablesMetaStoreProperties) {
+ return new
IcebergS3TablesMetaStoreConnectivityTester((IcebergS3TablesMetaStoreProperties)
props);
+ }
+
+ // Default: no-op tester
+ return new MetaConnectivityTester() {
+ };
+ }
+
+ /**
+ * Create storage connectivity tester based on properties type and
location.
+ */
+ private StorageConnectivityTester createStorageTester(StorageProperties
props, String location) {
+ // S3
+ if (props instanceof S3Properties) {
+ return new S3ConnectivityTester((S3Properties) props, location);
+ }
+
+ // Minio
+ if (props instanceof MinioProperties) {
+ return new MinioConnectivityTester((MinioProperties) props,
location);
+ }
+
+ // HDFS
+ if (props instanceof HdfsProperties) {
+ return new HdfsConnectivityTester((HdfsProperties) props);
+ }
+
+ // Default: no-op tester
+ return new StorageConnectivityTester() {
+ };
+ }
+}
diff --git
a/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/HMSBaseConnectivityTester.java
b/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/HMSBaseConnectivityTester.java
new file mode 100644
index 00000000000..a67900eb00d
--- /dev/null
+++
b/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/HMSBaseConnectivityTester.java
@@ -0,0 +1,62 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.datasource.connectivity;
+
+import org.apache.doris.datasource.property.metastore.HMSBaseProperties;
+
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.metastore.HiveMetaHookLoader;
+import org.apache.hadoop.hive.metastore.IMetaStoreClient;
+import org.apache.hadoop.hive.metastore.RetryingMetaStoreClient;
+
+public class HMSBaseConnectivityTester implements MetaConnectivityTester {
+ private static final HiveMetaHookLoader DUMMY_HOOK_LOADER = t -> null;
+ private final HMSBaseProperties properties;
+
+ public HMSBaseConnectivityTester(HMSBaseProperties properties) {
+ this.properties = properties;
+ }
+
+ @Override
+ public String getTestType() {
+ return "HMS";
+ }
+
+ @Override
+ public void testConnection() throws Exception {
+ HiveConf hiveConf = properties.getHiveConf();
+ IMetaStoreClient client = null;
+ try {
+ client = properties.getHmsAuthenticator()
+ .doAs(() -> RetryingMetaStoreClient.getProxy(hiveConf,
DUMMY_HOOK_LOADER,
+
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.class.getName()));
+
+ final IMetaStoreClient finalClient = client;
+ properties.getHmsAuthenticator()
+ .doAs(finalClient::getAllDatabases);
+ } finally {
+ if (client != null) {
+ try {
+ client.close();
+ } catch (Exception ignored) {
+ // ignore
+ }
+ }
+ }
+ }
+}
diff --git
a/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/HdfsCompatibleConnectivityTester.java
b/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/HdfsCompatibleConnectivityTester.java
new file mode 100644
index 00000000000..481017463d8
--- /dev/null
+++
b/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/HdfsCompatibleConnectivityTester.java
@@ -0,0 +1,51 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.datasource.connectivity;
+
+import org.apache.doris.datasource.property.storage.HdfsCompatibleProperties;
+import org.apache.doris.thrift.TStorageBackendType;
+
+public abstract class HdfsCompatibleConnectivityTester implements
StorageConnectivityTester {
+ protected final HdfsCompatibleProperties properties;
+
+ public HdfsCompatibleConnectivityTester(HdfsCompatibleProperties
properties) {
+ this.properties = properties;
+ }
+
+ @Override
+ public TStorageBackendType getStorageType() {
+ return TStorageBackendType.HDFS;
+ }
+
+ @Override
+ public String getTestType() {
+ return "HDFS";
+ }
+
+ @Override
+ public void testFeConnection() throws Exception {
+ // TODO: Implement HDFS connectivity test in the future if needed
+ // Currently, HDFS connectivity test is not required
+ }
+
+ @Override
+ public void testBeConnection() throws Exception {
+ // TODO: Implement HDFS connectivity test in the future if needed
+ // Currently, HDFS connectivity test is not required
+ }
+}
diff --git
a/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/HdfsConnectivityTester.java
b/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/HdfsConnectivityTester.java
new file mode 100644
index 00000000000..15668b7fdd8
--- /dev/null
+++
b/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/HdfsConnectivityTester.java
@@ -0,0 +1,26 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.datasource.connectivity;
+
+import org.apache.doris.datasource.property.storage.HdfsCompatibleProperties;
+
+public class HdfsConnectivityTester extends HdfsCompatibleConnectivityTester {
+ public HdfsConnectivityTester(HdfsCompatibleProperties properties) {
+ super(properties);
+ }
+}
diff --git
a/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/HiveGlueMetaStoreConnectivityTester.java
b/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/HiveGlueMetaStoreConnectivityTester.java
new file mode 100644
index 00000000000..87edad2c8be
--- /dev/null
+++
b/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/HiveGlueMetaStoreConnectivityTester.java
@@ -0,0 +1,41 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.datasource.connectivity;
+
+import
org.apache.doris.datasource.property.metastore.AWSGlueMetaStoreBaseProperties;
+import org.apache.doris.datasource.property.metastore.AbstractHiveProperties;
+
+public class HiveGlueMetaStoreConnectivityTester extends
AbstractHiveConnectivityTester {
+ private final AWSGlueMetaStoreBaseConnectivityTester glueTester;
+
+ public HiveGlueMetaStoreConnectivityTester(AbstractHiveProperties
properties,
+ AWSGlueMetaStoreBaseProperties awsGlueMetaStoreBaseProperties) {
+ super(properties);
+ this.glueTester = new
AWSGlueMetaStoreBaseConnectivityTester(awsGlueMetaStoreBaseProperties);
+ }
+
+ @Override
+ public String getTestType() {
+ return "Hive Glue";
+ }
+
+ @Override
+ public void testConnection() throws Exception {
+ glueTester.testConnection();
+ }
+}
diff --git
a/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/HiveHMSConnectivityTester.java
b/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/HiveHMSConnectivityTester.java
new file mode 100644
index 00000000000..367a2428bda
--- /dev/null
+++
b/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/HiveHMSConnectivityTester.java
@@ -0,0 +1,40 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.datasource.connectivity;
+
+import org.apache.doris.datasource.property.metastore.AbstractHiveProperties;
+import org.apache.doris.datasource.property.metastore.HMSBaseProperties;
+
+public class HiveHMSConnectivityTester extends AbstractHiveConnectivityTester {
+ private final HMSBaseConnectivityTester hmsTester;
+
+ public HiveHMSConnectivityTester(AbstractHiveProperties properties,
HMSBaseProperties hmsBaseProperties) {
+ super(properties);
+ this.hmsTester = new HMSBaseConnectivityTester(hmsBaseProperties);
+ }
+
+ @Override
+ public String getTestType() {
+ return "Hive HMS";
+ }
+
+ @Override
+ public void testConnection() throws Exception {
+ hmsTester.testConnection();
+ }
+}
diff --git
a/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/IcebergGlueMetaStoreConnectivityTester.java
b/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/IcebergGlueMetaStoreConnectivityTester.java
new file mode 100644
index 00000000000..3a543d194a7
--- /dev/null
+++
b/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/IcebergGlueMetaStoreConnectivityTester.java
@@ -0,0 +1,41 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.datasource.connectivity;
+
+import
org.apache.doris.datasource.property.metastore.AWSGlueMetaStoreBaseProperties;
+import
org.apache.doris.datasource.property.metastore.AbstractIcebergProperties;
+
+public class IcebergGlueMetaStoreConnectivityTester extends
AbstractIcebergConnectivityTester {
+ private final AWSGlueMetaStoreBaseConnectivityTester glueTester;
+
+ public IcebergGlueMetaStoreConnectivityTester(AbstractIcebergProperties
properties,
+ AWSGlueMetaStoreBaseProperties awsGlueMetaStoreBaseProperties) {
+ super(properties);
+ this.glueTester = new
AWSGlueMetaStoreBaseConnectivityTester(awsGlueMetaStoreBaseProperties);
+ }
+
+ @Override
+ public String getTestType() {
+ return "Iceberg Glue";
+ }
+
+ @Override
+ public void testConnection() throws Exception {
+ glueTester.testConnection();
+ }
+}
diff --git
a/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/IcebergHMSConnectivityTester.java
b/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/IcebergHMSConnectivityTester.java
new file mode 100644
index 00000000000..04ff5ea385c
--- /dev/null
+++
b/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/IcebergHMSConnectivityTester.java
@@ -0,0 +1,40 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.datasource.connectivity;
+
+import
org.apache.doris.datasource.property.metastore.AbstractIcebergProperties;
+import org.apache.doris.datasource.property.metastore.HMSBaseProperties;
+
+public class IcebergHMSConnectivityTester extends
AbstractIcebergConnectivityTester {
+ private final HMSBaseConnectivityTester hmsTester;
+
+ public IcebergHMSConnectivityTester(AbstractIcebergProperties properties,
HMSBaseProperties hmsBaseProperties) {
+ super(properties);
+ this.hmsTester = new HMSBaseConnectivityTester(hmsBaseProperties);
+ }
+
+ @Override
+ public String getTestType() {
+ return "Iceberg HMS";
+ }
+
+ @Override
+ public void testConnection() throws Exception {
+ hmsTester.testConnection();
+ }
+}
diff --git
a/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/IcebergRestConnectivityTester.java
b/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/IcebergRestConnectivityTester.java
new file mode 100644
index 00000000000..a9659d60d2d
--- /dev/null
+++
b/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/IcebergRestConnectivityTester.java
@@ -0,0 +1,85 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.datasource.connectivity;
+
+import
org.apache.doris.datasource.property.metastore.AbstractIcebergProperties;
+import org.apache.doris.datasource.property.metastore.IcebergRestProperties;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.iceberg.CatalogProperties;
+import org.apache.iceberg.rest.RESTSessionCatalog;
+
+import java.util.Map;
+import java.util.regex.Pattern;
+
+public class IcebergRestConnectivityTester extends
AbstractIcebergConnectivityTester {
+ // For Polaris REST catalog compatibility
+ private static final String DEFAULT_BASE_LOCATION =
"default-base-location";
+ private static final Pattern LOCATION_PATTERN =
Pattern.compile("^(s3|s3a)://.+");
+
+ private String warehouseLocation;
+
+ public IcebergRestConnectivityTester(AbstractIcebergProperties properties)
{
+ super(properties);
+ }
+
+ @Override
+ public String getTestType() {
+ return "Iceberg REST";
+ }
+
+ @Override
+ public void testConnection() throws Exception {
+ Map<String, String> restProps = ((IcebergRestProperties)
properties).getIcebergRestCatalogProperties();
+
+ try (RESTSessionCatalog catalog = new RESTSessionCatalog()) {
+ catalog.initialize("connectivity-test", restProps);
+
+ Map<String, String> mergedProps = catalog.properties();
+ String location =
mergedProps.get(CatalogProperties.WAREHOUSE_LOCATION);
+ this.warehouseLocation = validateLocation(location);
+ if (this.warehouseLocation == null) {
+ location = mergedProps.get(DEFAULT_BASE_LOCATION);
+ this.warehouseLocation = validateLocation(location);
+ }
+ }
+ }
+
+ @Override
+ public String getTestLocation() {
+ // First try to use configured warehouse
+ String location = validateLocation(properties.getWarehouse());
+ if (location != null) {
+ return location;
+ }
+ // If configured warehouse is not valid, fallback to REST API
warehouse (already validated)
+ return this.warehouseLocation;
+ }
+
+ /**
+ * Validate if the given location is a valid storage URI.
+ * This method is specific to IcebergRestConnectivityTester because it
needs to
+ * validate warehouse locations returned from REST API.
+ */
+ private String validateLocation(String location) {
+ if (StringUtils.isNotBlank(location) &&
LOCATION_PATTERN.matcher(location).matches()) {
+ return location;
+ }
+ return null;
+ }
+}
diff --git
a/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/IcebergS3TablesMetaStoreConnectivityTester.java
b/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/IcebergS3TablesMetaStoreConnectivityTester.java
new file mode 100644
index 00000000000..c4444839771
--- /dev/null
+++
b/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/IcebergS3TablesMetaStoreConnectivityTester.java
@@ -0,0 +1,32 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.datasource.connectivity;
+
+import
org.apache.doris.datasource.property.metastore.AbstractIcebergProperties;
+
+public class IcebergS3TablesMetaStoreConnectivityTester extends
AbstractIcebergConnectivityTester {
+ protected IcebergS3TablesMetaStoreConnectivityTester(
+ AbstractIcebergProperties properties) {
+ super(properties);
+ }
+
+ @Override
+ public void testConnection() throws Exception {
+        // TODO: Implement Iceberg S3 Tables connectivity test in the future if needed
+ }
+}
diff --git
a/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/MetaConnectivityTester.java
b/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/MetaConnectivityTester.java
new file mode 100644
index 00000000000..2f2a9a34140
--- /dev/null
+++
b/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/MetaConnectivityTester.java
@@ -0,0 +1,37 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.datasource.connectivity;
+
+public interface MetaConnectivityTester {
+
+ default void testConnection() throws Exception {
+ // Default: test passes (no-op)
+ }
+
+ default String getTestLocation() {
+ return null;
+ }
+
+ /**
+ * Returns the type of meta connectivity test for error messages.
+     * Default implementation returns "Meta" for a generic meta connectivity test.
+ */
+ default String getTestType() {
+ return "Meta";
+ }
+}
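
MetaConnectivityTester is the extension point for new metastore types: an implementation only needs to override testConnection() and getTestType() (and optionally getTestLocation()). The sketch below is illustrative only — the class name and the plain TCP probe are assumptions, not part of this patch — but it shows the minimal shape of an implementor.

```java
// Hypothetical example (not included in this patch): a metadata tester that only
// checks whether a metastore endpoint accepts TCP connections. It uses nothing
// beyond the MetaConnectivityTester interface above and the JDK.
package org.apache.doris.datasource.connectivity;

import java.net.InetSocketAddress;
import java.net.Socket;

public class ExampleSocketMetaConnectivityTester implements MetaConnectivityTester {
    private final String host;
    private final int port;

    public ExampleSocketMetaConnectivityTester(String host, int port) {
        this.host = host;
        this.port = port;
    }

    @Override
    public String getTestType() {
        // Used to prefix error messages, e.g. "Example Metastore connectivity test failed".
        return "Example Metastore";
    }

    @Override
    public void testConnection() throws Exception {
        // Fail fast if the endpoint is unreachable; 5s is an arbitrary illustrative timeout.
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 5000);
        }
    }
}
```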
diff --git
a/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/MinioConnectivityTester.java
b/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/MinioConnectivityTester.java
new file mode 100644
index 00000000000..6c9ada8c490
--- /dev/null
+++
b/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/MinioConnectivityTester.java
@@ -0,0 +1,32 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.datasource.connectivity;
+
+import org.apache.doris.datasource.property.storage.MinioProperties;
+
+public class MinioConnectivityTester extends AbstractS3CompatibleConnectivityTester {
+
+    public MinioConnectivityTester(MinioProperties properties, String testLocation) {
+ super(properties, testLocation);
+ }
+
+ @Override
+ public String getTestType() {
+ return "Minio";
+ }
+}
diff --git
a/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/S3ConnectivityTester.java
b/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/S3ConnectivityTester.java
new file mode 100644
index 00000000000..cc869d05e04
--- /dev/null
+++
b/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/S3ConnectivityTester.java
@@ -0,0 +1,32 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.datasource.connectivity;
+
+import org.apache.doris.datasource.property.storage.S3Properties;
+
+public class S3ConnectivityTester extends AbstractS3CompatibleConnectivityTester {
+
+ public S3ConnectivityTester(S3Properties properties, String testLocation) {
+ super(properties, testLocation);
+ }
+
+ @Override
+ public String getTestType() {
+ return "S3";
+ }
+}
diff --git
a/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/StorageConnectivityTester.java
b/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/StorageConnectivityTester.java
new file mode 100644
index 00000000000..7f23c674f71
--- /dev/null
+++
b/fe/fe-core/src/main/java/org/apache/doris/datasource/connectivity/StorageConnectivityTester.java
@@ -0,0 +1,99 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.datasource.connectivity;
+
+import org.apache.doris.catalog.Env;
+import org.apache.doris.common.ClientPool;
+import org.apache.doris.system.Backend;
+import org.apache.doris.thrift.BackendService;
+import org.apache.doris.thrift.TNetworkAddress;
+import org.apache.doris.thrift.TStatusCode;
+import org.apache.doris.thrift.TStorageBackendType;
+import org.apache.doris.thrift.TTestStorageConnectivityRequest;
+import org.apache.doris.thrift.TTestStorageConnectivityResponse;
+
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+
+public interface StorageConnectivityTester {
+
+ default void testFeConnection() throws Exception {
+ // Default: test passes (no-op)
+ }
+
+ default TStorageBackendType getStorageType() {
+ return null;
+ }
+
+ default Map<String, String> getBackendProperties() {
+ return Collections.emptyMap();
+ }
+
+ /**
+ * Returns the type of storage connectivity test for error messages.
+     * Default implementation returns "Storage" for a generic storage connectivity test.
+ */
+ default String getTestType() {
+ return "Storage";
+ }
+
+ default void testBeConnection() throws Exception {
+        List<Long> aliveBeIds = Env.getCurrentSystemInfo().getAllBackendIds(true);
+ if (aliveBeIds.isEmpty()) {
+ // no alive BE, skip the test
+ return;
+ }
+
+ Collections.shuffle(aliveBeIds);
+ testBeConnectionInternal(aliveBeIds.get(0));
+ }
+
+ default void testBeConnectionInternal(long backendId) throws Exception {
+ Backend backend = Env.getCurrentSystemInfo().getBackend(backendId);
+ if (backend == null) {
+ // backend not found, skip the test
+ return;
+ }
+
+        TTestStorageConnectivityRequest request = new TTestStorageConnectivityRequest();
+ request.setType(getStorageType());
+ request.setProperties(getBackendProperties());
+
+        TNetworkAddress address = new TNetworkAddress(backend.getHost(), backend.getBePort());
+ BackendService.Client client = null;
+ boolean ok = false;
+ try {
+ client = ClientPool.backendPool.borrowObject(address);
+            TTestStorageConnectivityResponse response = client.testStorageConnectivity(request);
+
+ if (response.status.getStatusCode() != TStatusCode.OK) {
+                String errMsg = response.status.isSetErrorMsgs() && !response.status.getErrorMsgs().isEmpty()
+                        ? response.status.getErrorMsgs().get(0) : "Unknown error";
+ throw new Exception(errMsg);
+ }
+ ok = true;
+ } finally {
+ if (ok) {
+ ClientPool.backendPool.returnObject(address, client);
+ } else {
+ ClientPool.backendPool.invalidateObject(address, client);
+ }
+ }
+ }
+}
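
Taken together, the two interfaces split the check into a metadata probe plus FE-side and BE-side storage probes. The snippet below is only a plausible orchestration sketch (the runner class is hypothetical and the real driver in this patch may wire things differently); it illustrates how getTestType() feeds the error messages that the regression tests assert on.

```java
package org.apache.doris.datasource.connectivity;

// Hypothetical runner, NOT the patch's actual driver: probe metadata first, then the
// FE-side and BE-side storage checks, wrapping failures with getTestType() so the
// resulting message names the failing layer (e.g. "Iceberg REST" or "S3").
public final class ExampleConnectivityRunner {
    private ExampleConnectivityRunner() {
    }

    public static void runConnectivityTests(MetaConnectivityTester meta,
            StorageConnectivityTester storage) throws Exception {
        try {
            meta.testConnection();
        } catch (Exception e) {
            throw new Exception(meta.getTestType() + " connectivity test failed: " + e.getMessage(), e);
        }
        try {
            storage.testFeConnection();
            storage.testBeConnection();
        } catch (Exception e) {
            throw new Exception(storage.getTestType() + " connectivity test failed: " + e.getMessage(), e);
        }
    }
}
```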
diff --git
a/fe/fe-core/src/main/java/org/apache/doris/datasource/property/metastore/AWSGlueMetaStoreBaseProperties.java
b/fe/fe-core/src/main/java/org/apache/doris/datasource/property/metastore/AWSGlueMetaStoreBaseProperties.java
index 762b2808423..ffbbc8fdb30 100644
---
a/fe/fe-core/src/main/java/org/apache/doris/datasource/property/metastore/AWSGlueMetaStoreBaseProperties.java
+++
b/fe/fe-core/src/main/java/org/apache/doris/datasource/property/metastore/AWSGlueMetaStoreBaseProperties.java
@@ -21,13 +21,30 @@ import org.apache.doris.datasource.property.ConnectorPropertiesUtils;
import org.apache.doris.datasource.property.ConnectorProperty;
import org.apache.doris.datasource.property.ParamRules;
+import com.google.common.base.Strings;
+import lombok.Getter;
import org.apache.commons.lang3.StringUtils;
+import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
+import software.amazon.awssdk.auth.credentials.AwsCredentialsProvider;
+import software.amazon.awssdk.auth.credentials.AwsCredentialsProviderChain;
+import software.amazon.awssdk.auth.credentials.AwsSessionCredentials;
+import software.amazon.awssdk.auth.credentials.ContainerCredentialsProvider;
+import software.amazon.awssdk.auth.credentials.EnvironmentVariableCredentialsProvider;
+import software.amazon.awssdk.auth.credentials.InstanceProfileCredentialsProvider;
+import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
+import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
+import software.amazon.awssdk.auth.credentials.SystemPropertyCredentialsProvider;
+import software.amazon.awssdk.auth.credentials.WebIdentityTokenFileCredentialsProvider;
+import software.amazon.awssdk.regions.Region;
+import software.amazon.awssdk.services.sts.StsClient;
+import software.amazon.awssdk.services.sts.auth.StsAssumeRoleCredentialsProvider;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
public class AWSGlueMetaStoreBaseProperties {
+ @Getter
     @ConnectorProperty(names = {"glue.endpoint", "aws.endpoint", "aws.glue.endpoint"},
description = "The endpoint of the AWS Glue.")
protected String glueEndpoint = "";
@@ -36,6 +53,7 @@ public class AWSGlueMetaStoreBaseProperties {
description = "The region of the AWS Glue. "
+ "If not set, it will use the default region configured
in the AWS SDK or environment variables."
)
+ @Getter
protected String glueRegion = "";
/**
@@ -130,6 +148,53 @@ public class AWSGlueMetaStoreBaseProperties {
}
         throw new IllegalArgumentException("Could not extract region from endpoint: " + glueEndpoint);
}
+
+ /**
+ * Build AWS credentials provider for Glue client.
+ *
+ * @return AwsCredentialsProvider
+ */
+ public AwsCredentialsProvider getAwsCredentialsProvider() {
+ // If access key is configured, use it
+        if (StringUtils.isNotBlank(glueAccessKey) && StringUtils.isNotBlank(glueSecretKey)) {
+ if (Strings.isNullOrEmpty(glueSessionToken)) {
+                return StaticCredentialsProvider.create(AwsBasicCredentials.create(glueAccessKey, glueSecretKey));
+ } else {
+                return StaticCredentialsProvider.create(AwsSessionCredentials.create(glueAccessKey, glueSecretKey,
+ glueSessionToken));
+ }
+ }
+ // If IAM role is configured, use STS AssumeRole
+ if (StringUtils.isNotBlank(glueIAMRole)) {
+ StsClient stsClient = StsClient.builder()
+ .region(Region.of(glueRegion))
+ .credentialsProvider(AwsCredentialsProviderChain.of(
+ WebIdentityTokenFileCredentialsProvider.create(),
+ ContainerCredentialsProvider.create(),
+ InstanceProfileCredentialsProvider.create(),
+ SystemPropertyCredentialsProvider.create(),
+ EnvironmentVariableCredentialsProvider.create(),
+ ProfileCredentialsProvider.create()))
+ .build();
+
+ return StsAssumeRoleCredentialsProvider.builder()
+ .stsClient(stsClient)
+ .refreshRequest(builder -> {
+                    builder.roleArn(glueIAMRole).roleSessionName("aws-glue-java-fe");
+ if (StringUtils.isNotBlank(glueExternalId)) {
+ builder.externalId(glueExternalId);
+ }
+ }).build();
+ }
+
+ return AwsCredentialsProviderChain.of(
+ WebIdentityTokenFileCredentialsProvider.create(),
+ ContainerCredentialsProvider.create(),
+ InstanceProfileCredentialsProvider.create(),
+ SystemPropertyCredentialsProvider.create(),
+ EnvironmentVariableCredentialsProvider.create(),
+ ProfileCredentialsProvider.create());
+ }
}
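
As noted in the summary, the Glue metadata testers validate connectivity with the Glue `getDatabases()` API. The sketch below shows one way the provider built by `getAwsCredentialsProvider()` could be handed to an AWS SDK v2 Glue client for such a probe; the wrapper class and method are illustrative, and only the `getGlueRegion()` and `getAwsCredentialsProvider()` accessors come from the hunk above.

```java
import org.apache.doris.datasource.property.metastore.AWSGlueMetaStoreBaseProperties;

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.glue.GlueClient;
import software.amazon.awssdk.services.glue.model.GetDatabasesRequest;

// Illustrative sketch only: probe Glue with a single cheap listing call.
public final class ExampleGlueProbe {
    private ExampleGlueProbe() {
    }

    public static void probe(AWSGlueMetaStoreBaseProperties glueProps) {
        try (GlueClient glue = GlueClient.builder()
                .region(Region.of(glueProps.getGlueRegion()))
                .credentialsProvider(glueProps.getAwsCredentialsProvider())
                .build()) {
            // maxResults(1) keeps the probe lightweight; any successful response means the
            // region and the credentials resolved by the provider chain are usable.
            glue.getDatabases(GetDatabasesRequest.builder().maxResults(1).build());
        }
    }
}
```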
diff --git
a/fe/fe-core/src/main/java/org/apache/doris/datasource/property/metastore/AbstractIcebergProperties.java
b/fe/fe-core/src/main/java/org/apache/doris/datasource/property/metastore/AbstractIcebergProperties.java
index 359d3786f64..2cc829c8743 100644
---
a/fe/fe-core/src/main/java/org/apache/doris/datasource/property/metastore/AbstractIcebergProperties.java
+++
b/fe/fe-core/src/main/java/org/apache/doris/datasource/property/metastore/AbstractIcebergProperties.java
@@ -35,6 +35,7 @@ import java.util.Map;
*/
public abstract class AbstractIcebergProperties extends MetastoreProperties {
+ @Getter
@ConnectorProperty(
names = {CatalogProperties.WAREHOUSE_LOCATION},
required = false,
diff --git
a/fe/fe-core/src/main/java/org/apache/doris/datasource/property/metastore/HiveGlueMetaStoreProperties.java
b/fe/fe-core/src/main/java/org/apache/doris/datasource/property/metastore/HiveGlueMetaStoreProperties.java
index 940fc997011..5a90a3af7dc 100644
---
a/fe/fe-core/src/main/java/org/apache/doris/datasource/property/metastore/HiveGlueMetaStoreProperties.java
+++
b/fe/fe-core/src/main/java/org/apache/doris/datasource/property/metastore/HiveGlueMetaStoreProperties.java
@@ -21,6 +21,7 @@ import org.apache.doris.datasource.property.ConnectorProperty;
import com.amazonaws.ClientConfiguration;
import com.amazonaws.glue.catalog.util.AWSGlueConfig;
+import lombok.Getter;
import org.apache.hadoop.hive.conf.HiveConf;
import java.util.Map;
@@ -46,6 +47,7 @@ public class HiveGlueMetaStoreProperties extends AbstractHiveProperties {
"aws.catalog.credentials.provider.factory.class";
// ========== Fields ==========
+ @Getter
private AWSGlueMetaStoreBaseProperties baseProperties;
@ConnectorProperty(names = {AWS_GLUE_MAX_RETRY_KEY},
diff --git
a/fe/fe-core/src/main/java/org/apache/doris/datasource/property/metastore/HiveHMSProperties.java
b/fe/fe-core/src/main/java/org/apache/doris/datasource/property/metastore/HiveHMSProperties.java
index 3810b29acb2..636759e33f6 100644
---
a/fe/fe-core/src/main/java/org/apache/doris/datasource/property/metastore/HiveHMSProperties.java
+++
b/fe/fe-core/src/main/java/org/apache/doris/datasource/property/metastore/HiveHMSProperties.java
@@ -21,6 +21,7 @@ import org.apache.doris.common.Config;
 import org.apache.doris.common.security.authentication.HadoopExecutionAuthenticator;
import org.apache.doris.datasource.property.ConnectorProperty;
+import lombok.Getter;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.BooleanUtils;
@@ -29,6 +30,7 @@ import java.util.Map;
@Slf4j
public class HiveHMSProperties extends AbstractHiveProperties {
+ @Getter
private HMSBaseProperties hmsBaseProperties;
@ConnectorProperty(names = {"hive.enable_hms_events_incremental_sync"},
@@ -59,7 +61,6 @@ public class HiveHMSProperties extends AbstractHiveProperties {
         this.executionAuthenticator = new HadoopExecutionAuthenticator(hmsBaseProperties.getHmsAuthenticator());
}
-
private void initRefreshParams() {
         this.hmsEventsIncrementalSyncEnabled = BooleanUtils.toBoolean(hmsEventsIncrementalSyncEnabledInput);
this.hmsEventsBatchSizePerRpc = hmsEventisBatchSizePerRpcInput;
diff --git
a/fe/fe-core/src/main/java/org/apache/doris/datasource/property/metastore/IcebergGlueMetaStoreProperties.java
b/fe/fe-core/src/main/java/org/apache/doris/datasource/property/metastore/IcebergGlueMetaStoreProperties.java
index 9738feb5be8..ba40940005b 100644
---
a/fe/fe-core/src/main/java/org/apache/doris/datasource/property/metastore/IcebergGlueMetaStoreProperties.java
+++
b/fe/fe-core/src/main/java/org/apache/doris/datasource/property/metastore/IcebergGlueMetaStoreProperties.java
@@ -21,6 +21,7 @@ import org.apache.doris.datasource.iceberg.IcebergExternalCatalog;
import org.apache.doris.datasource.property.storage.S3Properties;
import org.apache.doris.datasource.property.storage.StorageProperties;
+import lombok.Getter;
import org.apache.commons.lang3.StringUtils;
import org.apache.iceberg.CatalogProperties;
import org.apache.iceberg.aws.AwsProperties;
@@ -33,6 +34,7 @@ import java.util.Map;
public class IcebergGlueMetaStoreProperties extends AbstractIcebergProperties {
+ @Getter
public AWSGlueMetaStoreBaseProperties glueProperties;
public S3Properties s3Properties;
diff --git
a/fe/fe-core/src/main/java/org/apache/doris/datasource/property/metastore/IcebergHMSMetaStoreProperties.java
b/fe/fe-core/src/main/java/org/apache/doris/datasource/property/metastore/IcebergHMSMetaStoreProperties.java
index 289d682b23c..dc6b4b448ae 100644
---
a/fe/fe-core/src/main/java/org/apache/doris/datasource/property/metastore/IcebergHMSMetaStoreProperties.java
+++
b/fe/fe-core/src/main/java/org/apache/doris/datasource/property/metastore/IcebergHMSMetaStoreProperties.java
@@ -22,6 +22,7 @@ import org.apache.doris.datasource.iceberg.IcebergExternalCatalog;
import org.apache.doris.datasource.property.ConnectorProperty;
import org.apache.doris.datasource.property.storage.StorageProperties;
+import lombok.Getter;
import org.apache.commons.lang3.exception.ExceptionUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.catalog.Catalog;
@@ -47,6 +48,7 @@ public class IcebergHMSMetaStoreProperties extends AbstractIcebergProperties {
                     + "catalog, otherwise it will only list the tables that are registered in the catalog.")
private boolean listAllTables = true;
+ @Getter
private HMSBaseProperties hmsBaseProperties;
@Override
diff --git
a/fe/fe-core/src/main/java/org/apache/doris/datasource/property/metastore/IcebergRestProperties.java
b/fe/fe-core/src/main/java/org/apache/doris/datasource/property/metastore/IcebergRestProperties.java
index 026f9c11533..fda6cd52300 100644
---
a/fe/fe-core/src/main/java/org/apache/doris/datasource/property/metastore/IcebergRestProperties.java
+++
b/fe/fe-core/src/main/java/org/apache/doris/datasource/property/metastore/IcebergRestProperties.java
@@ -24,6 +24,7 @@ import org.apache.doris.datasource.property.storage.AbstractS3CompatibleProperti
import org.apache.doris.datasource.property.storage.StorageProperties;
import com.google.common.collect.Maps;
+import lombok.Getter;
import org.apache.commons.lang3.StringUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.CatalogProperties;
@@ -48,6 +49,7 @@ public class IcebergRestProperties extends AbstractIcebergProperties {
private Map<String, String> icebergRestCatalogProperties;
+ @Getter
@ConnectorProperty(names = {"iceberg.rest.uri", "uri"},
description = "The uri of the iceberg rest catalog service.")
private String icebergRestUri = "";
diff --git
a/fe/fe-core/src/main/java/org/apache/doris/datasource/property/storage/HdfsProperties.java
b/fe/fe-core/src/main/java/org/apache/doris/datasource/property/storage/HdfsProperties.java
index 99c170f8ce2..c8eadbdc3b6 100644
---
a/fe/fe-core/src/main/java/org/apache/doris/datasource/property/storage/HdfsProperties.java
+++
b/fe/fe-core/src/main/java/org/apache/doris/datasource/property/storage/HdfsProperties.java
@@ -23,6 +23,7 @@ import org.apache.doris.datasource.property.ConnectorProperty;
import com.google.common.base.Strings;
import com.google.common.collect.ImmutableSet;
+import lombok.Getter;
import org.apache.commons.collections.MapUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.hadoop.conf.Configuration;
@@ -77,6 +78,14 @@ public class HdfsProperties extends HdfsCompatibleProperties {
private String dfsNameServices;
+    /**
+     * Whether this HDFS storage is explicitly configured by the user.
+     * If false, this instance was auto-created by the framework as a fallback storage
+     * and should skip the connectivity test.
+     */
+    @Getter
+    private final boolean explicitlyConfigured;
+
private static final String DFS_NAME_SERVICES_KEY = "dfs.nameservices";
     private static final Set<String> supportSchema = ImmutableSet.of("hdfs", "viewfs");
@@ -96,7 +105,12 @@ public class HdfsProperties extends HdfsCompatibleProperties {
"hdfs.config.resources");
public HdfsProperties(Map<String, String> origProps) {
+ this(origProps, true);
+ }
+
+    public HdfsProperties(Map<String, String> origProps, boolean explicitlyConfigured) {
super(Type.HDFS, origProps);
+ this.explicitlyConfigured = explicitlyConfigured;
}
public static boolean guessIsMe(Map<String, String> props) {
@@ -199,4 +213,7 @@ public class HdfsProperties extends HdfsCompatibleProperties {
return "HDFS";
}
+ public String getDefaultFS() {
+ return fsDefaultFS;
+ }
}
diff --git
a/fe/fe-core/src/main/java/org/apache/doris/datasource/property/storage/S3Properties.java
b/fe/fe-core/src/main/java/org/apache/doris/datasource/property/storage/S3Properties.java
index 47d0301ad9b..6dde222036d 100644
---
a/fe/fe-core/src/main/java/org/apache/doris/datasource/property/storage/S3Properties.java
+++
b/fe/fe-core/src/main/java/org/apache/doris/datasource/property/storage/S3Properties.java
@@ -158,6 +158,7 @@ public class S3Properties extends AbstractS3CompatibleProperties {
description = "The sts region of S3.")
protected String s3StsRegion = "";
+ @Getter
     @ConnectorProperty(names = {"s3.role_arn", "AWS_ROLE_ARN", "glue.role_arn"},
required = false,
description = "The iam role of S3.")
diff --git
a/fe/fe-core/src/main/java/org/apache/doris/datasource/property/storage/StorageProperties.java
b/fe/fe-core/src/main/java/org/apache/doris/datasource/property/storage/StorageProperties.java
index 1dbd417de64..6e4b55a5e74 100644
---
a/fe/fe-core/src/main/java/org/apache/doris/datasource/property/storage/StorageProperties.java
+++
b/fe/fe-core/src/main/java/org/apache/doris/datasource/property/storage/StorageProperties.java
@@ -130,8 +130,9 @@ public abstract class StorageProperties extends ConnectionProperties {
result.add(p);
}
}
+ // Add default HDFS storage if not explicitly configured
if (result.stream().noneMatch(HdfsProperties.class::isInstance)) {
- result.add(new HdfsProperties(origProps));
+ result.add(new HdfsProperties(origProps, false));
}
for (StorageProperties storageProperties : result) {
diff --git
a/fe/fe-core/src/test/java/org/apache/doris/common/GenericPoolTest.java
b/fe/fe-core/src/test/java/org/apache/doris/common/GenericPoolTest.java
index c5361faa517..e761f07a90e 100644
--- a/fe/fe-core/src/test/java/org/apache/doris/common/GenericPoolTest.java
+++ b/fe/fe-core/src/test/java/org/apache/doris/common/GenericPoolTest.java
@@ -240,6 +240,12 @@ public class GenericPoolTest {
     public TDictionaryStatusList getDictionaryStatus(List<Long> dictionaryIds) throws TException {
return null;
}
+
+ @Override
+    public org.apache.doris.thrift.TTestStorageConnectivityResponse testStorageConnectivity(
+            org.apache.doris.thrift.TTestStorageConnectivityRequest request) throws TException {
+ return null;
+ }
}
@Test
diff --git
a/fe/fe-core/src/test/java/org/apache/doris/utframe/MockedBackendFactory.java
b/fe/fe-core/src/test/java/org/apache/doris/utframe/MockedBackendFactory.java
index eab542842a2..0d4a109921a 100644
---
a/fe/fe-core/src/test/java/org/apache/doris/utframe/MockedBackendFactory.java
+++
b/fe/fe-core/src/test/java/org/apache/doris/utframe/MockedBackendFactory.java
@@ -469,6 +469,15 @@ public class MockedBackendFactory {
         public TDictionaryStatusList getDictionaryStatus(List<Long> dictionaryIds) throws TException {
return null;
}
+
+ @Override
+        public org.apache.doris.thrift.TTestStorageConnectivityResponse testStorageConnectivity(
+                org.apache.doris.thrift.TTestStorageConnectivityRequest request) throws TException {
+            org.apache.doris.thrift.TTestStorageConnectivityResponse response =
+                    new org.apache.doris.thrift.TTestStorageConnectivityResponse();
+ response.setStatus(new TStatus(TStatusCode.OK));
+ return response;
+ }
}
// The default Brpc service.
diff --git a/gensrc/thrift/BackendService.thrift
b/gensrc/thrift/BackendService.thrift
index 0e450c35f58..684f2404fcb 100644
--- a/gensrc/thrift/BackendService.thrift
+++ b/gensrc/thrift/BackendService.thrift
@@ -369,6 +369,15 @@ struct TDictionaryStatusList {
1: optional list<TDictionaryStatus> dictionary_status_list
}
+struct TTestStorageConnectivityRequest {
+ 1: optional Types.TStorageBackendType type;
+ 2: optional map<string, string> properties;
+}
+
+struct TTestStorageConnectivityResponse {
+ 1: optional Status.TStatus status;
+}
+
service BackendService {
     AgentService.TAgentResult submit_tasks(1:list<AgentService.TAgentTaskRequest> tasks);
@@ -419,4 +428,7 @@ service BackendService {
// if empty, return all dictionary status.
TDictionaryStatusList get_dictionary_status(1:list<i64> dictionary_ids);
+
+ // Test storage connectivity (S3, HDFS, etc.)
+    TTestStorageConnectivityResponse test_storage_connectivity(1:TTestStorageConnectivityRequest request);
}
diff --git
a/regression-test/suites/external_table_p0/test_connection/test_iceberg_rest_minio_connectivity.groovy
b/regression-test/suites/external_table_p0/test_connection/test_iceberg_rest_minio_connectivity.groovy
new file mode 100644
index 00000000000..eef7ddd9f60
--- /dev/null
+++
b/regression-test/suites/external_table_p0/test_connection/test_iceberg_rest_minio_connectivity.groovy
@@ -0,0 +1,105 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file to you under
+// the Apache License, Version 2.0 (the "License"); you may not
+// use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+suite("test_iceberg_rest_minio_connectivity", "p0,external,iceberg,external_docker,external_docker_polaris") {
+
+ String enabled = context.config.otherConfigs.get("enableIcebergTest")
+ if (enabled == null || !enabled.equalsIgnoreCase("true")) {
+ return
+ }
+
+ String externalEnvIp = context.config.otherConfigs.get("externalEnvIp")
+    String polaris_port = context.config.otherConfigs.get("polaris_rest_uri_port")
+ String minio_port = context.config.otherConfigs.get("polaris_minio_port")
+
+ // ========== Test Meta Connectivity Failure ==========
+ // Test with invalid REST URI
+ def test_meta_fail_catalog = "test_meta_connectivity_fail"
+ sql """DROP CATALOG IF EXISTS ${test_meta_fail_catalog}"""
+
+ test {
+ sql """
+ CREATE CATALOG ${test_meta_fail_catalog} PROPERTIES (
+ 'type'='iceberg',
+ 'iceberg.catalog.type'='rest',
+                'iceberg.rest.uri' = 'http://${externalEnvIp}:9999/invalid/path',
+ 'warehouse' = 'test_warehouse',
+ 's3.access_key' = 'admin',
+ 's3.secret_key' = 'password',
+ 's3.endpoint' = 'http://${externalEnvIp}:${minio_port}',
+ 's3.region' = 'us-east-1',
+ 'test_connection' = 'true'
+ );
+ """
+ exception "connectivity test failed"
+ exception "Iceberg REST"
+ }
+
+ // ========== Test Storage Connectivity Failure ==========
+ // Test with invalid MinIO credentials
+ def test_storage_fail_catalog = "test_storage_connectivity_fail"
+ sql """DROP CATALOG IF EXISTS ${test_storage_fail_catalog}"""
+
+ test {
+ sql """
+ CREATE CATALOG ${test_storage_fail_catalog} PROPERTIES (
+ 'type'='iceberg',
+ 'iceberg.catalog.type'='rest',
+                'iceberg.rest.uri' = 'http://${externalEnvIp}:${polaris_port}/api/catalog',
+                'iceberg.rest.security.type' = 'oauth2',
+                'iceberg.rest.oauth2.credential' = 'root:secret123',
+                'iceberg.rest.oauth2.server-uri' = 'http://${externalEnvIp}:${polaris_port}/api/catalog/v1/oauth/tokens',
+ 'iceberg.rest.oauth2.scope' = 'PRINCIPAL_ROLE:ALL',
+ 'warehouse' = 'doris_test',
+ 's3.access_key' = 'wrong_key',
+ 's3.secret_key' = 'wrong_secret',
+ 's3.endpoint' = 'http://${externalEnvIp}:${minio_port}',
+ 's3.region' = 'us-east-1',
+ 'test_connection' = 'true'
+ );
+ """
+ exception "connectivity test failed"
+ }
+
+ // ========== Test Successful Connectivity ==========
+ // Test with valid credentials
+ def test_success_catalog = "test_connectivity_success"
+ sql """DROP CATALOG IF EXISTS ${test_success_catalog}"""
+
+ sql """
+ CREATE CATALOG ${test_success_catalog} PROPERTIES (
+ 'type'='iceberg',
+ 'iceberg.catalog.type'='rest',
+            'iceberg.rest.uri' = 'http://${externalEnvIp}:${polaris_port}/api/catalog',
+            'iceberg.rest.security.type' = 'oauth2',
+            'iceberg.rest.oauth2.credential' = 'root:secret123',
+            'iceberg.rest.oauth2.server-uri' = 'http://${externalEnvIp}:${polaris_port}/api/catalog/v1/oauth/tokens',
+ 'iceberg.rest.oauth2.scope' = 'PRINCIPAL_ROLE:ALL',
+ 'warehouse' = 'doris_test',
+ 's3.access_key' = 'admin',
+ 's3.secret_key' = 'password',
+ 's3.endpoint' = 'http://${externalEnvIp}:${minio_port}',
+ 's3.region' = 'us-east-1',
+ 'test_connection' = 'true'
+ );
+ """
+
+ logger.info("Connectivity test passed successfully")
+
+ // Cleanup
+ sql """DROP CATALOG IF EXISTS ${test_success_catalog}"""
+}
diff --git
a/regression-test/suites/external_table_p2/test_connection/test_connectivity.groovy
b/regression-test/suites/external_table_p2/test_connection/test_connectivity.groovy
new file mode 100644
index 00000000000..2a9c6f8467e
--- /dev/null
+++
b/regression-test/suites/external_table_p2/test_connection/test_connectivity.groovy
@@ -0,0 +1,207 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+suite("test_connectivity", "p2,external,hive,iceberg,external_docker,external_docker_hive,new_catalog_property") {
+
+ String externalEnvIp = context.config.otherConfigs.get("externalEnvIp")
+
+ // ========== S3 Configuration ==========
+ String s3_ak = context.config.otherConfigs.get("AWSAK")
+ String s3_sk = context.config.otherConfigs.get("AWSSK")
+ String s3_endpoint = context.config.otherConfigs.get("AWSEndpoint")
+ String s3_region = context.config.otherConfigs.get("AWSRegion")
+ String s3_bucket = "selectdb-qa-datalake-test-hk"
+
+ String s3_storage_properties = """
+ 's3.access_key' = '${s3_ak}',
+ 's3.secret_key' = '${s3_sk}',
+ 's3.endpoint' = 'http://${s3_endpoint}',
+ 's3.region' = '${s3_region}'
+ """
+
+ // ========== HMS Configuration ==========
+ String hms_port = context.config.otherConfigs.get("hms_port") ?: "9083"
+ String hms_uri = "thrift://${externalEnvIp}:${hms_port}"
+
+ // ========== Iceberg REST Configuration ==========
+    String rest_port = context.config.otherConfigs.get("iceberg_rest_uri_port_s3") ?: "8181"
+ String rest_uri = "http://${externalEnvIp}:${rest_port}"
+
+ // ========== Test 1: Hive HMS Connectivity ==========
+    String enabledHive = context.config.otherConfigs.get("enableExternalHiveTest")
+ if (enabledHive != null && enabledHive.equalsIgnoreCase("true")) {
+ logger.info("========== Testing Hive HMS Connectivity ==========")
+
+ // Test 1.1: HMS connectivity failure (invalid HMS URI)
+ def hive_meta_fail_catalog = "test_hive_meta_fail"
+ sql """DROP CATALOG IF EXISTS ${hive_meta_fail_catalog}"""
+
+ test {
+ sql """
+ CREATE CATALOG ${hive_meta_fail_catalog} PROPERTIES (
+ 'type' = 'hms',
+ 'hive.metastore.uris' = 'thrift://${externalEnvIp}:9999',
+ ${s3_storage_properties},
+ 'test_connection' = 'true'
+ );
+ """
+ exception "connectivity test failed"
+ exception "HMS"
+ }
+
+        // Test 1.2: Successful Hive HMS connectivity (S3 is not tested for Hive HMS without warehouse)
+ def hive_success_catalog = "test_hive_success"
+ sql """DROP CATALOG IF EXISTS ${hive_success_catalog}"""
+
+ sql """
+ CREATE CATALOG ${hive_success_catalog} PROPERTIES (
+ 'type' = 'hms',
+ 'hive.metastore.uris' = '${hms_uri}',
+ ${s3_storage_properties},
+ 'test_connection' = 'true'
+ );
+ """
+ logger.info("Hive HMS connectivity test passed successfully")
+ sql """DROP CATALOG IF EXISTS ${hive_success_catalog}"""
+ }
+
+ // ========== Test 2 & 3: Iceberg Connectivity ==========
+    String enabledIceberg = context.config.otherConfigs.get("enableExternalIcebergTest")
+ if (enabledIceberg != null && enabledIceberg.equalsIgnoreCase("true")) {
+ // ========== Test 2: Iceberg HMS + S3 Connectivity ==========
+        logger.info("========== Testing Iceberg HMS + S3 Connectivity ==========")
+
+ // Test 2.1: HMS connectivity failure (invalid HMS URI)
+ def iceberg_hms_meta_fail_catalog = "test_iceberg_hms_meta_fail"
+ sql """DROP CATALOG IF EXISTS ${iceberg_hms_meta_fail_catalog}"""
+
+ test {
+ sql """
+ CREATE CATALOG ${iceberg_hms_meta_fail_catalog} PROPERTIES (
+ 'type' = 'iceberg',
+ 'iceberg.catalog.type' = 'hms',
+ 'hive.metastore.uris' = 'thrift://${externalEnvIp}:9999',
+ 'warehouse' = 's3a://${s3_bucket}/iceberg_hms_warehouse',
+ ${s3_storage_properties},
+ 'test_connection' = 'true'
+ );
+ """
+ exception "connectivity test failed"
+ exception "HMS"
+ }
+
+ // Test 2.2: S3 storage connectivity failure (invalid S3 credentials)
+ def iceberg_hms_storage_fail_catalog = "test_iceberg_hms_storage_fail"
+ sql """DROP CATALOG IF EXISTS ${iceberg_hms_storage_fail_catalog}"""
+
+ test {
+ sql """
+ CREATE CATALOG ${iceberg_hms_storage_fail_catalog} PROPERTIES (
+ 'type' = 'iceberg',
+ 'iceberg.catalog.type' = 'hms',
+ 'hive.metastore.uris' = '${hms_uri}',
+ 'warehouse' = 's3a://${s3_bucket}/iceberg_hms_warehouse',
+ 's3.access_key' = 'wrong_key',
+ 's3.secret_key' = 'wrong_secret',
+ 's3.endpoint' = 'http://${s3_endpoint}',
+ 's3.region' = '${s3_region}',
+ 'test_connection' = 'true'
+ );
+ """
+ exception "connectivity test failed"
+ exception "S3"
+ }
+
+ // Test 2.3: Successful Iceberg HMS + S3 connectivity
+ def iceberg_hms_success_catalog = "test_iceberg_hms_success"
+ sql """DROP CATALOG IF EXISTS ${iceberg_hms_success_catalog}"""
+
+ sql """
+ CREATE CATALOG ${iceberg_hms_success_catalog} PROPERTIES (
+ 'type' = 'iceberg',
+ 'iceberg.catalog.type' = 'hms',
+ 'hive.metastore.uris' = '${hms_uri}',
+ 'warehouse' = 's3a://${s3_bucket}/iceberg_hms_warehouse',
+ ${s3_storage_properties},
+ 'test_connection' = 'true'
+ );
+ """
+ logger.info("Iceberg HMS + S3 connectivity test passed successfully")
+ sql """DROP CATALOG IF EXISTS ${iceberg_hms_success_catalog}"""
+
+ // ========== Test 3: Iceberg REST + S3 Connectivity ==========
+        logger.info("========== Testing Iceberg REST + S3 Connectivity ==========")
+
+ // Test 3.1: REST connectivity failure (invalid REST URI)
+ def iceberg_rest_meta_fail_catalog = "test_iceberg_rest_meta_fail"
+ sql """DROP CATALOG IF EXISTS ${iceberg_rest_meta_fail_catalog}"""
+
+ test {
+ sql """
+ CREATE CATALOG ${iceberg_rest_meta_fail_catalog} PROPERTIES (
+ 'type' = 'iceberg',
+ 'iceberg.catalog.type' = 'rest',
+                    'iceberg.rest.uri' = 'http://${externalEnvIp}:9999/invalid',
+ 'warehouse' = 's3a://${s3_bucket}/iceberg_rest_warehouse',
+ ${s3_storage_properties},
+ 'test_connection' = 'true'
+ );
+ """
+ exception "connectivity test failed"
+ exception "Iceberg REST"
+ }
+
+ // Test 3.2: S3 storage connectivity failure (invalid S3 credentials)
+        def iceberg_rest_storage_fail_catalog = "test_iceberg_rest_storage_fail"
+ sql """DROP CATALOG IF EXISTS ${iceberg_rest_storage_fail_catalog}"""
+
+ test {
+ sql """
+            CREATE CATALOG ${iceberg_rest_storage_fail_catalog} PROPERTIES (
+ 'type' = 'iceberg',
+ 'iceberg.catalog.type' = 'rest',
+ 'iceberg.rest.uri' = '${rest_uri}',
+ 'warehouse' = 's3a://${s3_bucket}/iceberg_rest_warehouse',
+ 's3.access_key' = 'wrong_key',
+ 's3.secret_key' = 'wrong_secret',
+ 's3.endpoint' = 'http://${s3_endpoint}',
+ 's3.region' = '${s3_region}',
+ 'test_connection' = 'true'
+ );
+ """
+ exception "connectivity test failed"
+ exception "S3"
+ }
+
+ // Test 3.3: Successful Iceberg REST + S3 connectivity
+ def iceberg_rest_success_catalog = "test_iceberg_rest_success"
+ sql """DROP CATALOG IF EXISTS ${iceberg_rest_success_catalog}"""
+
+ sql """
+ CREATE CATALOG ${iceberg_rest_success_catalog} PROPERTIES (
+ 'type' = 'iceberg',
+ 'iceberg.catalog.type' = 'rest',
+ 'iceberg.rest.uri' = '${rest_uri}',
+ 'warehouse' = 's3a://${s3_bucket}/iceberg_rest_warehouse',
+ ${s3_storage_properties},
+ 'test_connection' = 'true'
+ );
+ """
+ logger.info("Iceberg REST + S3 connectivity test passed successfully")
+ sql """DROP CATALOG IF EXISTS ${iceberg_rest_success_catalog}"""
+ }
+}