iting0321 commented on code in PR #3820:
URL: https://github.com/apache/polaris/pull/3820#discussion_r2876958625


##########
plugins/spark/v3.5/spark/src/main/java/org/apache/polaris/spark/SparkCatalog.java:
##########
@@ -186,30 +211,56 @@ public Table alterTable(Identifier ident, TableChange... changes) throws NoSuchT
     try {
       return this.icebergsSparkCatalog.alterTable(ident, changes);
     } catch (NoSuchTableException e) {
-      Table table = this.polarisSparkCatalog.loadTable(ident);
-      String provider = table.properties().get(PolarisCatalogUtils.TABLE_PROVIDER_KEY);
-      if (PolarisCatalogUtils.useDelta(provider)) {
-        // For delta table, most of the alter operations is a delta log manipulation,
-        // we load the delta catalog to help handling the alter table operation.
-        // NOTE: This currently doesn't work for changing file location and file format
-        //     using ALTER TABLE ...SET LOCATION, and ALTER TABLE ... SET FILEFORMAT.
-        TableCatalog deltaCatalog = deltaHelper.loadDeltaCatalog(this.polarisSparkCatalog);
-        return deltaCatalog.alterTable(ident, changes);
-      } else if (PolarisCatalogUtils.useHudi(provider)) {
-        TableCatalog hudiCatalog = hudiHelper.loadHudiCatalog(this.polarisSparkCatalog);
-        return hudiCatalog.alterTable(ident, changes);
-      } else if (PolarisCatalogUtils.usePaimon(provider)) {
-        TableCatalog paimonCatalog = paimonHelper.loadPaimonCatalog(this.polarisSparkCatalog);
-        return paimonCatalog.alterTable(ident, changes);
-      } else {
-        return this.polarisSparkCatalog.alterTable(ident);
+      // Try to load from Polaris first
+      try {
+        Table table = this.polarisSparkCatalog.loadTable(ident);
+        String provider = table.properties().get(PolarisCatalogUtils.TABLE_PROVIDER_KEY);
+        if (PolarisCatalogUtils.useDelta(provider)) {
+          // For delta table, most of the alter operations is a delta log manipulation,
+          // we load the delta catalog to help handling the alter table operation.
+          // NOTE: This currently doesn't work for changing file location and file format
+          //     using ALTER TABLE ...SET LOCATION, and ALTER TABLE ... SET FILEFORMAT.
+          TableCatalog deltaCatalog = deltaHelper.loadDeltaCatalog(this.polarisSparkCatalog);
+          return deltaCatalog.alterTable(ident, changes);
+        } else if (PolarisCatalogUtils.useHudi(provider)) {
+          TableCatalog hudiCatalog = hudiHelper.loadHudiCatalog(this.polarisSparkCatalog);
+          return hudiCatalog.alterTable(ident, changes);
+        } else if (PolarisCatalogUtils.usePaimon(provider)) {
+          TableCatalog paimonCatalog = paimonHelper.loadPaimonCatalog(this.catalogName);
+          return paimonCatalog.alterTable(ident, changes);
+        } else {
+          return this.polarisSparkCatalog.alterTable(ident);
+        }
+      } catch (NoSuchTableException polarisException) {
+        // Table not found in Polaris, try Paimon directly
+        // Paimon tables created via Paimon's SparkCatalog may not be registered in Polaris
+        try {
+          TableCatalog paimonCatalog = paimonHelper.loadPaimonCatalog(this.catalogName);
+          return paimonCatalog.alterTable(ident, changes);
+        } catch (Exception paimonException) {
+          // Paimon catalog not configured or table not found there either
+          throw polarisException;
+        }
+      }
     }
   }
 
   @Override
   public boolean dropTable(Identifier ident) {
-    return this.icebergsSparkCatalog.dropTable(ident) || this.polarisSparkCatalog.dropTable(ident);
+    boolean dropped =
+        this.icebergsSparkCatalog.dropTable(ident) || this.polarisSparkCatalog.dropTable(ident);
+
+    // Also try to drop from Paimon catalog if it exists
Review Comment:
   1. I’ve refactored `dropTable` to resolve the Paimon catalog via the provider first, consistent with how load works.
   2. Added a validation to ensure the table belongs to the Paimon catalog before deleting, to avoid accidentally dropping unrelated tables with the same identifier.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]