RussellSpitzer commented on issue #12062:
URL: https://github.com/apache/iceberg/issues/12062#issuecomment-2609962841

   The difficulty with this in Spark is that Spark will always call "load"
   before drop.
   
   So essentially DROP in Spark looks like:
   
   1. Load Catalog
   2. Load Table
   3. Call Table.drop
   
   This makes it difficult to implement a "force drop", because the call to
   Table.drop happens after we hit the error in loadTable. We've had several
   threads and issues about this previously; maybe the best thing to do is to
   add a Java utility method which just calls the underlying catalog's drop
   function for the Spark table instance without going through the load pathway?
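   
   A rough sketch of what such a utility might look like, assuming you can get
   a handle on the underlying Iceberg `Catalog` instance (the class and method
   names here are illustrative, not an existing API):
   
   ```java
   import org.apache.iceberg.catalog.Catalog;
   import org.apache.iceberg.catalog.TableIdentifier;
   
   public class ForceDropUtil {
     // Drop the table entry through the Iceberg Catalog API directly, so a
     // broken metadata file can't block the drop the way loadTable does.
     // Note: purge=true may still need to read metadata to locate data files,
     // so purge=false is the safer option when the metadata is corrupt.
     public static boolean forceDrop(Catalog catalog, String db, String table, boolean purge) {
       return catalog.dropTable(TableIdentifier.of(db, table), purge);
     }
   }
   ```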
   
   For Hive catalogs in Spark we can already work around the Spark interface by
   calling drop directly on the Hive catalog. You can do this with:
   
   ```
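   // args: database, table, ignoreIfNotExists, purge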
   spark.sharedState.externalCatalog.dropTable("db", "table", false, false)
   ```
   
   This uses Spark's Hive client to call drop directly on the catalog without
   going through the Spark machinery.
   
   If you have a better solution for fixing SparkCatalog.java, please let me know.
   
   

