RussellSpitzer commented on code in PR #9187:
URL: https://github.com/apache/iceberg/pull/9187#discussion_r1417524087


##########
spark/v3.4/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestAddFilesProcedure.java:
##########
@@ -77,7 +77,7 @@ public void setupTempDirs() {
 
   @After
   public void dropTables() {
-    sql("DROP TABLE IF EXISTS %s", sourceTableName);
+    sql("DROP TABLE IF EXISTS %s PURGE", sourceTableName);

Review Comment:
   Yes, so the issue here is that non-Iceberg tables are being created in order to test converting and adding files from non-Iceberg tables into Iceberg tables. The Spark session catalog is probably now (correctly) treating them as external tables (not Hive-managed tables) and therefore only dropping the metadata instead of clearing out the whole table.
   
   This is purely a Spark-side change and has no impact on any Iceberg code that I know of.
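   To illustrate the distinction the comment relies on, here is a toy sketch (plain Java, not Spark or Iceberg code, with a hypothetical `catalog` map and `drop` helper): for an external table, a plain DROP removes only the catalog entry, while DROP ... PURGE also deletes the underlying data files.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

// Toy model of external-table drop semantics (NOT Spark's implementation).
public class DropSemanticsDemo {
  // Stand-in for a catalog: table name -> data location.
  static Map<String, Path> catalog = new HashMap<>();

  // Dropping always removes the catalog entry (metadata); only a
  // PURGE drop also deletes the files at the table's location.
  static void drop(String table, boolean purge) throws IOException {
    Path location = catalog.remove(table);
    if (purge && location != null) {
      try (var files = Files.walk(location)) {
        files.sorted((a, b) -> b.compareTo(a)) // delete children before parents
            .forEach(p -> p.toFile().delete());
      }
    }
  }

  public static void main(String[] args) throws IOException {
    Path warehouse = Files.createTempDirectory("warehouse");
    Path data = Files.createDirectories(warehouse.resolve("src_tbl"));
    Files.writeString(data.resolve("part-0.parquet"), "rows");
    catalog.put("src_tbl", data);

    drop("src_tbl", false); // plain DROP: metadata gone, data files survive
    System.out.println("after DROP, data exists: " + Files.exists(data));

    catalog.put("src_tbl", data); // re-register the leftover files
    drop("src_tbl", true); // DROP ... PURGE: data files removed as well
    System.out.println("after PURGE, data exists: " + Files.exists(data));
  }
}
```

   This is the behavior the test cleanup depends on: without `PURGE`, each run would leave stale data files behind that the next test could pick up.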



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

