maomaodev commented on issue #12790:
URL: https://github.com/apache/iceberg/issues/12790#issuecomment-2809367143
> I believe our issue is that we were attempting not to change Iceberg behavior, which was originally to default to not purging. We have to use icebergCatalog dropTable since our logic for dropping a table is different from the OSS code path. For example, the Spark Session Catalog doesn't actually know where all the Iceberg data files are.
Shouldn't we make a clear distinction between the two cases, just as `alterTable` and `purgeTable` already do? For Iceberg tables, use `icebergCatalog.dropTable(ident)` to keep Iceberg's behavior consistent; for non-Iceberg tables such as Hive tables, use `getSessionCatalog().dropTable(ident)` to keep Spark's behavior consistent. For example:
```java
if (icebergCatalog.tableExists(ident)) {
  return icebergCatalog.dropTable(ident);
} else {
  return getSessionCatalog().dropTable(ident);
}
```
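As a self-contained sketch of the dispatch pattern above (note: `SimpleCatalog` and `SetBackedCatalog` are simplified stand-ins invented for illustration, not the real Spark or Iceberg catalog APIs), the idea is to route the drop to the Iceberg catalog only when it owns the table, and otherwise fall back to the session catalog:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical minimal catalog interface, standing in for the real
// Spark TableCatalog / Iceberg catalog APIs.
interface SimpleCatalog {
  boolean tableExists(String ident);
  boolean dropTable(String ident);
}

// Toy in-memory catalog used only to demonstrate the branching logic.
class SetBackedCatalog implements SimpleCatalog {
  private final Set<String> tables = new HashSet<>();

  SetBackedCatalog(String... idents) {
    for (String i : idents) {
      tables.add(i);
    }
  }

  public boolean tableExists(String ident) {
    return tables.contains(ident);
  }

  public boolean dropTable(String ident) {
    return tables.remove(ident);
  }
}

public class DropDispatch {
  // If the Iceberg catalog owns the table, drop it there (Iceberg
  // semantics); otherwise delegate to the session catalog (Spark/Hive
  // semantics).
  static boolean dropTable(SimpleCatalog icebergCatalog,
                           SimpleCatalog sessionCatalog,
                           String ident) {
    if (icebergCatalog.tableExists(ident)) {
      return icebergCatalog.dropTable(ident);
    }
    return sessionCatalog.dropTable(ident);
  }

  public static void main(String[] args) {
    SimpleCatalog iceberg = new SetBackedCatalog("db.ice_tbl");
    SimpleCatalog session = new SetBackedCatalog("db.hive_tbl");

    System.out.println(dropTable(iceberg, session, "db.ice_tbl"));  // true: dropped via Iceberg
    System.out.println(dropTable(iceberg, session, "db.hive_tbl")); // true: dropped via session catalog
    System.out.println(dropTable(iceberg, session, "db.missing"));  // false: neither catalog has it
  }
}
```

This keeps the existence check and the drop in the same catalog, so each table is removed with the semantics of the catalog that actually manages it.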
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]