danielcweeks commented on code in PR #14501:
URL: https://github.com/apache/iceberg/pull/14501#discussion_r2538789721


##########
core/src/main/java/org/apache/iceberg/hadoop/HadoopFileIO.java:
##########
@@ -211,6 +213,27 @@ private ExecutorService executorService() {
     return executorService;
   }
 
+  private void deletePath(FileSystem fs, Path toDelete, boolean recursive) throws IOException {
+    if (!shouldUseTrashIfConfigured()) {
+      fs.delete(toDelete, recursive);
+      return;
+    }
+    Trash trash = new Trash(fs, getConf());
+    if (!trash.isEnabled()) {

Review Comment:
   I'm not particularly familiar with the way the Hadoop Trash works, but it's 
quite odd to me.  The Trash isn't natively part of the FileSystem (like 
versioned objects are native to cloud storage).  This leaves an awkward 
situation where, if we respect the `core-site.xml` config, then we turn it on 
everywhere like you said @anuragmantri (assuming the deleter has access to the 
trash?).
   
   I could see this going wrong in both directions: if you configure the 
`core-site.xml` trash and didn't realize there was a second property to set, you 
would lose data. You also potentially have the issue where deleted data goes 
to the trash configured for whoever performed the delete (so two separate 
deletes of the same table could end up in completely different locations).
   
   I'm not sure what's right here and open to suggestions.
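   To make the data-loss case concrete, here's a minimal sketch (hypothetical code, not proposing this for the PR) of the decision table implied above, where the table-level "use trash" property and Hadoop's `fs.trash.interval` from `core-site.xml` must *both* be enabled before a delete becomes recoverable. The names `tableTrashEnabled` and `trashIntervalSet` are illustrative, not real Iceberg or Hadoop identifiers:
   
   ```java
   enum DeleteMode { MOVE_TO_TRASH, PERMANENT_DELETE }
   
   final class TrashDecision {
     /**
      * tableTrashEnabled - hypothetical table/catalog-level "use trash" flag
      * trashIntervalSet  - whether fs.trash.interval > 0 in core-site.xml
      *                     (what Trash.isEnabled() effectively checks)
      */
     static DeleteMode resolve(boolean tableTrashEnabled, boolean trashIntervalSet) {
       if (tableTrashEnabled && trashIntervalSet) {
         return DeleteMode.MOVE_TO_TRASH;
       }
       // Either property missing means a plain fs.delete(): this is the
       // "configured one property but didn't realize there was a second
       // one" data-loss case.
       return DeleteMode.PERMANENT_DELETE;
     }
   }
   ```
   
   And because `new Trash(fs, getConf())` resolves against the *caller's* configuration, two users running `resolve(...)` with different `core-site.xml` files can land in different branches for the same table, which is the "two separate deletes end up in different locations" problem.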



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

