steveloughran commented on PR #15501:
URL: https://github.com/apache/iceberg/pull/15501#issuecomment-4033451565

   Noticed this; I will test some fault injection through #15436 
   
   SparkCleanupUtil.deleteFiles() still hands off to its own concurrent-delete 
implementation when the FileIO implementation doesn't support bulk delete. Why 
not always pass the work down and rely on the concurrency in CatalogUtil?
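
   To make the dispatch I'm describing concrete, here's a minimal sketch. The 
`FileIO`, `SupportsBulkOperations`, and `DeleteDispatch` types below are 
illustrative stand-ins, not Iceberg's actual classes; the point is just the 
shape of "use bulk delete if the IO supports it, otherwise fan out per-file 
deletes on a pool":

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical stand-ins for Iceberg's FileIO / SupportsBulkOperations,
// defined locally so the sketch is self-contained.
interface FileIO {
  void deleteFile(String path);
}

interface SupportsBulkOperations extends FileIO {
  void deleteFiles(Iterable<String> paths);
}

final class DeleteDispatch {
  private DeleteDispatch() {}

  // Prefer the IO's bulk delete when available; otherwise fall back to
  // per-file deletes on a thread pool (roughly what the fallback path does).
  static void deleteAll(FileIO io, List<String> paths) {
    if (io instanceof SupportsBulkOperations) {
      ((SupportsBulkOperations) io).deleteFiles(paths);
      return;
    }
    ExecutorService pool = Executors.newFixedThreadPool(4);
    try {
      for (String path : paths) {
        pool.submit(() -> io.deleteFile(path));
      }
    } finally {
      pool.shutdown();
      try {
        pool.awaitTermination(1, TimeUnit.MINUTES);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    }
  }
}
```

   If the fallback branch lived in one place (CatalogUtil), callers like 
SparkCleanupUtil wouldn't need their own copy of it.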
   
   Is that just because of the backoff/retry logic in SparkCleanupUtil? 
   
   FWIW, I don't see why SparkCleanupUtil's delete() needs recovery logic, 
other than to ensure transient network problems don't stop delete() from 
making a best-effort attempt at cleanup. But that kind of recovery can be, 
and generally is, hidden in the client code underneath.
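
   By "hidden in the client code underneath" I mean the kind of 
retry-with-exponential-backoff that storage client SDKs typically apply on 
transient failures. A minimal sketch (names and parameters are illustrative, 
not any SDK's real API):

```java
import java.util.function.Supplier;

// Illustrative retry helper: retry a transiently-failing operation with
// exponential backoff, rethrowing the last failure once attempts run out.
final class Retries {
  private Retries() {}

  static <T> T withBackoff(Supplier<T> op, int maxAttempts, long baseDelayMs) {
    RuntimeException last = null;
    for (int attempt = 0; attempt < maxAttempts; attempt++) {
      try {
        return op.get();
      } catch (RuntimeException e) {
        last = e;
        try {
          Thread.sleep(baseDelayMs << attempt); // exponential backoff
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
          throw e;
        }
      }
    }
    throw last;
  }
}
```

   If the SDK underneath FileIO already does this, duplicating it in 
SparkCleanupUtil just retries the retries.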
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

