liziyan-lzy commented on code in PR #12254:
URL: https://github.com/apache/iceberg/pull/12254#discussion_r2036573158


##########
spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/actions/DeleteOrphanFilesSparkAction.java:
##########
@@ -335,7 +344,39 @@ private Dataset<String> listedFileDS() {
     return spark().createDataset(completeMatchingFileRDD.rdd(), Encoders.STRING());
   }
 
-  private static void listDirRecursively(
+  private static void listDirRecursivelyWithFileIO(
+      SupportsPrefixOperations io,
+      String dir,
+      Predicate<org.apache.iceberg.io.FileInfo> predicate,
+      PathFilter pathFilter,
+      List<String> matchingFiles) {
+    String listPath = dir;
+    if (!dir.endsWith("/")) {
+      listPath = dir + "/";
+    }
+    Iterable<org.apache.iceberg.io.FileInfo> files = io.listPrefix(listPath);

Review Comment:
   I agree with you, this is an important point. The new FileIO-based listing loses the ability to distribute the listing across executors, which can cause a performance regression for tables with very large numbers of files. I've been thinking about this issue as well.
   
   Perhaps this implementation should not be the default, or we could introduce a strategy-style option that makes the new listing method opt-in. That would also offer a solution for the issue described in https://github.com/apache/iceberg/issues/11541. What do you think? A rough sketch of what such an option could look like is below.
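   
   To make the idea concrete, here is a minimal sketch of an opt-in switch between the two listing paths. The class name, the option, and the method signatures are all hypothetical and simplified; this only illustrates dispatching between a FileIO prefix listing and the existing distributed Hadoop-based listing, not the actual action API in DeleteOrphanFilesSparkAction.
   
   ```java
   import java.util.List;
   import java.util.function.Predicate;
   import org.apache.hadoop.fs.Path;
   import org.apache.hadoop.fs.PathFilter;
   import org.apache.iceberg.io.FileInfo;
   import org.apache.iceberg.io.SupportsPrefixOperations;
   
   // Illustration only: class, option, and signatures are hypothetical.
   class OrphanFileListingSketch {
   
     // Hypothetical opt-in flag, e.g. an action option like "use-prefix-listing";
     // defaults to false so the existing Hadoop-based listing stays the default.
     private final boolean usePrefixListing;
     private final SupportsPrefixOperations io;
   
     OrphanFileListingSketch(boolean usePrefixListing, SupportsPrefixOperations io) {
       this.usePrefixListing = usePrefixListing;
       this.io = io;
     }
   
     void listDir(
         String dir,
         Predicate<FileInfo> predicate,
         PathFilter pathFilter,
         List<String> matchingFiles) {
       if (usePrefixListing && io != null) {
         // Flat prefix listing through FileIO: simpler, but runs on the driver only.
         listDirWithFileIO(dir, predicate, pathFilter, matchingFiles);
       } else {
         // Existing behavior: Hadoop FileSystem listing that can hand deep
         // subdirectories off to executors for distributed listing.
         listDirRecursivelyWithHadoop(dir, predicate, pathFilter, matchingFiles);
       }
     }
   
     private void listDirWithFileIO(
         String dir,
         Predicate<FileInfo> predicate,
         PathFilter pathFilter,
         List<String> matchingFiles) {
       String listPath = dir.endsWith("/") ? dir : dir + "/";
       for (FileInfo file : io.listPrefix(listPath)) {
         // Assumption: apply the same predicate and path filter as the Hadoop path.
         if (predicate.test(file) && pathFilter.accept(new Path(file.location()))) {
           matchingFiles.add(file.location());
         }
       }
     }
   
     private void listDirRecursivelyWithHadoop(
         String dir,
         Predicate<FileInfo> predicate,
         PathFilter pathFilter,
         List<String> matchingFiles) {
       // Placeholder for the current listDirRecursively logic, which collects
       // remaining subdirectories and lists them in a distributed Spark job.
     }
   }
   ```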



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

