ismailsimsek commented on code in PR #11906:
URL: https://github.com/apache/iceberg/pull/11906#discussion_r1903139413
##########
spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/actions/TestRemoveOrphanFilesAction.java:
##########
@@ -610,9 +613,12 @@ public void testHiddenPathsStartingWithPartitionNamesAreIgnored() throws IOExcep
     waitUntilAfter(System.currentTimeMillis());

     SparkActions actions = SparkActions.get();
+    DeleteOrphanFilesSparkAction action =
+        actions.deleteOrphanFiles(table).olderThan(System.currentTimeMillis());
+    // test list methods by directly instantiating the action
+    assertThatDatasetsAreEqualIgnoringOrder(action.listWithPrefix(), action.listWithoutPrefix());

Review Comment:
   This test is failing, and `PartitionAwareHiddenPathFilter` probably needs adjustment: the new method returns the file `data/_c2_trunc/file.txt`, while the old one does not.

##########
spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/actions/DeleteOrphanFilesSparkAction.java:
##########
@@ -292,14 +293,37 @@ private Dataset<FileURI> validFileIdentDS() {

   private Dataset<FileURI> actualFileIdentDS() {
     StringToFileURI toFileURI = new StringToFileURI(equalSchemes, equalAuthorities);
+    Dataset<String> dataList;
     if (compareToFileList == null) {
-      return toFileURI.apply(listedFileDS());
+      dataList =
+          table.io() instanceof SupportsPrefixOperations
+              ? listWithPrefix()
+              : listWithoutPrefix();
     } else {
-      return toFileURI.apply(filteredCompareToFileList());
+      dataList = filteredCompareToFileList();
     }
+
+    return toFileURI.apply(dataList);
+  }
+
+  @VisibleForTesting
+  Dataset<String> listWithPrefix() {
+    List<String> matchingFiles = Lists.newArrayList();
+    PathFilter pathFilter = PartitionAwareHiddenPathFilter.forSpecs(table.specs());
+
+    Iterator<org.apache.iceberg.io.FileInfo> iterator =
+        ((SupportsPrefixOperations) table.io()).listPrefix(location).iterator();
+    while (iterator.hasNext()) {
+      org.apache.iceberg.io.FileInfo fileInfo = iterator.next();
+      if (fileInfo.createdAtMillis() < olderThanTimestamp
+          && pathFilter.accept(new Path(fileInfo.location()))) {
+        matchingFiles.add(fileInfo.location());
+      }
+    }
+
+    JavaRDD<String> matchingFileRDD = sparkContext().parallelize(matchingFiles, 1);
+    return spark().createDataset(matchingFileRDD.rdd(), Encoders.STRING());
   }

-  private Dataset<String> listedFileDS() {
+  @VisibleForTesting
+  Dataset<String> listWithoutPrefix() {

Review Comment:
   I had to make the method package-private in order to test it, using `@VisibleForTesting`; not sure how else to call the `listWithoutPrefix()` method.

   By default, current executions use `HadoopFileIO`, which implements `DelegateFileIO` -> `SupportsPrefixOperations`, so `listWithPrefix` is called ([mentioned in this comment too](https://github.com/apache/iceberg/pull/7914#issuecomment-2212775830)).

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
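A hedged sketch of the issue flagged in the first review comment: with Hadoop-style recursive listing, a `PathFilter` is consulted for each directory during traversal, whereas a flat prefix listing yields only full file paths, so a filter that checks just the leaf can let files under hidden directories (such as `data/_c2_trunc/file.txt`) slip through. The names below (`PrefixListingFilterSketch`, `isHiddenSegment`, `accept`) and the simplified hidden-path rule are hypothetical illustrations, not the actual `PartitionAwareHiddenPathFilter` implementation.

```java
import java.util.Set;

public class PrefixListingFilterSketch {

  // Simplified stand-in for a hidden-path rule: a segment starting with
  // '_' or '.' is hidden, unless it looks like a partition directory
  // ("<partitionFieldName>=..."), mirroring the partition-aware exception.
  static boolean isHiddenSegment(String segment, Set<String> partitionNames) {
    if (!segment.startsWith("_") && !segment.startsWith(".")) {
      return false;
    }
    for (String name : partitionNames) {
      if (segment.startsWith(name + "=")) {
        return false; // e.g. "_c2_trunc=ab" kept when "_c2_trunc" is a partition field
      }
    }
    return true;
  }

  // With a flat prefix listing there is no per-directory recursion, so the
  // filter must be applied to EVERY segment of the returned path, not just
  // the leaf; otherwise files under hidden directories slip through.
  static boolean accept(String path, Set<String> partitionNames) {
    for (String segment : path.split("/")) {
      if (isHiddenSegment(segment, partitionNames)) {
        return false;
      }
    }
    return true;
  }

  public static void main(String[] args) {
    Set<String> parts = Set.of("_c2_trunc");
    // A leaf-only check would accept this path; the whole-path check rejects it.
    System.out.println(accept("data/_hidden/file.txt", parts));      // false
    System.out.println(accept("data/_c2_trunc=ab/file.txt", parts)); // true
    System.out.println(accept("data/file.txt", parts));              // true
  }
}
```

The design point is simply that the filtering contract changes between recursive and prefix listing, which would explain why the two list methods disagree on `data/_c2_trunc/file.txt` in the failing test.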