steveloughran commented on PR #10233: URL: https://github.com/apache/iceberg/pull/10233#issuecomment-2590997379
For anyone watching this: there's a full integration test suite in the Hadoop test code: https://github.com/apache/hadoop/pull/7285. All is good, though as it's the first Java 17 code and depends on an Iceberg jar with this patch in it, it's actually dependent on this PR going in first.

It does show that yes, bulk deletes through the S3A code do work, without me having to add a significant piece of test code to Iceberg to wire up HadoopFileIO to the MinIO docker container that S3FileIO uses. It also makes for a good regression test: code downstream of both projects would otherwise get neglected.

Anyway, with those tests happy, I just have a few more tests to write (mixing in some local files too) and then I'll be confident this is ready for review.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org