amogh-jahagirdar commented on code in PR #13061: URL: https://github.com/apache/iceberg/pull/13061#discussion_r2103007012
########## spark/v3.5/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestRowLevelOperationsWithLineage.java:
##########
@@ -81,6 +87,40 @@
           record -> createRecord(SCHEMA, 103, "d", 3L, 1L),
           createRecord(SCHEMA, 104, "e", 4L, 1L));

+  @Parameters(
+      name =
+          "catalogName = {0}, implementation = {1}, config = {2},"
+              + " format = {3}, vectorized = {4}, distributionMode = {5},"
+              + " fanout = {6}, branch = {7}, planningMode = {8}, formatVersion = {9}")
+  public static Object[][] parameters() {
+    return new Object[][] {
+      {
+        "testhadoop",
+        SparkCatalog.class.getName(),
+        ImmutableMap.of("type", "hadoop"),
+        FileFormat.PARQUET,
+        false,
+        WRITE_DISTRIBUTION_MODE_HASH,
+        true,
+        null,
+        LOCAL,

Review Comment:
   Fair point. I mostly used these parameters because this class inherits from the existing DML test class, but I think we can slim the matrix down by making some of the values constant. We do still need to vary file format and vectorized reads, because those parameters validate that the readers plumb row-lineage inheritance correctly. Testing row lineage against a single branch probably doesn't add value, but we should have a test that writes to main and then to another branch, to make sure row ID assignment and sequence numbers stay correct. Let me see if I can slim these down as part of this change.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
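The slimming the comment proposes could be sketched as below. This is a hypothetical illustration, not the actual Iceberg test code: the class name `ParamMatrix`, the `FileFormat` enum, and the constant names are all assumptions. The idea is to hold the dimensions that do not affect row lineage (distribution mode, fanout, planning mode) constant, and build the matrix only over file format and vectorized reads, since those are the parameters that exercise the reader inheritance plumbing.

```java
// Hypothetical sketch: shrink a parameterized-test matrix by holding most
// dimensions constant and varying only the dimensions that exercise distinct
// code paths (file format and vectorized reads). All names are illustrative.
public class ParamMatrix {
    enum FileFormat { PARQUET, ORC, AVRO }

    // Held constant: assumed not to affect row-lineage behavior.
    static final String DISTRIBUTION_MODE = "hash";
    static final boolean FANOUT = true;
    static final String PLANNING_MODE = "local";

    static Object[][] parameters() {
        java.util.List<Object[]> rows = new java.util.ArrayList<>();
        for (FileFormat format : FileFormat.values()) {
            // Vectorized reads only apply to the columnar Parquet path here,
            // so only Parquet gets both vectorized and non-vectorized rows.
            boolean[] vectorizedOptions =
                format == FileFormat.PARQUET
                    ? new boolean[] {true, false}
                    : new boolean[] {false};
            for (boolean vectorized : vectorizedOptions) {
                rows.add(new Object[] {
                    format, vectorized, DISTRIBUTION_MODE, FANOUT, PLANNING_MODE
                });
            }
        }
        return rows.toArray(new Object[0][]);
    }

    public static void main(String[] args) {
        // 2 Parquet rows + 1 ORC + 1 Avro = 4 rows instead of a full
        // cross product over every dimension.
        System.out.println(parameters().length);
    }
}
```

A separate, non-parameterized test could then cover the branch scenario the comment describes: write to main, write to a second branch, and assert that row IDs and sequence numbers are assigned correctly across both.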