amogh-jahagirdar commented on code in PR #6651: URL: https://github.com/apache/iceberg/pull/6651#discussion_r1098060983
##########
spark/v3.3/spark/src/main/java/org/apache/iceberg/spark/source/SparkTable.java:
##########

@@ -247,9 +247,6 @@ public ScanBuilder newScanBuilder(CaseInsensitiveStringMap options) {

   @Override
   public WriteBuilder newWriteBuilder(LogicalWriteInfo info) {
-    Preconditions.checkArgument(
-        snapshotId == null, "Cannot write to table at a specific snapshot: %s", snapshotId);

Review Comment:
   I see now. I think this goes back to @rdblue's point here: https://github.com/apache/iceberg/pull/6651#discussion_r1085898935. Instead of passing the snapshot through the constructor, we should pass in all of the options directly; the snapshot ID can then be resolved in the scan itself. That seems cleaner, and I think it gets us out of the removals made here: https://github.com/apache/iceberg/pull/6717/files#diff-d278772fd3dc1431367d81a075a79404d9e1acff28fab611ad4e3d1343133596R357. Does that make sense? I think we can discuss this further on https://github.com/apache/iceberg/pull/6717/files# since 1.) we should get #6717 in first, as it is an important fix, and 2.) #6717 can handle all of this refactoring, after which this PR can be rebased to unblock it.
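   As a rough sketch of the direction above (names here are simplified stand-ins, not the actual SparkTable/SparkScanBuilder implementation), the idea is that the scan builder receives the read options and resolves the snapshot ID itself, so the table never carries a `snapshotId` field and `newWriteBuilder` needs no precondition:

   ```java
   import java.util.Map;

   // Hypothetical illustration only: snapshot resolution lives with the scan,
   // driven by the read options, rather than being threaded through the table
   // constructor.
   class ScanBuilderSketch {
     private final Map<String, String> options; // e.g. Spark's CaseInsensitiveStringMap

     ScanBuilderSketch(Map<String, String> options) {
       this.options = options;
     }

     // Resolve the snapshot ID lazily from the options when building the scan.
     Long resolveSnapshotId() {
       String value = options.get("snapshot-id");
       return value == null ? null : Long.parseLong(value);
     }
   }
   ```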