amogh-jahagirdar commented on code in PR #6651:
URL: https://github.com/apache/iceberg/pull/6651#discussion_r1098060983
##########
spark/v3.3/spark/src/main/java/org/apache/iceberg/spark/source/SparkTable.java:
##########
@@ -247,9 +247,6 @@ public ScanBuilder newScanBuilder(CaseInsensitiveStringMap options) {
   @Override
   public WriteBuilder newWriteBuilder(LogicalWriteInfo info) {
-    Preconditions.checkArgument(
-        snapshotId == null, "Cannot write to table at a specific snapshot: %s", snapshotId);
Review Comment:
I see now. I think we should get #6717 in first, since that's an important fix. For the write path, we can check the write options: if a branch is specified in SparkWriteOptions, the check can be bypassed, but we need to validate that the ref is specifically a branch (rough sketch below). Alternatively, if what @rdblue meant in https://github.com/apache/iceberg/pull/6651#discussion_r1085898935 is that we shouldn't set the snapshot ID in the first place for time travel, that will probably require some refactoring, which we can take on in #6717.
CC: @rdblue, @aokolnychyi
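
To illustrate, here is a minimal sketch of the kind of guard I have in mind, not the actual change: it assumes SparkTable's existing `icebergTable` and `snapshotId` fields, a `"branch"` write option key, and uses `SnapshotRef` plus the relocated Guava `Preconditions` already used in this file. The final option constant and validation should follow whatever #6717 lands on.

```java
// Sketch only: validate the write target before building the write.
// Assumes SparkTable's existing icebergTable/snapshotId fields and a
// "branch" option key; these names are illustrative, not the final API.
private void validateWriteTarget(CaseInsensitiveStringMap options) {
  String branch = options.get("branch"); // assumed option key
  if (branch != null) {
    // Bypass the snapshot-id guard only for branch writes: if the ref
    // already exists it must be a branch, not a tag; a missing ref could
    // be created as a new branch.
    SnapshotRef ref = icebergTable.refs().get(branch);
    Preconditions.checkArgument(
        ref == null || ref.isBranch(), "Cannot write to non-branch ref: %s", branch);
  } else {
    // Keep the original guard so time-travel scans remain read-only.
    Preconditions.checkArgument(
        snapshotId == null, "Cannot write to table at a specific snapshot: %s", snapshotId);
  }
}
```

`newWriteBuilder(LogicalWriteInfo info)` could then call `validateWriteTarget(info.options())` in place of the removed precondition.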