amogh-jahagirdar commented on code in PR #6651:
URL: https://github.com/apache/iceberg/pull/6651#discussion_r1098097479
##########
spark/v3.3/spark/src/main/java/org/apache/iceberg/spark/source/SparkTable.java:
##########
@@ -247,9 +247,6 @@ public ScanBuilder newScanBuilder(CaseInsensitiveStringMap options) {
@Override
public WriteBuilder newWriteBuilder(LogicalWriteInfo info) {
- Preconditions.checkArgument(
- snapshotId == null, "Cannot write to table at a specific snapshot: %s", snapshotId);
Review Comment:
@namrathamyske Yeah, just updated to use the name `write-branch` and tests
are passing. The issue is that the name `branch` is used for both read and
write options, so when `loadTable` is performed during the write, it is
treated as time travel. We should disambiguate the two and call the option
something else for the write case. `write-branch` sounds a bit odd to me tbh;
maybe we go with `toBranch`, which would be consistent with what's at the API
level and with what's being done in the [Flink
PR](https://github.com/apache/iceberg/pull/6660)
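
To sketch the disambiguation idea: the snippet below is a minimal, hypothetical illustration (the class and method names are made up for this comment, not Iceberg's actual API). Reads would keep resolving the `branch` option as time travel, while writes would look only at a separate `toBranch` key, so a write never accidentally triggers time-travel resolution during `loadTable`.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of keeping read and write branch options distinct.
// None of these names come from the Iceberg codebase; they only illustrate
// why a separate write-side key avoids the time-travel ambiguity.
public class BranchOptions {

  // Read path: "branch" means time travel to that branch's state.
  public static String readBranch(Map<String, String> options) {
    return options.get("branch");
  }

  // Write path: a distinct "toBranch" key names the write target,
  // so the read path above never sees it as a time-travel request.
  public static String writeBranch(Map<String, String> options) {
    return options.get("toBranch");
  }

  public static void main(String[] args) {
    Map<String, String> opts = new HashMap<>();
    opts.put("toBranch", "audit");
    System.out.println(writeBranch(opts)); // the write target
    System.out.println(readBranch(opts));  // null: no time travel triggered
  }
}
```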
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]