amogh-jahagirdar commented on code in PR #6651:
URL: https://github.com/apache/iceberg/pull/6651#discussion_r1098060983
##########
spark/v3.3/spark/src/main/java/org/apache/iceberg/spark/source/SparkTable.java:
##########
@@ -247,9 +247,6 @@ public ScanBuilder newScanBuilder(CaseInsensitiveStringMap
options) {
@Override
public WriteBuilder newWriteBuilder(LogicalWriteInfo info) {
- Preconditions.checkArgument(
- snapshotId == null, "Cannot write to table at a specific snapshot:
%s", snapshotId);
Review Comment:
I see now. I think this goes back to @rdblue's point here:
https://github.com/apache/iceberg/pull/6651#discussion_r1085898935.
Instead of passing the snapshot through the constructor, we could pass in
all the options directly; the snapshot ID can then be resolved in the scan
itself. That seems cleaner, and I think it will let us avoid the removals
we do here:
https://github.com/apache/iceberg/pull/6717/files#diff-d278772fd3dc1431367d81a075a79404d9e1acff28fab611ad4e3d1343133596R357.
It seems the constructor for SparkTable with a snapshot ID was created only
for the purpose of selecting the right schema, and I think after the
refactoring it won't even be needed.
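To illustrate the direction I mean, here is a minimal sketch in plain Java (no Spark/Iceberg dependencies, all class and option names hypothetical): the scan receives the whole options map and resolves the snapshot ID lazily itself, so the table no longer needs a snapshot-ID constructor at all.

```java
import java.util.Map;

// Hypothetical sketch: the options map is passed through whole, and the
// scan resolves the snapshot ID itself instead of the table receiving it
// via a dedicated constructor.
class SketchScan {
  private final Map<String, String> options;

  SketchScan(Map<String, String> options) {
    this.options = options;
  }

  // Resolve the snapshot ID from the read options; null means "current".
  Long snapshotId() {
    String raw = options.get("snapshot-id"); // assumed option key
    return raw == null ? null : Long.parseLong(raw);
  }
}

public class Demo {
  public static void main(String[] args) {
    SketchScan pinned = new SketchScan(Map.of("snapshot-id", "12345"));
    System.out.println(pinned.snapshotId()); // prints 12345

    SketchScan current = new SketchScan(Map.of());
    System.out.println(current.snapshotId()); // prints null
  }
}
```

With this shape, the write path can simply check the same options for a pinned snapshot, rather than relying on state threaded through the table's constructor.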
Does that make sense? I think we can discuss more on #6717 since it
pertains to that. I also think:
1.) We should get #6717 in first, since that's an important fix.
2.) In #6717 we can handle all the refactoring I mentioned above, and then
we can rebase this PR to get unblocked.
CC: @rdblue , @aokolnychyi
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]