namrathamyske commented on code in PR #6651: URL: https://github.com/apache/iceberg/pull/6651#discussion_r1098126130
########## spark/v3.3/spark/src/main/java/org/apache/iceberg/spark/source/SparkTable.java: ##########

```diff
@@ -247,9 +247,6 @@ public ScanBuilder newScanBuilder(CaseInsensitiveStringMap options) {
   @Override
   public WriteBuilder newWriteBuilder(LogicalWriteInfo info) {
-    Preconditions.checkArgument(
-        snapshotId == null, "Cannot write to table at a specific snapshot: %s", snapshotId);
```

Review Comment:
   But I think we can't disregard calling `loadTable` with respect to the ref that was passed. Later, when we implement session configs for testing `INSERT` and `DELETE` operations, there will be a lot of overlap between read and write. Spark logical plans call `SparkScanBuilder` at
   https://github.com/apache/iceberg/blob/32a8ef52ddf20aa2068dfff8f9e73bd5d27ef610/spark/v3.3/spark/src/main/java/org/apache/iceberg/spark/source/SparkScanBuilder.java#L260,
   https://github.com/apache/iceberg/blob/32a8ef52ddf20aa2068dfff8f9e73bd5d27ef610/spark/v3.3/spark/src/main/java/org/apache/iceberg/spark/source/SparkScanBuilder.java#L424, and
   https://github.com/apache/iceberg/blob/32a8ef52ddf20aa2068dfff8f9e73bd5d27ef610/spark/v3.3/spark/src/main/java/org/apache/iceberg/spark/source/SparkScanBuilder.java#L393,
   which should use the time-travel config. `SparkCopyOnWrite` and `SparkMergeOnRead` each have their own scanner inheriting from `SparkScanBuilder`. I will include the changes in this PR; it is still WIP.

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
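For context on the diff above: the removed precondition rejected every write whenever a snapshot ID was pinned, which also blocks writes through a branch ref. Branch-aware writing instead needs to distinguish a branch (writable) from a pinned snapshot or tag (read-only time travel). A minimal, self-contained sketch of that validation logic (hypothetical class and method names, not Iceberg's actual API):

```java
// Hypothetical sketch: branch-aware write validation. Writes through a
// branch ref are allowed; writes pinned to a snapshot ID or a tag are
// rejected, since those represent read-only time travel.
public class WriteRefValidator {

  enum RefType { BRANCH, TAG }

  static void checkWritable(Long snapshotId, String refName, RefType refType) {
    // A pinned snapshot with no named ref is classic time travel: read-only.
    if (refName == null && snapshotId != null) {
      throw new IllegalArgumentException(
          "Cannot write to table at a specific snapshot: " + snapshotId);
    }
    // Tags are immutable pointers; only branches accept new commits.
    if (refName != null && refType == RefType.TAG) {
      throw new IllegalArgumentException("Cannot write to tag: " + refName);
    }
  }

  public static void main(String[] args) {
    // Branch write: passes validation.
    checkWritable(null, "audit-branch", RefType.BRANCH);

    // Snapshot-pinned write: rejected.
    boolean rejected = false;
    try {
      checkWritable(12345L, null, null);
    } catch (IllegalArgumentException e) {
      rejected = true;
    }
    System.out.println("snapshot write rejected: " + rejected);
  }
}
```

The point of the sketch is that the ref type, not the mere presence of a snapshot ID, should drive the precondition once branch writes are supported.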