amogh-jahagirdar commented on code in PR #12736:
URL: https://github.com/apache/iceberg/pull/12736#discussion_r2048073422


##########
spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/source/SparkWriteBuilder.java:
##########
@@ -117,7 +119,10 @@ public WriteBuilder overwrite(Filter[] filters) {
   @Override
   public Write build() {
     // Validate
-    Schema writeSchema = validateOrMergeWriteSchema(table, dsSchema, writeConf);
+    Schema writeSchema =
+        validateOrMergeWriteSchema(
+            table, dsSchema, writeConf, TableUtil.formatVersion(table) >= 3 && overwriteFiles);

Review Comment:
   I'll add a comment for this, but in the context of building a regular `SparkWrite`, the only case where we need to include row lineage explicitly in the output schema is the copy-on-write `overwriteFiles` case.
   
   In every other case for `SparkWrite`, both row lineage fields will be explicitly null for all rows produced, and per the spec that means we do not need to write them out. This simplifies things a bit, because otherwise we'd need to explicitly handle append cases in the rules that update the DSv2 output, which isn't strictly necessary. cc @rdblue


