amogh-jahagirdar commented on code in PR #13555:
URL: https://github.com/apache/iceberg/pull/13555#discussion_r2217975460


##########
spark/v4.0/spark/src/main/java/org/apache/iceberg/spark/source/SparkWriteBuilder.java:
##########
@@ -120,12 +120,15 @@ public WriteBuilder overwrite(Filter[] filters) {
   @Override
   public Write build() {
     // The write schema should only include row lineage in the output if it's an overwrite
-    // operation.
+    // operation or if it's a compaction.
     // In any other case, only null row IDs and sequence numbers would be produced which
     // means the row lineage columns can be excluded from the output files
-    boolean writeIncludesRowLineage = TableUtil.supportsRowLineage(table) && overwriteFiles;
+    boolean writeIncludesRowLineage =
+        TableUtil.supportsRowLineage(table)
+            && (overwriteFiles || writeConf.rewrittenFileSetId() != null);
     StructType sparkWriteSchema = dsSchema;
-    if (writeIncludesRowLineage) {
+    if (writeIncludesRowLineage
+        && !dsSchema.exists(field -> field.name().equals(MetadataColumns.ROW_ID.name()))) {

Review Comment:
   Since we're hijacking the Spark table schema to include the row lineage fields, the output schema Spark hands back to the writer will naturally contain the row lineage fields as well. The check is required to prevent adding the row lineage fields to the Spark output schema twice. I separated this into two booleans, `writeRequiresRowLineage` and `writeAlreadyIncludesRowLineage`, for clarity.
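   
   For reference, a minimal sketch of what that two-boolean split could look like in `build()`, using only the names visible in the hunk above (`table`, `writeConf`, `overwriteFiles`, `dsSchema`); the statements inside the `if` block are outside this diff, so they are only indicated by a comment:
   
   ```java
   // require row lineage for overwrites and compactions on tables that support it
   boolean writeRequiresRowLineage =
       TableUtil.supportsRowLineage(table)
           && (overwriteFiles || writeConf.rewrittenFileSetId() != null);
   // Spark may already hand back a schema containing _row_id when the table schema
   // was extended with the lineage fields, so check before appending them again
   boolean writeAlreadyIncludesRowLineage =
       dsSchema.exists(field -> field.name().equals(MetadataColumns.ROW_ID.name()));
   
   StructType sparkWriteSchema = dsSchema;
   if (writeRequiresRowLineage && !writeAlreadyIncludesRowLineage) {
     // append the row lineage metadata columns to sparkWriteSchema exactly once
     // (the concrete statements are not part of this diff hunk)
   }
   ```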



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

