amogh-jahagirdar commented on code in PR #9255:
URL: https://github.com/apache/iceberg/pull/9255#discussion_r1428890835


##########
spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/source/SparkWrite.java:
##########
@@ -673,11 +673,11 @@ public DataWriter<InternalRow> createWriter(int partitionId, long taskId, long e
       Table table = tableBroadcast.value();
       PartitionSpec spec = table.specs().get(outputSpecId);
       FileIO io = table.io();
-
+      String operationId = queryId + "-" + epochId;

Review Comment:
   I still wasn't able to repro this via a unit test, so I'm not 100% sure this
really fixes the problem. It's based on behaviors reported by the original bug
reporters.
   
   As for an ETA, there are no guarantees, since it depends on whether we as a
community conclude that this really is the right fix, but I am aiming to get it
into the 1.5 release (maybe the 1.4.3 release, since quite a few users have
reported this issue and it does lead to corruption of their data). I'll spend a
bit more time digging into this and bring it up on the mailing list thread for 1.4.3.
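   
   For context, here is a minimal sketch of why folding the `epochId` into the operation id can matter. It is not the actual `SparkWrite` code; the class and file-name format below are hypothetical, and it assumes the operation id ends up embedded in generated data file names. Under that assumption, an operation id built only from the `queryId` could let two micro-batch epochs of the same streaming query produce identical file names, while the per-epoch id keeps them distinct:
   
   ```java
   // Hypothetical sketch, not Iceberg's writer code: shows how a per-epoch
   // operation id keeps generated file names unique across streaming epochs.
   public class OperationIdSketch {
   
     // Hypothetical helper: builds a file name the way a task writer might,
     // embedding the operation id so two epochs of the same query and task
     // never collide on the same path.
     static String dataFileName(String queryId, long epochId, int partitionId, long taskId) {
       String operationId = queryId + "-" + epochId; // per-epoch, as in the diff above
       return String.format("%s-%05d-%d.parquet", operationId, partitionId, taskId);
     }
   
     public static void main(String[] args) {
       // Same query, partition, and task across two epochs now yields distinct names.
       System.out.println(dataFileName("query-abc", 1L, 0, 42L));
       System.out.println(dataFileName("query-abc", 2L, 0, 42L));
     }
   }
   ```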



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

