mgmarino commented on code in PR #12868:
URL: https://github.com/apache/iceberg/pull/12868#discussion_r2079911493


##########
spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/source/SerializableTableWithSize.java:
##########
@@ -33,8 +33,9 @@
  *
  * <p>This class also implements AutoCloseable to avoid leaking resources upon broadcasting.
  * Broadcast variables are destroyed and cleaned up on the driver and executors once they are
- * garbage collected on the driver. The implementation ensures only resources used by copies of the
- * main table are released.
+ * garbage collected on the driver. The implementation should avoid closing deserialized copies of
+ * shared resources like FileIO, as they may use a shared connection pool. Shutting down the pool

Review Comment:
   I think that's generally fine (each instance uses its own dedicated client), but I suspect the issue is more like the following (sorry, it's been a while since I looked into this):
   
   - SerializableTable is broadcast to the executor.
   - A task obtains the IO object from the serializable table and starts using it.
   - At some point, Spark requests that the serializable table be persisted to disk or its block be released. During this process the IO is closed.
   - The IO object is still in use elsewhere on the executor and, when it is used *again*, this results in the failure (see the sketch below).
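   
   To make the sequence concrete, here is a minimal sketch (not actual Iceberg/Spark code) of what an executor task might do; the class, method, and variable names are hypothetical, and it assumes the close in step 2 is the one triggered when Spark releases the broadcast block:
   
   ```java
   import org.apache.iceberg.Table;
   import org.apache.iceberg.io.FileIO;
   import org.apache.iceberg.io.InputFile;
   
   class BroadcastIoRaceSketch {
   
     // `broadcastTable` stands in for the deserialized broadcast SerializableTable
     // on the executor; `manifestPath` is just an example file the task reads.
     void illustrate(Table broadcastTable, String manifestPath) throws Exception {
       // 1. A task grabs the IO object from the broadcast table and uses it.
       FileIO io = broadcastTable.io();
       io.newInputFile(manifestPath).newStream().close();
   
       // 2. Spark later persists the broadcast block to disk / releases the block.
       //    If that path ends up closing the table's FileIO (e.g. via the table's
       //    AutoCloseable hook), a shared connection pool can be shut down too.
       io.close();
   
       // 3. The task (or another one sharing the same IO) uses it *again*; with
       //    the underlying client/pool closed, this is where the failure surfaces.
       InputFile manifest = io.newInputFile(manifestPath);
       manifest.newStream().close();
     }
   }
   ```
   
   That second use after the close is why the javadoc change above says the implementation should avoid closing deserialized copies of shared resources like FileIO.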



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

