rahij opened a new issue, #357:
URL: https://github.com/apache/iceberg-python/issues/357

   ### Feature Request / Improvement
   
   I am trying to understand how the new Arrow write API can work with 
distributed writes, similar to Spark. I have a use case where, from different 
machines, I would like to write separate Arrow datasets that all get committed 
in the same Iceberg transaction. I assume this should be theoretically possible, 
since it works with Spark, but I was wondering if there are any plans to support 
this in the Arrow write API. Thanks!
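
   To make the use case concrete, here is a minimal stdlib-only sketch of the
pattern being asked for: each machine (simulated here by threads) writes its own
data file independently, and a single coordinator performs one atomic commit
referencing all of them. The `commit_all` function is a stand-in, not a real
PyIceberg API; the comment mapping it to something like `table.add_files(paths)`
is an assumption about how such a feature might look, not current behavior.

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def worker_write(worker_id: int, out_dir: str) -> str:
    # Each worker writes its own data file independently. In a real setup this
    # would be a Parquet file serialized from a pyarrow Table or dataset.
    path = os.path.join(out_dir, f"part-{worker_id}.parquet")
    with open(path, "wb") as f:
        f.write(b"placeholder")  # stands in for serialized Arrow data
    return path

def commit_all(paths):
    # Coordinator step: one atomic commit that references every file written
    # by the workers. Hypothetically, with PyIceberg this could map to
    # something like `table.add_files(paths)` inside a single transaction,
    # but that mapping is an assumption, not a documented API guarantee.
    return sorted(paths)

out_dir = tempfile.mkdtemp()
with ThreadPoolExecutor(max_workers=4) as pool:
    paths = list(pool.map(lambda i: worker_write(i, out_dir), range(4)))
committed = commit_all(paths)
print(len(committed))
```

   The key property being requested is that the per-worker writes produce only
data files, while the metadata commit happens exactly once, so readers never
see a partial result.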


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

