cplager opened a new issue, #45516:
URL: https://github.com/apache/arrow/issues/45516

   ### Describe the bug, including details regarding any error messages, version, and platform.
   
   I am using pyarrow 16.1.0 and pandas 1.5.3.
   
   I have a data sample containing over 11,000 stock tickers. I am trying to write a Hive-partitioned Parquet dataset partitioned on ["First letter of ticker", "ticker", "year"].
   
   Writing from pandas (using `to_parquet` with the pyarrow engine) and writing directly from pyarrow (`write_to_dataset`) both freeze after writing the 3897th file (out of what should be more than 11,000 Parquet files). The code hangs right after creating the directory into which the 3898th file should be written.
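   
   A rough sketch of the two write paths described above (the synthetic data and column names are placeholders for illustration, and the `max_partitions` override is an assumption, since partitioning on `ticker` alone exceeds pyarrow's default limit of 1024 partitions per write):
   
   ```python
   import string
   
   import numpy as np
   import pandas as pd
   import pyarrow as pa
   import pyarrow.parquet as pq
   
   # Synthetic stand-in for the real data: > 11,000 distinct tickers,
   # a few years of rows for each one.
   tickers = [a + b + c
              for a in string.ascii_uppercase
              for b in string.ascii_uppercase
              for c in string.ascii_uppercase][:11_000]
   years = [2021, 2022, 2023]
   df = pd.DataFrame({
       "ticker": np.repeat(tickers, len(years)),
       "year": np.tile(years, len(tickers)),
       "close": np.random.default_rng(0).random(len(tickers) * len(years)),
   })
   df["first_letter"] = df["ticker"].str[0]
   partition_cols = ["first_letter", "ticker", "year"]
   
   # Path 1: pandas with the pyarrow engine.
   df.to_parquet("dataset_pandas", engine="pyarrow",
                 partition_cols=partition_cols, max_partitions=50_000)
   
   # Path 2: pyarrow directly.
   table = pa.Table.from_pandas(df)
   pq.write_to_dataset(table, root_path="dataset_pyarrow",
                       partition_cols=partition_cols, max_partitions=50_000)
   ```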
   
   ### Component(s)
   
   Python

