jimwhite opened a new issue, #44030: URL: https://github.com/apache/arrow/issues/44030
### Describe the usage question you have. Please include as many useful details as possible.

I have data from a commercial vendor that is delivered in compressed CSV files (`.csv.gz`). The timestamps are Unix timestamps as decimal integers and come in two flavors, either millisecond (`pa.timestamp('ms', tz='UTC')`) or nanosecond (`pa.timestamp('ns', tz='UTC')`). I've learned that the CSV conversion for these integer timestamps doesn't work because strptime(3) has no format option for them. My workaround is to cast after reading:

```python
table = table.set_column(
    table.column_names.index('window_start'),
    'window_start',
    table.column('window_start').cast(pa.timestamp('ns', tz='UTC')),
)
```

My question is whether I'm missing something about how to do this during `pa.csv.read_csv` (these files are large and I want/need to process them incrementally), and if not, whether I should raise this as an enhancement request (I've looked at many issues around timestamps and haven't found any about this kind of format).

### Component(s)

Python
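A minimal sketch of how the incremental case could be handled today, using pyarrow's streaming CSV reader (`pyarrow.csv.open_csv`) and casting the epoch column batch by batch; the file name and column name here are hypothetical placeholders, and the `.gz` decompression is assumed to be inferred from the path extension as documented for `read_csv`:

```python
import pyarrow as pa
import pyarrow.csv as csv

# Hypothetical names; adjust to the real file and column.
path = "data.csv.gz"  # assumed to be auto-decompressed based on the extension
ts_col = "window_start"
ts_type = pa.timestamp("ns", tz="UTC")  # or pa.timestamp("ms", tz="UTC") for the ms flavor

# Read the epoch column as a plain int64; the timestamp cast happens per batch below.
convert_options = csv.ConvertOptions(column_types={ts_col: pa.int64()})

with csv.open_csv(path, convert_options=convert_options) as reader:
    for batch in reader:
        i = batch.schema.get_field_index(ts_col)
        columns = batch.columns
        # Reinterpret the int64 epoch values in the timestamp's unit.
        columns[i] = columns[i].cast(ts_type)
        schema = batch.schema.set(i, pa.field(ts_col, ts_type))
        batch = pa.RecordBatch.from_arrays(columns, schema=schema)
        # ... process each casted batch incrementally ...
```

This keeps memory usage bounded by the batch size rather than the full table, at the cost of repeating the cast per batch.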