Pear0 opened a new issue, #47861:
URL: https://github.com/apache/arrow/issues/47861
### Describe the bug, including details regarding any error messages, version, and platform.
While Arrow is designed for tall tables, I sometimes end up with a very
wide table, and `to_pandas()` appears to use O(N^2) memory, where N is the
number of extension-array columns.
For example, the following snippet peaks at about 7 GB of memory on my
machine:
```python
import pyarrow as pa
import pandas as pd
t = pa.table({f'col_{i}': pa.array([], type=pa.int64()) for i in range(10000)})
# with just this line, the process uses 7GB of memory at peak
t.to_pandas(types_mapper={pa.int64(): pd.ArrowDtype(pa.int64())}.get)
# with just this line, the process uses 118MB of memory at peak
t.to_pandas()
```
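For reference, here is one way the peak numbers above can be measured. This is a hedged sketch, not part of the original report: it assumes a Unix-like platform where the stdlib `resource` module is available, and it reads `ru_maxrss`, which captures native (Arrow/jemalloc) allocations that `tracemalloc` would miss. Note the unit difference: `ru_maxrss` is kibibytes on Linux but bytes on macOS.

```python
import resource
import sys

def peak_rss_mib() -> int:
    """Return this process's peak resident set size in MiB.

    ru_maxrss is reported in KiB on Linux and in bytes on macOS,
    so normalize before converting to MiB.
    """
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    if sys.platform == "darwin":
        rss //= 1024  # bytes -> KiB
    return rss // 1024  # KiB -> MiB

# Call peak_rss_mib() after t.to_pandas(...) to compare the two code paths.
print(peak_rss_mib())
```

Because `ru_maxrss` is a high-water mark for the whole process, each variant of the `to_pandas()` call should be measured in a fresh interpreter.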
### Component(s)
Python