https://gcc.gnu.org/bugzilla/show_bug.cgi?id=124551

--- Comment #1 from Jonathan Wakely <redi at gcc dot gnu.org> ---
We could move the move_nonempty_chunks() function into the ~_TPools destructor,
so that we only do one loop which moves chunks out of a pool and then releases
that pool. That would free memory as we go, making it less likely that an insert
call would run out of memory.
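A minimal single-file sketch of that combined loop, with hypothetical Pool/Chunk types standing in for the libstdc++ internals (_TPools, move_nonempty_chunks):

```cpp
#include <utility>
#include <vector>

// Illustrative model only: 'Pool', 'Chunk', and 'drain_pools' are made-up
// stand-ins for the real libstdc++ types.
struct Chunk { bool empty = false; };

struct Pool {
    std::vector<Chunk> chunks;
    void release() {              // give this pool's storage back now
        chunks.clear();
        chunks.shrink_to_fit();
    }
};

// One combined pass: transfer each pool's non-empty chunks to the shared
// vector, then release that pool before touching the next one, so memory
// is freed as we go and a later push_back is less likely to hit bad_alloc.
void drain_pools(std::vector<Pool>& local, std::vector<Chunk>& shared)
{
    for (Pool& p : local) {
        for (Chunk& c : p.chunks)
            if (!c.empty)
                shared.push_back(std::move(c)); // can still throw
        p.release();
    }
}
```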

We could also count all the non-empty chunks first, and try to reserve exactly
the required size in the shared _TPools, instead of letting it grow by 1.5
times the current capacity. That might also make bad_alloc less likely.
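The count-then-reserve idea can be sketched like this (names are hypothetical, not the real _TPools members):

```cpp
#include <cstddef>
#include <vector>

struct Chunk { bool empty = false; };

void merge_exact(const std::vector<std::vector<Chunk>>& pools,
                 std::vector<Chunk>& shared)
{
    // First pass: count the non-empty chunks across all pools.
    std::size_t needed = 0;
    for (const auto& p : pools)
        for (const auto& c : p)
            if (!c.empty)
                ++needed;

    // One up-front allocation of exactly the required capacity, instead of
    // repeated geometric growth; this is now the only point that can throw.
    shared.reserve(shared.size() + needed);

    // Second pass: the inserts cannot throw, capacity is already reserved.
    for (const auto& p : pools)
        for (const auto& c : p)
            if (!c.empty)
                shared.push_back(c);
}
```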

I don't see a way to guarantee no bad_alloc, except to pre-allocate space in
the shared pools every time we create a new chunk in a thread-local pool. That
would ensure that we can always insert thread-local chunks into the shared
pools on thread exit, but would increase memory overhead and make replenishing
a pool slower.
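A hypothetical single-threaded sketch of that pre-allocation scheme (the real shared pools would also need locking, omitted here):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

struct Chunk { bool empty = false; };

struct SharedPools {
    std::vector<Chunk> chunks;
    std::size_t pending = 0; // thread-local chunks that might arrive later

    // Called whenever a thread-local pool creates a chunk. Any bad_alloc is
    // raised here, where the caller can still fail the original allocation
    // cleanly instead of losing chunks at thread exit.
    void reserve_slot() {
        ++pending;
        chunks.reserve(chunks.size() + pending);
    }

    // Called at thread exit: capacity is already reserved, so push_back
    // cannot allocate and cannot throw.
    void adopt(Chunk c) {
        chunks.push_back(std::move(c));
        --pending;
    }
};
```

The cost is visible in the sketch: every reserve_slot() holds shared capacity that may never be used, and it adds work to the chunk-creation path.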

The right fix might be to allow the shared pools to maintain a linked list
where we can transfer ownership of the non-empty chunks, without needing to
allocate more memory. We could keep the fast vector of chunks for the common
case, and fall back to a non-allocating list when we can't grow the vector.
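A sketch of that hybrid container, assuming each chunk can carry an intrusive 'next' hook (all names hypothetical): if growing the vector throws, ownership is transferred by two pointer writes instead, with no allocation at all.

```cpp
#include <new>
#include <vector>

struct Chunk {
    bool empty = false;
    Chunk* next = nullptr; // intrusive list hook, unused on the fast path
};

struct SharedPools {
    std::vector<Chunk*> vec;   // common case: contiguous, cache-friendly
    Chunk* overflow = nullptr; // fallback: singly linked, never allocates

    void adopt(Chunk* c) {
        try {
            vec.push_back(c);       // fast path, may throw bad_alloc
        } catch (const std::bad_alloc&) {
            c->next = overflow;     // guaranteed to succeed: no allocation
            overflow = c;
        }
    }
};
```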
