https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114563

Richard Biener <rguenth at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |jakub at gcc dot gnu.org

--- Comment #2 from Richard Biener <rguenth at gcc dot gnu.org> ---
Note this is likely because of release_pages keeping a large freelist when
using madvise.  After r14-9767 this improved to

   5.15%         35482  cc1plus  cc1plus           [.] ggc_internal_alloc

I've tried a quick hack that re-uses the 'prev' field to implement a skip-list,
skipping to the next page entry with a different size.  That works
reasonably well, but it also shows the freelist is heavily fragmented.
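A minimal sketch of that skip-list idea (simplified stand-in types; the
field names and find_free_page helper are illustrative, not GCC's actual
page_entry layout or alloc_page code):

```c
#include <stddef.h>

/* Simplified stand-in for ggc-page's page_entry.  */
struct page_entry
{
  struct page_entry *next;
  /* Re-purposed 'prev' field: points at the first entry of the next
     run with a different 'bytes' size.  */
  struct page_entry *skip;
  size_t bytes;
};

/* Find a free page of exactly WANT bytes.  Since entries with equal
   'bytes' form contiguous runs, a non-matching entry lets us jump
   over its whole run via ->skip instead of walking ->next.  */
static struct page_entry *
find_free_page (struct page_entry *list, size_t want)
{
  while (list)
    {
      if (list->bytes == want)
        return list;
      list = list->skip ? list->skip : list->next;
    }
  return NULL;
}
```

The invariant is that every entry in a same-size run has its skip
pointer aimed at the start of the following run, so a failed size
compare skips the run in one step.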

      N       M       P      Q
 100001   98767   19662  17321
 200001  176918   68336  27167
 300001  228676  164683  27185

Those are stats after N calls to alloc_page, of which M found a free page
to re-use; of those, P used the skip-list to skip at least one entry and
Q followed the ->next link directly.

That does get alloc_page out of the profile.

It might be worth keeping the list sorted in ascending ->bytes order on top
of this, making page-sized allocations O(1) and other sizes effectively
constant-time.
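A sorted insert could look roughly like this (hypothetical helper on a
simplified entry type, not the real ggc-page freelist code):

```c
#include <stddef.h>

/* Simplified stand-in for ggc-page's page_entry.  */
struct page_entry
{
  struct page_entry *next;
  size_t bytes;
};

/* Insert E into the list at *HEAD, keeping entries in ascending
   ->bytes order.  Page-sized entries then cluster at the head, so
   the common allocation finds its size immediately.  */
static void
free_list_insert_sorted (struct page_entry **head, struct page_entry *e)
{
  while (*head && (*head)->bytes < e->bytes)
    head = &(*head)->next;
  e->next = *head;
  *head = e;
}
```

Insertion is still linear in the worst case, but lookups for a given
size stop as soon as a larger entry is seen.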

Of course using N buckets would be the straightforward thing, but then
release_pages would be complicated, especially for malloc page groups,
I guess.
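For reference, the bucketed variant might look like this (a sketch under
the assumption of a fixed page size and a capped number of size classes;
names like bucket_index are made up, not GCC's):

```c
#include <stddef.h>

#define PAGE_SIZE 4096
#define NUM_BUCKETS 16  /* sizes of 1..15 pages; larger lumped into the last */

/* Simplified stand-in for ggc-page's page_entry.  */
struct page_entry
{
  struct page_entry *next;
  size_t bytes;
};

static struct page_entry *buckets[NUM_BUCKETS];

/* Map an entry size to its freelist bucket; entries are assumed to be
   at least one page.  */
static size_t
bucket_index (size_t bytes)
{
  size_t pages = bytes / PAGE_SIZE;
  if (pages < 1)
    pages = 1;
  return pages < NUM_BUCKETS ? pages - 1 : NUM_BUCKETS - 1;
}

static void
bucket_insert (struct page_entry *e)
{
  size_t i = bucket_index (e->bytes);
  e->next = buckets[i];
  buckets[i] = e;
}
```

Lookup becomes O(1) per size class, at the cost of release_pages having
to walk every bucket when deciding what to unmap.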

But as said, this is likely a symptom of the MADVISE path keeping too many
page entries for this testcase, so another angle of attack is to release
them more aggressively.  I don't know how fragmented they are; we don't
seem to try sorting them before unmapping the >= free_unit chunks.
