On Fri, Oct 21, 2011 at 07:52:50AM +0200, Andi Kleen wrote:
> @@ -776,13 +778,18 @@ alloc_page (unsigned order)
>          extras on the freelist.  (Can only do this optimization with
>          mmap for backing store.)  */
>       struct page_entry *e, *f = G.free_pages;
> -      int i;
> +      int i, entries = GGC_QUIRE_SIZE;

From: Andi Kleen
There were some concerns that the earlier munmap patch could lead
to address space being freed that cannot be allocated again by ggc
due to fragmentation. This patch adds a fragmentation fallback to
solve this: when a GGC_QUIRE_SIZE-sized allocation fails, try again
with a page-sized allocation.
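
For illustration, here is a minimal sketch of the fallback in
alloc_page. It assumes alloc_anon grows a flag that makes allocation
failure non-fatal (returning NULL instead of aborting); the names
follow ggc-page.c, but this is a sketch of the idea, not the patch
itself:

      /* Try to grab a GGC_QUIRE_SIZE chunk first.  With the (assumed)
	 check flag false, alloc_anon reports failure by returning NULL
	 rather than aborting.  */
      page = alloc_anon (NULL, G.pagesize * GGC_QUIRE_SIZE, false);
      if (page == NULL)
	{
	  /* The address space is too fragmented for the large chunk:
	     fall back to a single page, which is far more likely to
	     succeed.  Failure here is still fatal.  */
	  page = alloc_anon (NULL, G.pagesize, true);
	  entries = 1;
	}

With entries tracking how many pages were actually obtained, the
code that pushes the extras onto the freelist (visible in the quoted
context above) only has to hand back the pages that really exist.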