> Thanks for review!
>
Only that frame_vector_to_pages() returns int whereas the potential
range that is converted is unsigned int. I don't think there are any
mistakes dealing with signed/unsigned but I don't see any advantage of
using unsigned either, and limiting it ...
The error path below still clears the vector, even though we potentially leaked now.
> + for (i = 0; i < vec->nr_frames; i++)
> + put_page(pages[i]);
> + vec->got_ref = 0;
> +out:
> + vec->nr_frames = 0;
> +}
> +EXPORT_SYMBOL(put_vaddr_frames);
> +
> +/**
> + * frame_vector_to_pages - convert frame vector to contain page pointers
> + * @vec: frame vector to convert
> + *
> + * Convert @vec to contain array of page pointers. If the conversion is
> + * successful, return 0. Otherwise return an error.
> + */
If you do another version, it would not hurt to mention that a page
reference is not taken when doing this conversion.
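One possible wording, folding that note into the kernel-doc quoted below (illustrative only, not the patch's final text):

/**
 * frame_vector_to_pages - convert frame vector to contain page pointers
 * @vec: frame vector to convert
 *
 * Convert @vec to contain array of page pointers. If the conversion is
 * successful, return 0. Otherwise return an error. Note that this
 * conversion does not take any page references; the caller must already
 * hold references for the frames (e.g. from get_vaddr_frames()).
 */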
> +int frame_vector_to_pages(struct frame_vector *vec)
> +{
I think it's probably best to make the relevant counters in frame_vector
signed and limit the maximum possible size of it. It's still not putting
any practical limit on the size of the frame_vector.
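For what it's worth, a minimal sketch of that suggestion; the struct follows the patch's frame_vector, while FRAME_VECTOR_MAX_FRAMES and the frame_vector_create() details here are assumptions:

#include <linux/kernel.h>
#include <linux/slab.h>

/* Assumed cap so counts always fit a signed int; it still puts no
 * practical limit on the size of the frame_vector. */
#define FRAME_VECTOR_MAX_FRAMES	(INT_MAX / sizeof(void *))

struct frame_vector {
	int nr_allocated;	/* number of frames we have space for */
	int nr_frames;		/* number of frames stored in ptrs array */
	bool got_ref;		/* did we pin pages by getting page ref? */
	bool is_pfns;		/* does the array contain pages or pfns? */
	void *ptrs[];		/* pinned pages / pfns */
};

struct frame_vector *frame_vector_create(unsigned int nr_frames)
{
	struct frame_vector *vec;

	/* Reject counts a signed counter could not represent. */
	if (nr_frames == 0 || nr_frames > FRAME_VECTOR_MAX_FRAMES)
		return NULL;

	vec = kmalloc(sizeof(*vec) + nr_frames * sizeof(void *), GFP_KERNEL);
	if (!vec)
		return NULL;

	vec->nr_allocated = nr_frames;
	vec->nr_frames = 0;
	vec->got_ref = false;
	vec->is_pfns = false;
	return vec;
}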
--
Mel Gorman
SUSE Labs
> +	if (size <= PAGE_SIZE)
> + pfns = kmalloc(size, GFP_KERNEL);
kmalloc supports larger sizes than this but I suspect this was
deliberate to avoid high-order allocations.
> + else
> + pfns = vmalloc(size);
> + if (!pfns)
> + return NULL;
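Minor aside: with the mixed scheme the free side must check which allocator the buffer came from. A sketch (is_vmalloc_addr() is the stock helper; later kernels wrap this exact pattern in kvfree()):

#include <linux/mm.h>		/* is_vmalloc_addr() */
#include <linux/slab.h>		/* kfree() */
#include <linux/vmalloc.h>	/* vfree() */

/* Free a buffer allocated via either kmalloc() or vmalloc(). */
static void pfns_buf_free(void *pfns)
{
	if (is_vmalloc_addr(pfns))
		vfree(pfns);
	else
		kfree(pfns);
}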
> +
> 		setup_zone_migrate_reserve(zone);
> spin_unlock_irqrestore(&zone->lock, flags);
> }
This is better in that it is not vulnerable to parallel updates of
min_free_kbytes. It would be slightly tidier to introduce something
like cma_wmark_pages() that returns min_cma_pages if CONFIG_CMA and 0
otherwise. Use the helper to get rid of this ifdef CONFIG_CMA within
setup_per_zone_wmarks().
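A sketch of that helper (assuming the patch's per-zone min_cma_pages field; not the final upstream code):

#ifdef CONFIG_CMA
static unsigned long cma_wmark_pages(struct zone *zone)
{
	return zone->min_cma_pages;
}
#else
static unsigned long cma_wmark_pages(struct zone *zone)
{
	return 0;
}
#endif

setup_per_zone_wmarks() can then add cma_wmark_pages(zone) into the watermark calculation unconditionally, with no ifdef at the call site.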
You'll still have the problem of kswapd not taking CMA pages properly into
account when deciding whether to reclaim or not though.
--
Mel Gorman
SUSE Labs
On Fri, Feb 03, 2012 at 01:18:43PM +0100, Marek Szyprowski wrote:
> Welcome everyone again!
>
> This is yet another quick update on Contiguous Memory Allocator patches.
> This version includes another set of code cleanups requested by Mel
> Gorman and a few minor bug fixes.
Process a: min_free_kbytes -= 65536 (after a parallel update from proc
has meanwhile lowered the value); min_free_kbytes now wraps negative and
the machine hangs.
The damage is confined to CMA though so I am not going to lose sleep
over it but you might want to consider at least preventing parallel
updates to min_free_kbytes from proc.
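To make the race concrete and sketch one way to close it (the function name, lock, and conversion here are illustrative, not from the patch):

/*
 * Racy interleaving (reconstructed for illustration):
 *
 *   CMA:  min_free_kbytes += 65536; setup_per_zone_wmarks();
 *   proc: echo 1024 > /proc/sys/vm/min_free_kbytes
 *   CMA:  min_free_kbytes -= 65536;   <- wraps negative
 *
 * Serialising the add/subtract pair with one lock keeps CMA's own
 * adjustment consistent:
 */
static DEFINE_MUTEX(min_free_kbytes_mutex);

static void cma_adjust_min_free_kbytes(long nr_pages)
{
	mutex_lock(&min_free_kbytes_mutex);
	/* pages -> KiB: one page is (1 << (PAGE_SHIFT - 10)) KiB */
	min_free_kbytes += nr_pages << (PAGE_SHIFT - 10);
	setup_per_zone_wmarks();
	mutex_unlock(&min_free_kbytes_mutex);
}

For this to be complete, the proc handler for min_free_kbytes would have to take the same mutex.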
--
Mel Gorman
SUSE Labs
> to other migration types than requested.
>
> Signed-off-by: Michal Nazarewicz
> Signed-off-by: Marek Szyprowski
> Signed-off-by: Kyungmin Park
> Tested-by: Rob Clark
> Tested-by: Ohad Ben-Cohen
> Tested-by: Benjamin Gaignard
Minor comments that can be handled as a follow-up, here and in other places.
>
> Signed-off-by: Michal Nazarewicz
> Signed-off-by: Marek Szyprowski
Acked-by: Mel Gorman
--
Mel Gorman
SUSE Labs
> pages which have incorrect migrate type on free finally
> causes pageblock migration type change from MIGRATE_CMA to MIGRATE_MOVABLE.
I'm not quite seeing this. In free_hot_cold_page(), the pageblock
type is checked so the page private should be set to MIGRATE_CMA or
MIGRATE_ISOLATE for the CMA area. It's not clear how ...
On Mon, Jan 30, 2012 at 04:41:22PM +0100, Michal Nazarewicz wrote:
> On Mon, 30 Jan 2012 12:15:22 +0100, Mel Gorman wrote:
>
> >On Thu, Jan 26, 2012 at 10:00:44AM +0100, Marek Szyprowski wrote:
> >>From: Michal Nazarewicz
> >>@@ -139,3 +139,27 @@ int test_pages_isolated
> 	expand(zone, page, order, current_order, area,
> +is_migrate_cma(migratetype)
> + ? migratetype : start_migratetype);
>
> I'll fix this and the calls to is_pageblock_cma().
>
That makes a lot more sense. Thanks.
I have a vague recollection that there was a ...
> >>	} else if (!locked)
> >>spin_lock_irq(&zone->lru_lock);
> >>
> >>- if (!pfn_valid_within(low_pfn))
> >>+ if (!pfn_valid(low_pfn))
> >> continue;
> >>
> > > This is yet another release of the Contiguous Memory
> > > Allocator patches. This version mainly includes code cleanups requested
> > > by Mel Gorman and a few minor bug fixes.
> >
> > Hi Marek,
> >
> > Thanks for keeping up this work! I really hope it works out
... pages from the wrong zones.
I won't ack this particular patch but I am not going to insist that
you fix these prior to merging either. If you leave problem 3 as it
is, I would really like to see a comment explaining the problem for
future users of CMA on other arches (if they exist).
--
Mel
> +/* Perform direct synchronous page reclaim */
> +static inline int
> +__perform_reclaim(gfp_t gfp_mask, unsigned int order, struct zonelist
> *zonelist,
> + nodemask_t *nodemask)
This function is too large to be inlined. Make it a static int. Once
that is fixed, add:
Acked-by: Mel Gorman
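For clarity, the suggested shape would be something like this (body condensed to the core try_to_free_pages() call; the surrounding bookkeeping in the actual patch is omitted):

/* Perform direct synchronous page reclaim */
static int
__perform_reclaim(gfp_t gfp_mask, unsigned int order,
		  struct zonelist *zonelist, nodemask_t *nodemask)
{
	int progress;

	/* Plain static function: the compiler may still inline it,
	 * but is no longer asked to. */
	current->flags |= PF_MEMALLOC;
	progress = try_to_free_pages(zonelist, order, gfp_mask, nodemask);
	current->flags &= ~PF_MEMALLOC;

	return progress;
}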
> +		int mt = get_pageblock_migratetype(page);
> + if (mt != MIGRATE_ISOLATE && !is_migrate_cma(mt))
> + set_pageblock_migratetype(page,
> + MIGRATE_MOVABLE);
> +
> end marker. The latter change
> removes the implicit MIGRATE_UNMOVABLE from the end of each row which
> was read by __rmqueue_fallback() function.
>
> Signed-off-by: Michal Nazarewicz
> Signed-off-by: Marek Szyprowski
Acked-by: Mel Gorman
--
Mel Gorman
SUSE Labs
> +	if (start != outer_start)
> + free_contig_range(outer_start, start - outer_start);
> + if (end != outer_end)
> + free_contig_range(end, outer_end - end);
> +
> +done:
> + undo_isolate_page_range(pfn_align_to_maxpage_down(start),
> +
>
> +#if defined CONFIG_COMPACTION || defined CONFIG_CMA
> +
This is pedantic but you reference CONFIG_CMA before the patch that
declares it. The only time this really matters is when it breaks
bisection but I do not think that is the case here.
Whether you fix this or not by moving things around is up to you.
On Mon, Jan 30, 2012 at 11:48:20AM +0000, Mel Gorman wrote:
> > + if (!zone)
> > + zone = page_zone(pfn_to_page(pfn));
> > + else if (zone != page_zone(pfn_to_page(pfn)))
> > + break;
> > +
>
> So what ...
> isolate_freepages_block() is left with only minor changes.
>
The minor changes to isolate_freepages_block() look fine in
terms of how current compaction works. I have a minor comment on
isolate_freepages_range() but it is up to you whether to address it
or not.
> The isolate_migratepages() function is implemented as a simple wrapper
> around isolate_migratepages_range().
>
> Signed-off-by: Michal Nazarewicz
> Signed-off-by: Marek Szyprowski
Super, this is much easier to read. I have just one nit below but once
that is fixed;
Acked-by: Mel Gorman
I think what
you need to do is check if a page with a count of 0 is encountered
and, if it is, then a draining of the per-cpu lists is necessary. To
address Gilad's concerns, be sure to only do this once per attempt at
CMA rather than for every page encountered with a count of 0, to avoid
unnecessary IPIs.
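Roughly this shape, perhaps (illustrative only; it glosses over pages that were merged into a larger buddy block, which the real check must handle):

#include <linux/errno.h>
#include <linux/mm.h>
#include <linux/page-flags.h>

/* A page with count 0 that is not in the buddy allocator is most
 * likely sitting on some CPU's per-CPU free list. */
static bool suspicious_free_page(struct page *page)
{
	return page_count(page) == 0 && !PageBuddy(page);
}

static int scan_range(unsigned long start, unsigned long end)
{
	bool drained = false;
	unsigned long pfn = start;

	while (pfn < end) {
		if (suspicious_free_page(pfn_to_page(pfn))) {
			if (drained)
				return -EBUSY;	/* do not IPI again */
			drain_all_pages();	/* one global IPI per attempt */
			drained = true;
			continue;		/* re-check the same pfn */
		}
		pfn++;
	}
	return 0;
}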
... are affected;
Acked-by: Mel Gorman
Thanks
--
Mel Gorman
SUSE Labs
> >>+ lru_add_drain_all();
> >>+ drain_all_pages();
> >>+ if (WARN_ON(test_pages_isolated(start, end)))
> >>+ ret = -EBUSY;
> >>+ }
>
> On Tue, 10 Jan 2012 15:16:13 +0100, Mel Gorman wrote:
> >Another global IPI seems overkill.
> -	for_each_zone_zonelist(zone, z, zonelist, high_zoneidx)
> - wakeup_kswapd(zone, order, classzone_idx);
> -}
> -
> static inline int
> gfp_to_alloc_flags(gfp_t gfp_mask)
> {
> @@ -5917,7 +5958,7 @@ int alloc_contig_range(unsigned long start, unsigned
> long end,
>
>
> 	     page += pageblock_nr_pages) {
> + int mt = get_pageblock_migratetype(page);
> + if (mt != MIGRATE_ISOLATE && !is_migrate_cma(mt))
> + set_pageblock_migratetype(page,
> + MIGRATE_MOVABLE);
> + }
> }
>
> return 1 << order;
> @@ -5599,8 +5635,8 @@ __count_immobile_pages(struct zone *zone, struct page
> *page, int count)
>*/
> if (zone_idx(zone) == ZONE_MOVABLE)
> return true;
> -
> - if (get_pageblock_migratetype(page) == MIGRATE_MOVABLE)
> + if (get_pageblock_migratetype(page) == MIGRATE_MOVABLE ||
> + is_pageblock_cma(page))
> return true;
>
> pfn = page_to_pfn(page);
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index 8fd603b..9fb1789 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -613,6 +613,7 @@ static char * const migratetype_names[MIGRATE_TYPES] = {
> "Reclaimable",
> "Movable",
> "Reserve",
> + "Cma",
> "Isolate",
> };
>
> --
> 1.7.1.569.g6f426
>
--
Mel Gorman
SUSE Labs
> +	if (!outer_end) {
> + ret = -EBUSY;
> + goto done;
> + }
> + outer_end += outer_start;
> +
> + /* Free head and tail (if any) */
> + if (start != outer_start)
> + free_contig_range(outer_start, start - outer_start);
> 	cc->migrate_pfn = low_pfn;
>
> trace_mm_compaction_isolate_migratepages(nr_scanned, nr_isolated);
>
> + return low_pfn;
> +}
> +
> +/*
> + * Isolate all pages that can be migrated from the block pointed to by
> + * the migrate scanner within compact_control.
On Mon, Dec 12, 2011 at 05:46:13PM +0100, Michal Nazarewicz wrote:
> On Mon, 12 Dec 2011 17:30:52 +0100, Mel Gorman wrote:
>
> >On Mon, Dec 12, 2011 at 04:22:39PM +0100, Michal Nazarewicz wrote:
> >>>
> >>>
> >>>>+	if (!pfn_valid(low_pfn))
> Abort on invalid or non-free page, but continue as usual if
> freelist is provided.
>
Ok, I think you should be able to do that by not calling split_free_page
or adding to the list if !freelist with a comment explaining why the
pages are left on the buddy lists for the caller to figure out. Bail ...
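If I read that right, the loop body would take a shape like this (sketch only; split_free_page() and the mm-internal page_order() already exist, the !freelist branch is the new part):

unsigned long isolated, i;

if (!freelist) {
	/*
	 * No freelist supplied: only count the free pages and leave
	 * them on the buddy lists for the caller to figure out.
	 */
	isolated = 1UL << page_order(page);
} else {
	/* Take the pages off the buddy lists and collect them. */
	isolated = split_free_page(page);
	for (i = 0; i < isolated; i++) {
		list_add(&page->lru, freelist);
		page++;
	}
}
total_isolated += isolated;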
> >>+		page += isolated;
> >>+ else
> >>+ page = pfn_to_page(pfn);
> >>}
>
> On Mon, 12 Dec 2011 15:19:53 +0100, Mel Gorman wrote:
> >Is this necessary?
> >
> >We are isolating pages, the largest of which ...
On Mon, Dec 12, 2011 at 03:41:04PM +0100, Michal Nazarewicz wrote:
> On Mon, 12 Dec 2011 15:29:07 +0100, Mel Gorman wrote:
>
> >On Fri, Nov 18, 2011 at 05:43:11PM +0100, Marek Szyprowski wrote:
> >>From: Michal Nazarewicz
> >>
> >>This commit exports some of the functions from compaction.c ...
> >>+* ... no longer matches the desired migrate
> >>+* type.
> >>+*/
> >>+ if (get_pageblock_migratetype(page) == MIGRATE_ISOLATE)
> >>+ set_page_private(page, MIGRATE_ISOLATE);
> >>+
>
With defines like COMPACTBLOCKS, I'm not
even sure compaction.c can compile without CONFIG_COMPACTION because
of the vmstat stuff.
--
Mel Gorman
SUSE Labs
Implement zone_pfn_same_memmap() based on pfn & PAGE_SECTION_MASK and contain it
within mm/compaction.c.
That said, everywhere else managed to avoid checks like this by always
scanning in units of pageblocks. Maybe this should be structured
the same way to guarantee pfn_valid is called at least once per pageblock.
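i.e. something like this (illustrative; assumes start_pfn is pageblock-aligned, with the per-page work reduced to a comment):

static void scan_range_by_pageblock(unsigned long start_pfn,
				    unsigned long end_pfn)
{
	unsigned long pfn, block_end, p;

	for (pfn = start_pfn; pfn < end_pfn; pfn += pageblock_nr_pages) {
		/* One pfn_valid() check per pageblock... */
		if (!pfn_valid(pfn))
			continue;

		block_end = min(pfn + pageblock_nr_pages, end_pfn);
		for (p = pfn; p < block_end; p++) {
			/* ...then only the cheap pfn_valid_within()
			 * (a no-op without CONFIG_HOLES_IN_ZONE)
			 * inside the block. */
			if (!pfn_valid_within(p))
				continue;
			/* process pfn_to_page(p) here */
		}
	}
}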
>
> + return low_pfn;
> +}
> +
> +/*
> + * Isolate all pages that can be migrated from the block pointed to by
> + * the migrate scanner within compact_control.
> + */
> +static isolate_migrate_t isolate_migratepages(struct zone *zone,
> +					struct compact_control *cc)
The per-CPU lists are already drained when freeing
pages, but it should be unnecessary to call it again. I'd go as far
as to say that it would be preferable to drain the per-CPU lists after
you set pageblocks MIGRATE_ISOLATE. The IPIs also have overhead but it
will be incurred for the rare rather than the common case.
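In other words, roughly (sketch; the three-argument start_isolate_page_range() is from this series, and pfn_align_to_maxpage_up() is assumed by symmetry with the _down variant quoted elsewhere in the patch):

int ret;

/* Isolate first, then drain once, so stray per-CPU pages belonging
 * to the range cannot be reallocated behind our back. */
ret = start_isolate_page_range(pfn_align_to_maxpage_down(start),
			       pfn_align_to_maxpage_up(end),
			       MIGRATE_CMA);
if (ret)
	return ret;

drain_all_pages();	/* one global IPI, in the rare (CMA) path only */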
--
Mel Gorman
SUSE Labs
> as well? I sort of like the idea of making it static and removing it from
> the header file.
>
I see no problem with that. It'll be separate from split_page() but that
is not earth shattering.
Thanks.
--
Mel Gorman
SUSE Labs
> >> in range. Caller has to guarantee that all pages in range
> >> are in buddy system.
>
> On Tue, 18 Oct 2011 05:21:09 -0700, Mel Gorman wrote:
> > Straight away, I'm wondering why you didn't use
> > mm/compaction.c#isolate_freepages()
>
> Does ...
On Tue, Oct 18, 2011 at 10:26:37AM -0700, Michal Nazarewicz wrote:
> On Tue, 18 Oct 2011 05:21:09 -0700, Mel Gorman wrote:
>
> >At this point, I'm going to apologise for not reviewing this a long long
> >time ago.
> >
> >On Thu, Oct 06, 2011 at 03:54:42PM +0200, Marek Szyprowski wrote:
On Thu, Oct 06, 2011 at 03:54:46PM +0200, Marek Szyprowski wrote:
> The Contiguous Memory Allocator is a set of helper functions for DMA
> mapping framework that improves allocations of contiguous memory chunks.
>
> CMA grabs memory on system boot, marks it with CMA_MIGRATE_TYPE and
> gives back to the system. ...
On Thu, Oct 06, 2011 at 03:54:44PM +0200, Marek Szyprowski wrote:
> From: Michal Nazarewicz
>
> The MIGRATE_CMA migration type has two main characteristics:
> (i) only movable pages can be allocated from MIGRATE_CMA
> pageblocks and (ii) page allocator will never change migration
> type of MIGRATE_CMA pageblocks.
On Thu, Oct 06, 2011 at 03:54:43PM +0200, Marek Szyprowski wrote:
> From: Michal Nazarewicz
>
> This commit adds the alloc_contig_range() function which tries
> to allocate given range of pages. It tries to migrate all
> already allocated pages that fall in the range thus freeing them.
> Once all ...
At this point, I'm going to apologise for not reviewing this a long long
time ago.
On Thu, Oct 06, 2011 at 03:54:42PM +0200, Marek Szyprowski wrote:
> From: KAMEZAWA Hiroyuki
>
> This commit introduces alloc_contig_freed_pages() function
> which allocates (ie. removes from buddy system) free pages.
It also knows how to avoid
isolating so much memory as to put the system at risk of being livelocked,
and how to isolate pages from the LRU in batches, etc.
This is not a show-stopper as such but personally I would prefer that
the memory hotplug code be sharing code with compaction than CMA adding
a new ...
> pages are easily freed/moved aside to satisfy
> large contiguous allocations.
>
Relatively handy to do something like this. It can also be somewhat
constrained by doing something similar to MIGRATE_ISOLATE to have
contiguous regions of memory in a zone unusable by non-movable
allocations. ...