On Tue, Nov 10, 2020 at 08:32:40PM +0100, David Hildenbrand wrote:
> commit 6471384af2a6 ("mm: security: introduce init_on_alloc=1 and
> init_on_free=1 boot options") resulted, with init_on_alloc=1, in all
> pages leaving the buddy via alloc_pages() and friends being
> initialized/cleared/zeroed on allocation.
> 
> However, the same logic is currently not applied to
> alloc_contig_pages(): allocated pages leaving the buddy aren't cleared
> with init_on_alloc=1 and init_on_free=0. Let's also properly clear
> pages on that allocation path and add support for __GFP_ZERO.
> 
> With this change, we will see double clearing of pages in some
> cases. One example is gigantic pages (either allocated via CMA, or
> allocated dynamically via alloc_contig_pages()) - which is the right
> thing to do (and to be optimized outside of the buddy in the callers) as
> discussed in:
>   https://lkml.kernel.org/r/[email protected]
> 
> This change implies that with init_on_alloc=1
> - All CMA allocations will be cleared
> - Gigantic pages allocated via alloc_contig_pages() will be cleared
> - virtio-mem memory to be unplugged will be cleared. While this is
>   suboptimal, it's similar to the handling in memory balloon drivers,
>   where all pages to be inflated will get cleared as well.
> 
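With __GFP_ZERO wired up, I assume a caller wanting zeroed contiguous
memory could then do something like this (untested sketch, the
surrounding caller code is made up):

	struct page *page;

	/* Request nr_pages contiguous pages, cleared by the allocator. */
	page = alloc_contig_pages(nr_pages, GFP_KERNEL | __GFP_ZERO,
				  numa_node_id(), NULL);
	if (!page)
		return -ENOMEM;

	/* ... use the zeroed range ... */

	free_contig_range(page_to_pfn(page), nr_pages);
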
> Cc: Andrew Morton <[email protected]>
> Cc: Alexander Potapenko <[email protected]>
> Cc: Michal Hocko <[email protected]>
> Cc: Mike Kravetz <[email protected]>
> Cc: Vlastimil Babka <[email protected]>
> Cc: Mike Rapoport <[email protected]>
> Cc: Oscar Salvador <[email protected]>
> Cc: Kees Cook <[email protected]>
> Cc: Michael Ellerman <[email protected]>
> Signed-off-by: David Hildenbrand <[email protected]>
> ---
>  mm/page_alloc.c | 24 +++++++++++++++++++++---
>  1 file changed, 21 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index eed4f4075b3c..0361b119b74e 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -8453,6 +8453,19 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
>       return 0;
>  }
>  
> +static void __alloc_contig_clear_range(unsigned long start_pfn,
> +                                    unsigned long end_pfn)

Maybe clear_contig_range()? (sketch below, after the function)

> +{
> +     unsigned long pfn;
> +
> +     for (pfn = start_pfn; pfn < end_pfn; pfn += MAX_ORDER_NR_PAGES) {
> +             cond_resched();
> +             kernel_init_free_pages(pfn_to_page(pfn),
> +                                    min_t(unsigned long, end_pfn - pfn,
> +                                          MAX_ORDER_NR_PAGES));
> +     }
> +}
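
FWIW, with the rename I have in mind it would read (same body, untested):

static void clear_contig_range(unsigned long start_pfn,
			       unsigned long end_pfn)
{
	unsigned long pfn;

	/* Clear in MAX_ORDER_NR_PAGES chunks so we can reschedule. */
	for (pfn = start_pfn; pfn < end_pfn; pfn += MAX_ORDER_NR_PAGES) {
		cond_resched();
		kernel_init_free_pages(pfn_to_page(pfn),
				       min_t(unsigned long, end_pfn - pfn,
					     MAX_ORDER_NR_PAGES));
	}
}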
> +
>  /**
>   * alloc_contig_range() -- tries to allocate given range of pages
>   * @start:   start PFN to allocate
> @@ -8461,7 +8474,8 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
>   *                   #MIGRATE_MOVABLE or #MIGRATE_CMA).  All pageblocks
>   *                   in range must have the same migratetype and it must
>   *                   be either of the two.
> - * @gfp_mask:        GFP mask to use during compaction
> + * @gfp_mask:        GFP mask to use during compaction. __GFP_ZERO clears allocated
> + *           pages.

"__GFP_ZERO is not passed to compaction but rather clears allocated pages"

>   *
>   * The PFN range does not have to be pageblock or MAX_ORDER_NR_PAGES
>   * aligned.  The PFN range must belong to a single zone.
> @@ -8488,7 +8502,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
>               .mode = MIGRATE_SYNC,
>               .ignore_skip_hint = true,
>               .no_set_skip_hint = true,
> -             .gfp_mask = current_gfp_context(gfp_mask),
> +             .gfp_mask = current_gfp_context(gfp_mask & ~__GFP_ZERO),
>               .alloc_contig = true,
>       };
>       INIT_LIST_HEAD(&cc.migratepages);
> @@ -8600,6 +8614,9 @@ int alloc_contig_range(unsigned long start, unsigned long end,
>       if (end != outer_end)
>               free_contig_range(end, outer_end - end);
>  
> +     if (!want_init_on_free() && want_init_on_alloc(gfp_mask))
> +             __alloc_contig_clear_range(start, end);
> +
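
For readers following along: the helpers this check relies on were
added by commit 6471384af2a6 in include/linux/mm.h and, if I'm reading
the tree right, look roughly like this:

static inline bool want_init_on_alloc(gfp_t flags)
{
	if (static_branch_unlikely(&init_on_alloc) &&
	    !page_poisoning_enabled())
		return true;
	return flags & __GFP_ZERO;
}

static inline bool want_init_on_free(void)
{
	return static_branch_unlikely(&init_on_free) &&
	       !page_poisoning_enabled();
}

So want_init_on_alloc(gfp_mask) covers both init_on_alloc=1 and
__GFP_ZERO, and the !want_init_on_free() check skips the extra pass
when the pages were already zeroed on free.
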
>  done:
>       undo_isolate_page_range(pfn_max_align_down(start),
>                               pfn_max_align_up(end), migratetype);
> @@ -8653,7 +8670,8 @@ static bool zone_spans_last_pfn(const struct zone *zone,
>  /**
>   * alloc_contig_pages() -- tries to find and allocate contiguous range of pages
>   * @nr_pages:        Number of contiguous pages to allocate
> - * @gfp_mask:        GFP mask to limit search and used during compaction
> + * @gfp_mask:        GFP mask to limit search and used during compaction. __GFP_ZERO
> + *           clears allocated pages.
>   * @nid:     Target node
>   * @nodemask:        Mask for other possible nodes
>   *
> -- 
> 2.26.2
> 

-- 
Sincerely yours,
Mike.
