On Fri 08-05-20 14:31:03, Johannes Weiner wrote:
[...]
> diff --git a/mm/memory.c b/mm/memory.c
> index 832ee914cbcf..93900b121b6e 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3125,9 +3125,20 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>                       page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma,
>                                                       vmf->address);
>                       if (page) {
> +                             int err;
> +
>                               __SetPageLocked(page);
>                               __SetPageSwapBacked(page);
>                               set_page_private(page, entry.val);
> +
> +                             /* Tell memcg to use swap ownership records */
> +                             SetPageSwapCache(page);
> +                             err = mem_cgroup_charge(page, vma->vm_mm,
> +                                                     GFP_KERNEL, false);
> +                             ClearPageSwapCache(page);
> +                             if (err)
> +                                     goto out_page;

err would be the return value from try_charge, and that can be -ENOMEM. Now
we almost never return -ENOMEM for a GFP_KERNEL single page charge, except
for async OOM handling (oom_disabled v1). So this needs to be translated to
VM_FAULT_OOM before returning from the fault handler.

I am not an expert on the swap code so I might have missed some subtle
issues, but the rest of the patch seems reasonable to me.
-- 
Michal Hocko
SUSE Labs
