On Tue, Jan 26, 2016 at 04:00:02PM -0500, Johannes Weiner wrote:

> @@ -683,17 +683,17 @@ int __set_page_dirty_buffers(struct page *page)
>               } while (bh != head);
>       }
>       /*
> -      * Use mem_group_begin_page_stat() to keep PageDirty synchronized with
> -      * per-memcg dirty page counters.
> +      * Lock out page->mem_cgroup migration to keep PageDirty
> +      * synchronized with per-memcg dirty page counters.
>        */
> -     memcg = mem_cgroup_begin_page_stat(page);
> +     memcg = lock_page_memcg(page);
>       newly_dirty = !TestSetPageDirty(page);
>       spin_unlock(&mapping->private_lock);
>  
>       if (newly_dirty)
>               __set_page_dirty(page, mapping, memcg, 1);

Do we really want to pass memcg down to __set_page_dirty() and then to
account_page_dirtied(), increasing stack/register usage even when the
memory cgroup is disabled? Maybe it would be better to make
mem_cgroup_update_page_stat() take a page instead of a memcg?
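
Something along these lines, as a rough sketch (the stat plumbing and
field layout are assumed from the current tree and may not match what
the final patch would look like):

/*
 * Sketch: mem_cgroup_update_page_stat() taking a page instead of a
 * memcg.  The caller holds lock_page_memcg(), so page->mem_cgroup is
 * stable here.  Callers such as account_page_dirtied() then need no
 * memcg argument at all, and nothing extra travels down the stack
 * when the memory controller is disabled (page->mem_cgroup is NULL
 * for uncharged pages, including the cgroup_disable=memory case).
 */
static inline void mem_cgroup_update_page_stat(struct page *page,
					       enum mem_cgroup_stat_index idx,
					       int val)
{
	if (page->mem_cgroup)
		this_cpu_add(page->mem_cgroup->stat->count[idx], val);
}

That way the memcg lookup collapses to a single pointer test inside
the helper instead of being threaded through every caller.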

Thanks,
Vladimir

>  
> -     mem_cgroup_end_page_stat(memcg);
> +     unlock_page_memcg(memcg);
>  
>       if (newly_dirty)
>               __mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
