mm: memcontrol: prepare move_account for removal of private page type counters
author	Johannes Weiner <hannes@cmpxchg.org>
	Wed, 3 Jun 2020 23:01:47 +0000 (16:01 -0700)
committer	Linus Torvalds <torvalds@linux-foundation.org>
	Thu, 4 Jun 2020 03:09:47 +0000 (20:09 -0700)
When memcg uses the generic vmstat counters, it doesn't need to do
anything at charging and uncharging time.  It does, however, need to
migrate counts when pages move to a different cgroup in move_account.

Prepare the move_account function for the arrival of NR_FILE_PAGES,
NR_ANON_MAPPED, NR_ANON_THPS etc. by having a branch for file pages and
a branch for anon pages, which can then be divided into sub-branches.
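
For illustration only (a sketch, not part of this patch: the counters
named here are introduced by later patches in the series, and the exact
sub-branch conditions there may differ), the branches are expected to
grow into something like:

	if (!PageAnon(page)) {
		/* file branch, with sub-branches for mapped, dirty, ... */
		__mod_lruvec_state(from_vec, NR_FILE_PAGES, -nr_pages);
		__mod_lruvec_state(to_vec, NR_FILE_PAGES, nr_pages);
	} else {
		/* anon branch, with sub-branches for mapped, THP, ... */
		__mod_lruvec_state(from_vec, NR_ANON_MAPPED, -nr_pages);
		__mod_lruvec_state(to_vec, NR_ANON_MAPPED, nr_pages);
	}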

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Alex Shi <alex.shi@linux.alibaba.com>
Reviewed-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Balbir Singh <bsingharora@gmail.com>
Link: http://lkml.kernel.org/r/20200508183105.225460-8-hannes@cmpxchg.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index ff45bef..f82ae37 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5434,7 +5434,6 @@ static int mem_cgroup_move_account(struct page *page,
        struct pglist_data *pgdat;
        unsigned int nr_pages = compound ? hpage_nr_pages(page) : 1;
        int ret;
-       bool anon;
 
        VM_BUG_ON(from == to);
        VM_BUG_ON_PAGE(PageLRU(page), page);
@@ -5452,25 +5451,27 @@ static int mem_cgroup_move_account(struct page *page,
        if (page->mem_cgroup != from)
                goto out_unlock;
 
-       anon = PageAnon(page);
-
        pgdat = page_pgdat(page);
        from_vec = mem_cgroup_lruvec(from, pgdat);
        to_vec = mem_cgroup_lruvec(to, pgdat);
 
        lock_page_memcg(page);
 
-       if (!anon && page_mapped(page)) {
-               __mod_lruvec_state(from_vec, NR_FILE_MAPPED, -nr_pages);
-               __mod_lruvec_state(to_vec, NR_FILE_MAPPED, nr_pages);
-       }
+       if (!PageAnon(page)) {
+               if (page_mapped(page)) {
+                       __mod_lruvec_state(from_vec, NR_FILE_MAPPED, -nr_pages);
+                       __mod_lruvec_state(to_vec, NR_FILE_MAPPED, nr_pages);
+               }
 
-       if (!anon && PageDirty(page)) {
-               struct address_space *mapping = page_mapping(page);
+               if (PageDirty(page)) {
+                       struct address_space *mapping = page_mapping(page);
 
-               if (mapping_cap_account_dirty(mapping)) {
-                       __mod_lruvec_state(from_vec, NR_FILE_DIRTY, -nr_pages);
-                       __mod_lruvec_state(to_vec, NR_FILE_DIRTY, nr_pages);
+                       if (mapping_cap_account_dirty(mapping)) {
+                               __mod_lruvec_state(from_vec, NR_FILE_DIRTY,
+                                                  -nr_pages);
+                               __mod_lruvec_state(to_vec, NR_FILE_DIRTY,
+                                                  nr_pages);
+                       }
                }
        }