mm: thp: no need to care about deferred split queue in memcg charge move path
author Wei Yang <richardw.yang@linux.intel.com>
Fri, 31 Jan 2020 06:11:20 +0000 (22:11 -0800)
committer Linus Torvalds <torvalds@linux-foundation.org>
Fri, 31 Jan 2020 18:30:36 +0000 (10:30 -0800)
If compound is true, the page is a PMD-mapped THP, which implies it is not
linked to any deferred split list.  So the first code chunk will never be
executed.

For the same reason, it would not be proper to add the page to a deferred
split list, so the second code chunk is not correct either.

Based on this, remove the deferred split list related code from both places.
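
As an illustration only (not part of this patch), the invariant relied on
above could have been written as an assertion at the point of the first
removed chunk in mem_cgroup_move_account(); VM_BUG_ON_PAGE() and
page_deferred_list() are existing helpers, but this placement is purely
hypothetical:

#ifdef CONFIG_TRANSPARENT_HUGEPAGE
	/*
	 * Sketch of the invariant: a PMD-mapped THP reaching the memcg
	 * charge move path has never been partially unmapped, so it
	 * cannot sit on any deferred split queue.  Illustrative only;
	 * the patch simply drops the dead code instead of asserting.
	 */
	VM_BUG_ON_PAGE(compound && !list_empty(page_deferred_list(page)), page);
#endif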

[yang.shi@linux.alibaba.com: better patch title]
Link: http://lkml.kernel.org/r/20200117233836.3434-1-richardw.yang@linux.intel.com
Fixes: 87eaceb3faa5 ("mm: thp: make deferred split shrinker memcg aware")
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Suggested-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Yang Shi <yang.shi@linux.alibaba.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: <stable@vger.kernel.org> [5.4+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/memcontrol.c

index 6c83cf4ed970b90e46b29d56a75ba20de4d580b9..27c231bf45657ce7811198d8973e46aa20191ee6 100644 (file)
@@ -5340,14 +5340,6 @@ static int mem_cgroup_move_account(struct page *page,
                __mod_lruvec_state(to_vec, NR_WRITEBACK, nr_pages);
        }
 
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-       if (compound && !list_empty(page_deferred_list(page))) {
-               spin_lock(&from->deferred_split_queue.split_queue_lock);
-               list_del_init(page_deferred_list(page));
-               from->deferred_split_queue.split_queue_len--;
-               spin_unlock(&from->deferred_split_queue.split_queue_lock);
-       }
-#endif
        /*
         * It is safe to change page->mem_cgroup here because the page
         * is referenced, charged, and isolated - we can't race with
@@ -5357,16 +5349,6 @@ static int mem_cgroup_move_account(struct page *page,
        /* caller should have done css_get */
        page->mem_cgroup = to;
 
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-       if (compound && list_empty(page_deferred_list(page))) {
-               spin_lock(&to->deferred_split_queue.split_queue_lock);
-               list_add_tail(page_deferred_list(page),
-                             &to->deferred_split_queue.split_queue);
-               to->deferred_split_queue.split_queue_len++;
-               spin_unlock(&to->deferred_split_queue.split_queue_lock);
-       }
-#endif
-
        spin_unlock_irqrestore(&from->move_lock, flags);
 
        ret = 0;