mm/mempolicy: fix NUMA_INTERLEAVE_HIT counter
author Andrey Ryabinin <aryabinin@virtuozzo.com>
Fri, 13 Oct 2017 22:57:43 +0000 (15:57 -0700)
committer Linus Torvalds <torvalds@linux-foundation.org>
Fri, 13 Oct 2017 23:18:32 +0000 (16:18 -0700)
Commit 3a321d2a3dde ("mm: change the call sites of numa statistics
items") separated the NUMA counters from the zone counters into their
own enum, but the NUMA_INTERLEAVE_HIT call site wasn't updated to use
the new interface.  alloc_page_interleave() still passes
NUMA_INTERLEAVE_HIT to inc_zone_page_state(), which interprets the raw
enum value as a zone_stat_item, so it actually increments
NR_ZONE_INACTIVE_FILE (which happens to share that value) instead of
NUMA_INTERLEAVE_HIT.
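
For context, a simplified sketch of the two enum layouts involved
(paraphrased from include/linux/mmzone.h of that era; only the
positions relevant to the bug are shown):

	enum zone_stat_item {
		NR_FREE_PAGES,		/* 0 */
		NR_ZONE_LRU_BASE,	/* 1 */
		NR_ZONE_INACTIVE_ANON = NR_ZONE_LRU_BASE,
		NR_ZONE_ACTIVE_ANON,	/* 2 */
		NR_ZONE_INACTIVE_FILE,	/* 3 */
		/* ... */
	};

	enum numa_stat_item {
		NUMA_HIT,		/* 0 */
		NUMA_MISS,		/* 1 */
		NUMA_FOREIGN,		/* 2 */
		NUMA_INTERLEAVE_HIT,	/* 3: same raw value as NR_ZONE_INACTIVE_FILE */
		NUMA_LOCAL,
		NUMA_OTHER,
		NR_VM_NUMA_STAT_ITEMS
	};

inc_zone_page_state() only ever sees the raw value 3, so the
interleave hit lands in the NR_ZONE_INACTIVE_FILE slot.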

Fix this by using the __inc_numa_state() interface to increment
NUMA_INTERLEAVE_HIT.  __inc_numa_state() updates the counter with raw
per-cpu operations, so the call is wrapped in a
preempt_disable()/preempt_enable() pair.
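
For reference, a paraphrased sketch of __inc_numa_state() (based on
the mm/vmstat.c implementation of that era; field and threshold names
may differ slightly), which shows why the caller must disable
preemption:

	void __inc_numa_state(struct zone *zone, enum numa_stat_item item)
	{
		struct per_cpu_pageset __percpu *pcp = zone->pageset;
		u16 __percpu *p = pcp->vm_numa_stat_diff + item;
		u16 v;

		/* __this_cpu_inc_return() is the non-preemption-safe
		 * per-cpu increment; it relies on the caller having
		 * already disabled preemption. */
		v = __this_cpu_inc_return(*p);

		if (unlikely(v > NUMA_STATS_THRESHOLD)) {
			/* Fold the per-cpu diff into the global counter. */
			zone_numa_state_add(v, zone, item);
			__this_cpu_write(*p, 0);
		}
	}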

Link: http://lkml.kernel.org/r/20171003191003.8573-1-aryabinin@virtuozzo.com
Fixes: 3a321d2a3dde ("mm: change the call sites of numa statistics items")
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Kemi Wang <kemi.wang@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 006ba62..a2af6d5 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1920,8 +1920,11 @@ static struct page *alloc_page_interleave(gfp_t gfp, unsigned order,
        struct page *page;
 
        page = __alloc_pages(gfp, order, nid);
-       if (page && page_to_nid(page) == nid)
-               inc_zone_page_state(page, NUMA_INTERLEAVE_HIT);
+       if (page && page_to_nid(page) == nid) {
+               preempt_disable();
+               __inc_numa_state(page_zone(page), NUMA_INTERLEAVE_HIT);
+               preempt_enable();
+       }
        return page;
 }