From: Andrey Ryabinin
Date: Fri, 13 Oct 2017 22:57:43 +0000 (-0700)
Subject: mm/mempolicy: fix NUMA_INTERLEAVE_HIT counter
X-Git-Tag: v4.14-rc5~13^2~13
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=de55c8b251974247edda38e952da8e8dd71683ec;p=platform%2Fkernel%2Flinux-rpi.git

mm/mempolicy: fix NUMA_INTERLEAVE_HIT counter

Commit 3a321d2a3dde ("mm: change the call sites of numa statistics items")
separated NUMA counters from zone counters, but the NUMA_INTERLEAVE_HIT
call site wasn't updated to use the new interface.  So
alloc_page_interleave() actually increments NR_ZONE_INACTIVE_FILE instead
of NUMA_INTERLEAVE_HIT.

Fix this by using the __inc_numa_state() interface to increment
NUMA_INTERLEAVE_HIT.

Link: http://lkml.kernel.org/r/20171003191003.8573-1-aryabinin@virtuozzo.com
Fixes: 3a321d2a3dde ("mm: change the call sites of numa statistics items")
Signed-off-by: Andrey Ryabinin
Acked-by: Mel Gorman
Cc: Kemi Wang
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 006ba62..a2af6d5 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1920,8 +1920,11 @@ static struct page *alloc_page_interleave(gfp_t gfp, unsigned order,
 	struct page *page;
 
 	page = __alloc_pages(gfp, order, nid);
-	if (page && page_to_nid(page) == nid)
-		inc_zone_page_state(page, NUMA_INTERLEAVE_HIT);
+	if (page && page_to_nid(page) == nid) {
+		preempt_disable();
+		__inc_numa_state(page_zone(page), NUMA_INTERLEAVE_HIT);
+		preempt_enable();
+	}
 	return page;
 }
 
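
Background note (not part of the patch): the new preempt_disable()/preempt_enable()
pair is there because the NUMA counters are kept as cheap per-CPU diffs that are
only folded into the zone-wide total once they cross a threshold, and the
underscore-prefixed __inc_numa_state() helper performs a non-atomic per-CPU update,
so the caller must prevent migration to another CPU around it.  The userspace
sketch below is only a loose analogue of that counter-folding pattern: a
thread-local diff stands in for the per-CPU diff and an atomic stands in for the
zone-wide counter.  All names, the threshold value, and the structure are made up
for the illustration; this is not kernel code.

/*
 * Illustrative userspace analogue of the per-CPU "diff + fold" counter
 * pattern. Each thread bumps a private diff and only touches the shared
 * total once the diff crosses a threshold.
 *
 * Build: cc -std=c11 -pthread sketch.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define DIFF_THRESHOLD	32	/* demo value, not the kernel's threshold */
#define ITERATIONS	1000
#define NTHREADS	4

static atomic_long total_interleave_hit;	/* shared: analogue of the zone-wide counter */
static _Thread_local unsigned int local_diff;	/* private: analogue of the per-CPU diff */

/* Bump the cheap private counter; touch the shared total only when the
 * diff exceeds the threshold (the idea behind __inc_numa_state()). */
static void inc_interleave_hit(void)
{
	if (++local_diff > DIFF_THRESHOLD) {
		atomic_fetch_add(&total_interleave_hit, local_diff);
		local_diff = 0;
	}
}

static void *worker(void *arg)
{
	(void)arg;
	for (int i = 0; i < ITERATIONS; i++)
		inc_interleave_hit();
	/* Flush whatever diff is left when the thread exits. */
	atomic_fetch_add(&total_interleave_hit, local_diff);
	local_diff = 0;
	return NULL;
}

int main(void)
{
	pthread_t threads[NTHREADS];

	for (int i = 0; i < NTHREADS; i++)
		pthread_create(&threads[i], NULL, worker, NULL);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(threads[i], NULL);

	/* Expect NTHREADS * ITERATIONS once every thread has flushed. */
	printf("interleave hits: %ld\n", (long)atomic_load(&total_interleave_hit));
	return 0;
}

In the kernel the "private" counter is a per-CPU field rather than thread-local
storage, which is why the caller must disable preemption between selecting the
per-CPU slot and updating it; the sketch sidesteps that by giving each thread
its own variable.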