mm/mempolicy: fix NUMA_INTERLEAVE_HIT counter

Commit 3a321d2a3d ("mm: change the call sites of numa statistics
items") separated NUMA counters from zone counters, but the
NUMA_INTERLEAVE_HIT call site wasn't updated to use the new interface.
So alloc_page_interleave() actually increments NR_ZONE_INACTIVE_FILE
instead of NUMA_INTERLEAVE_HIT.
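
To make the mix-up concrete: after the split the NUMA items live in their
own zero-based enum, so NUMA_INTERLEAVE_HIT is just the integer 3, and when
that raw value reaches a zone-stat interface it indexes
NR_ZONE_INACTIVE_FILE.  A minimal userspace sketch of the collision (enums
abbreviated; ordering assumed from the 4.14-era headers, not part of the
patch):

  #include <stdio.h>

  enum numa_stat_item {            /* separate enum since 3a321d2a3d */
          NUMA_HIT,                /* 0 */
          NUMA_MISS,               /* 1 */
          NUMA_FOREIGN,            /* 2 */
          NUMA_INTERLEAVE_HIT,     /* 3 */
          NUMA_LOCAL,
          NUMA_OTHER,
  };

  enum zone_stat_item {            /* first few zone counters after the split */
          NR_FREE_PAGES,           /* 0 */
          NR_ZONE_INACTIVE_ANON,   /* 1 */
          NR_ZONE_ACTIVE_ANON,     /* 2 */
          NR_ZONE_INACTIVE_FILE,   /* 3 -- same raw value as NUMA_INTERLEAVE_HIT */
          NR_ZONE_ACTIVE_FILE,     /* 4 */
  };

  int main(void)
  {
          /* inc_zone_page_state() takes an enum zone_stat_item, so the stale
           * call site silently bumps whichever zone counter shares the index. */
          printf("NUMA_INTERLEAVE_HIT=%d NR_ZONE_INACTIVE_FILE=%d\n",
                 NUMA_INTERLEAVE_HIT, NR_ZONE_INACTIVE_FILE);
          return 0;
  }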

Fix this by using the __inc_numa_state() interface to increment
NUMA_INTERLEAVE_HIT.
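
For context, the new NUMA counters are per-CPU and __inc_numa_state()
updates them with the non-atomic __this_cpu_* helpers, which assume the
caller cannot migrate; that is why the hunk below brackets the call with
preempt_disable()/preempt_enable().  A minimal sketch of the resulting call
pattern (the numa_interleave_hit() wrapper name is purely illustrative, not
something the patch adds):

  static inline void numa_interleave_hit(struct page *page)
  {
          preempt_disable();      /* pin this CPU for the __this_cpu_* update */
          __inc_numa_state(page_zone(page), NUMA_INTERLEAVE_HIT);
          preempt_enable();
  }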

Link: http://lkml.kernel.org/r/20171003191003.8573-1-aryabinin@virtuozzo.com
Fixes: 3a321d2a3d ("mm: change the call sites of numa statistics items")
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Kemi Wang <kemi.wang@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

parent 8a1ac5dc7b
commit de55c8b251
1 changed file with 5 additions and 2 deletions

--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1920,8 +1920,11 @@ static struct page *alloc_page_interleave(gfp_t gfp, unsigned order,
 	struct page *page;
 
 	page = __alloc_pages(gfp, order, nid);
-	if (page && page_to_nid(page) == nid)
-		inc_zone_page_state(page, NUMA_INTERLEAVE_HIT);
+	if (page && page_to_nid(page) == nid) {
+		preempt_disable();
+		__inc_numa_state(page_zone(page), NUMA_INTERLEAVE_HIT);
+		preempt_enable();
+	}
 	return page;
 }
 