mm: numa: disable change protection for vma(VM_HUGETLB)
Currently when a process accesses a hugetlb range protected with PROTNONE, unexpected COWs are triggered, which finally puts the hugetlb subsystem into a broken/uncontrollable state, where for example h->resv_huge_pages is subtracted too much and wraps around to a very large number, and the free hugepage pool is no longer maintainable.

This patch simply stops changing protection for vma(VM_HUGETLB) to fix the problem. And this also allows us to avoid useless overhead of minor faults.

Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Suggested-by: Mel Gorman <mgorman@suse.de>
Cc: Hugh Dickins <hughd@google.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: David Rientjes <rientjes@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
commit 6b79c57b92
parent ce66b032ad
@@ -2165,8 +2165,10 @@ void task_numa_work(struct callback_head *work)
 		vma = mm->mmap;
 	}
 	for (; vma; vma = vma->vm_next) {
-		if (!vma_migratable(vma) || !vma_policy_mof(vma))
+		if (!vma_migratable(vma) || !vma_policy_mof(vma) ||
+			is_vm_hugetlb_page(vma)) {
 			continue;
+		}
 
 		/*
 		 * Shared library pages mapped by multiple processes are not
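For reference, a sketch of how the loop in task_numa_work() reads once this hunk is applied (taken directly from the diff above; vma_migratable(), vma_policy_mof() and is_vm_hugetlb_page() are kernel-internal helpers, so the fragment is illustrative rather than a standalone compilable unit):

	for (; vma; vma = vma->vm_next) {
		/*
		 * Skip VMAs that NUMA balancing cannot or should not touch.
		 * Hugetlb mappings are now skipped as well, so no PROTNONE
		 * protection change (and hence no hinting fault) is ever
		 * applied to them.
		 */
		if (!vma_migratable(vma) || !vma_policy_mof(vma) ||
			is_vm_hugetlb_page(vma)) {
			continue;
		}
		...
	}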