powerpc/mm/tlbflush: update the mmu_gather page size while iterating address range

This patch makes sure we update the mmu_gather page size even if we are
requesting a fullmm flush. This avoids triggering a VM_WARN_ON in code
paths such as __tlb_remove_page_size(), which explicitly check that the page
size of the range being removed matches the mmu_gather page size.

Fixes: 5a6099346c ("powerpc/64s/radix: tlb do not flush on page size when fullmm")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Acked-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V 2018-08-09 19:06:59 +05:30 committed by Michael Ellerman
parent fce278af81
commit 0b6aa1a20a
1 changed file with 2 additions and 4 deletions

arch/powerpc/include/asm/tlb.h

@@ -49,13 +49,11 @@ static inline void __tlb_remove_tlb_entry(struct mmu_gather *tlb, pte_t *ptep,
 static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 						     unsigned int page_size)
 {
-	if (tlb->fullmm)
-		return;
-
 	if (!tlb->page_size)
 		tlb->page_size = page_size;
 	else if (tlb->page_size != page_size) {
-		tlb_flush_mmu(tlb);
+		if (!tlb->fullmm)
+			tlb_flush_mmu(tlb);
 		/*
 		 * update the page size after flush for the new
 		 * mmu_gather.
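
For illustration, below is a minimal userspace C model of the page-size tracking (a sketch, not kernel code: the struct mmu_gather, tlb_flush_mmu() and __tlb_remove_page_size() here are simplified stand-ins, and the 64K/2M page sizes are just example values). It shows why the page size must still be recorded on a fullmm teardown: only the flush is skipped, so a mismatch check modelled on the VM_WARN_ON in __tlb_remove_page_size() stays quiet when a huge page follows base pages.

/*
 * Minimal userspace model of the mmu_gather page-size tracking.
 * Illustrative sketch only, not kernel code.
 */
#include <stdbool.h>
#include <stdio.h>

struct mmu_gather {
	bool fullmm;			/* tearing down the whole mm? */
	unsigned int page_size;		/* 0 means "not set yet" */
};

/* Stand-in for tlb_flush_mmu(): just report that a flush happened. */
static void tlb_flush_mmu(struct mmu_gather *tlb)
{
	printf("flush batched pages (page_size=%u)\n", tlb->page_size);
}

/*
 * Post-patch behaviour: the page size is always recorded; only the
 * flush is skipped for a fullmm teardown.
 */
static void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
					      unsigned int page_size)
{
	if (!tlb->page_size) {
		tlb->page_size = page_size;
	} else if (tlb->page_size != page_size) {
		if (!tlb->fullmm)
			tlb_flush_mmu(tlb);
		/* update the page size for the new batch */
		tlb->page_size = page_size;
	}
}

/*
 * Stand-in for the check in __tlb_remove_page_size(): warn if the
 * caller's page size does not match what the gather has recorded.
 */
static void __tlb_remove_page_size(struct mmu_gather *tlb,
				   unsigned int page_size)
{
	if (tlb->page_size != page_size)
		fprintf(stderr, "WARN: page size mismatch: %u != %u\n",
			tlb->page_size, page_size);
}

int main(void)
{
	struct mmu_gather tlb = { .fullmm = true, .page_size = 0 };

	/* fullmm teardown crossing from 64K base pages to a 2M huge page */
	tlb_remove_check_page_size_change(&tlb, 64 * 1024);
	__tlb_remove_page_size(&tlb, 64 * 1024);

	/*
	 * With the pre-patch early return for fullmm, page_size would
	 * still be 64K here and the check below would warn.
	 */
	tlb_remove_check_page_size_change(&tlb, 2 * 1024 * 1024);
	__tlb_remove_page_size(&tlb, 2 * 1024 * 1024);

	return 0;
}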