mm: release the spinlock on zap_pte_range

In our testing (camera recording), Miguel and Wei found that
unmap_page_range() can easily take more than 6ms with preemption
disabled.  The reason is that it holds the page table spinlock for the
entire 512-page operation within a PMD.  6.2ms is far from trivial for
the user experience: if an RT task cannot run during that window, it
can cause dropped frames or audio glitches.

I benchmarked it by adding trace_printk() hooks between
pte_offset_map_lock() and pte_unmap_unlock() in zap_pte_range().  The
test device is a 2018 premium mobile device.
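
For illustration, the instrumentation looked roughly like the
following; the trace_printk() format strings and exact placement are
mine, only the pte_offset_map_lock()/pte_unmap_unlock() pair is the
actual locking in zap_pte_range():

	start_pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
	trace_printk("zap_pte_range start: addr=%lx end=%lx\n", addr, end);
	/* ... per-PTE unmap work for up to 512 entries, all under ptl ... */
	trace_printk("zap_pte_range end: addr=%lx\n", addr);
	pte_unmap_unlock(start_pte, ptl);

The delta between the two timestamps is how long the page table
spinlock is held with preemption disabled.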

I can rather easily get a 2ms delay when releasing 2M (i.e., 512
pages) while the task runs on a little core, even without any IPI or
LRU lock contention.  That is already too heavy.

If I remove activate_page(), 35-40% of zap_pte_range()'s overhead is
gone, so most of the overhead (about 0.7ms) comes from activate_page()
via mark_page_accessed().  Thus, if there is LRU contention, that
0.7ms could accumulate up to several ms.
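
For context, mark_page_accessed() reaches activate_page() for a page
that is referenced but not yet active, and activate_page() is what can
hit the LRU lock when its per-CPU pagevec drains.  A simplified sketch
of the mm/swap.c code of that era (compound-page and idle-page
handling elided):

	void mark_page_accessed(struct page *page)
	{
		if (!PageActive(page) && !PageUnevictable(page) &&
				PageReferenced(page)) {
			if (PageLRU(page))
				/* may take the LRU lock on pagevec drain */
				activate_page(page);
			else
				__lru_cache_activate_page(page);
			ClearPageReferenced(page);
		} else if (!PageReferenced(page)) {
			SetPageReferenced(page);
		}
	}

zap_pte_range() calls mark_page_accessed() for young ptes, so all of
this runs while the page table spinlock is held.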

So this patch adds a check for need_resched() in the loop, and a
preemption point if necessary.

Link: http://lkml.kernel.org/r/20190731061440.GC155569@google.com
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reported-by: Miguel de Dios <migueldedios@google.com>
Reported-by: Wei Wang <wvw@google.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

@@ -1026,6 +1026,9 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 		if (pte_none(ptent))
 			continue;
 
+		if (need_resched())
+			break;
+
 		if (pte_present(ptent)) {
 			struct page *page;
 
@@ -1123,8 +1126,11 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 	if (force_flush) {
 		force_flush = 0;
 		tlb_flush_mmu(tlb);
-		if (addr != end)
-			goto again;
 	}
 
+	if (addr != end) {
+		cond_resched();
+		goto again;
+	}
+
 	return addr;