mm/hmm: do not erase snapshot when a range is invalidated
Users of HMM might be using the snapshot information to do preparatory steps, like dma mapping pages to a device, before checking for invalidation through hmm_vma_range_done(), so do not erase that information and assume users will do the right thing.

Link: http://lkml.kernel.org/r/20190403193318.16478-4-jglisse@redhat.com
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Balbir Singh <bsingharora@gmail.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: Ira Weiny <ira.weiny@intel.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Souptick Joarder <jrdr.linux@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 704f3f2cf6
commit 9f454612f6

mm/hmm.c: 6 deletions
@@ -174,16 +174,10 @@ static int hmm_invalidate_range(struct hmm *hmm, bool device,
 	spin_lock(&hmm->lock);
 	list_for_each_entry(range, &hmm->ranges, list) {
-		unsigned long addr, idx, npages;
-
 		if (update->end < range->start || update->start >= range->end)
 			continue;
 
 		range->valid = false;
-		addr = max(update->start, range->start);
-		idx = (addr - range->start) >> PAGE_SHIFT;
-		npages = (min(range->end, update->end) - addr) >> PAGE_SHIFT;
-		memset(&range->pfns[idx], 0, sizeof(*range->pfns) * npages);
 	}
 	spin_unlock(&hmm->lock);
 
 