hugetlbfs: flush before unlock on move_hugetlb_page_tables()
We must flush the TLB before releasing i_mmap_rwsem to avoid the
potential reuse of an unshared PMD's page-table page. This ordering is
not respected in move_hugetlb_page_tables(): the last reference on the
page table can therefore be dropped before the TLB flush has taken
place.
Prevent it by reordering the operations and flushing the TLB before
releasing i_mmap_rwsem.
Fixes: 550a7d60bd ("mm, hugepages: add mremap() support for hugepage backed vma")
Signed-off-by: Nadav Amit <namit@vmware.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Mina Almasry <almasrymina@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
commit 13e4ad2ce8
parent a4a118f2ee
@@ -4919,9 +4919,9 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
 
 		move_huge_pte(vma, old_addr, new_addr, src_pte);
 	}
-	i_mmap_unlock_write(mapping);
 	flush_tlb_range(vma, old_end - len, old_end);
 	mmu_notifier_invalidate_range_end(&range);
+	i_mmap_unlock_write(mapping);
 
 	return len + old_addr - old_end;
 }
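For clarity, a minimal annotated sketch of the function's epilogue after
this change (a simplified excerpt mirroring the diff above; the variable
setup and the copy loop are omitted, and the comments are ours, not from
the source):

	/*
	 * Flush while i_mmap_rwsem is still held: the unshare path
	 * (huge_pmd_unshare()) may have dropped the last reference on a
	 * page-table page, and that page must not be reused while stale
	 * TLB entries can still reference it.
	 */
	flush_tlb_range(vma, old_end - len, old_end);
	mmu_notifier_invalidate_range_end(&range);
	/* Only now is it safe to let the page-table page be reused. */
	i_mmap_unlock_write(mapping);

	return len + old_addr - old_end;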