flush_tlb_range() needs ->page_table_lock when ->mmap_sem is not held
All other callers already hold either ->mmap_sem (exclusive) or ->page_table_lock. And we need it because some page table flushing instances do work explicitly with page tables. See e.g. arch/powerpc/mm/tlb_hash32.c, flush_tlb_range() and flush_range() in there. The same goes for uml, with a lot more extensive playing with page tables.

Almost all callers are actually fine - flush_tlb_range() may have no need to bother playing with page tables, but it can do so safely; again, this caller is the sole exception - everything else either has exclusive ->mmap_sem on the mm in question, or mm->page_table_lock is held.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
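As a rough sketch of the locking rule this commit enforces (a simplified, hypothetical helper, not the real __unmap_hugepage_range() body): a caller that cannot rely on an exclusive ->mmap_sem must keep mm->page_table_lock held across the flush, because some flush_tlb_range() implementations walk page tables themselves.

/*
 * Minimal sketch of the rule described above; the function name and the
 * elided body are illustrative only.  When ->mmap_sem is not held
 * exclusively, flush_tlb_range() must run under mm->page_table_lock,
 * since implementations such as the powerpc hash32 and uml ones cited
 * in the commit message play with page tables.
 */
static void unmap_range_sketch(struct mm_struct *mm,
			       struct vm_area_struct *vma,
			       unsigned long start, unsigned long end)
{
	spin_lock(&mm->page_table_lock);

	/* ... clear the page table entries for [start, end) ... */

	/* flush while the lock is still held, then drop it */
	flush_tlb_range(vma, start, end);
	spin_unlock(&mm->page_table_lock);
}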
parent 835ee7978c
commit cd2934a3b3
@@ -2277,8 +2277,8 @@ void __unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
 		set_page_dirty(page);
 		list_add(&page->lru, &page_list);
 	}
-	spin_unlock(&mm->page_table_lock);
 	flush_tlb_range(vma, start, end);
+	spin_unlock(&mm->page_table_lock);
 	mmu_notifier_invalidate_range_end(mm, start, end);
 	list_for_each_entry_safe(page, tmp, &page_list, lru) {
 		page_remove_rmap(page);