Mirror of https://github.com/torvalds/linux.git (synced 2024-11-24 13:11:40 +00:00)

Commit cafdd8ba08
try_to_unmap_cluster() does:

    for (pte = pte_offset_map(pmd, address);
         address < end;
         pte++, address += PAGE_SIZE) {
            ...
    }
    pte_unmap(pte);

It may take a little staring to notice, but pte can actually fall off the end of the pte page in this iteration, which makes life difficult for kmap_atomic() and for users not expecting it to BUG(). Of course, we're somewhat lucky in that arithmetic elsewhere in the function guarantees that at least one iteration is made, lest this force larger rearrangements to be made.

This issue and patch also apply to non-mm mainline and, with trivial adjustments, to at least two related kernels.

Discovered during internal testing at Oracle.

Signed-off-by: William Irwin <wli@holomorphy.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
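The off-by-one can be replayed in plain user-space C. The sketch below is not kernel code: `PAGE_SIZE`, `PTRS_PER_PTE`, and the `pte_page` array are hypothetical stand-ins for one pte page, chosen only to show that when the cursor and the address advance in lockstep, the cursor ends one past the last entry when the loop exits, so handing it to something like pte_unmap()/kmap_atomic() touches memory outside the page.

```c
#include <stddef.h>

#define PAGE_SIZE    4096UL
#define PTRS_PER_PTE 512UL   /* entries per pte page; stand-in value */

/* Stand-in for one pte page. */
static unsigned long pte_page[PTRS_PER_PTE];

/* Replays the loop shape from try_to_unmap_cluster() over a whole pte
 * page and returns how many entries past the start of the page the
 * cursor ends up. */
static size_t final_pte_index(void)
{
    unsigned long address = 0;
    unsigned long end = PTRS_PER_PTE * PAGE_SIZE;
    unsigned long *pte;

    for (pte = pte_page; address < end; pte++, address += PAGE_SIZE)
        ; /* body elided; only the cursor arithmetic matters here */

    return (size_t)(pte - pte_page);
}
```

`final_pte_index()` returns `PTRS_PER_PTE`, i.e. one past the last valid entry: comparing such a pointer is legal C, but using it as if it still pointed into the page is not, which is the bug the patch fixes (by unmapping the last entry actually visited, `pte - 1`, rather than the one-past-the-end cursor).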
Files in mm/:

bootmem.c
fadvise.c
filemap.c
fremap.c
highmem.c
hugetlb.c
internal.h
madvise.c
Makefile
memory.c
mempolicy.c
mempool.c
mincore.c
mlock.c
mmap.c
mprotect.c
mremap.c
msync.c
nommu.c
oom_kill.c
page_alloc.c
page_io.c
page-writeback.c
pdflush.c
prio_tree.c
readahead.c
rmap.c
shmem.c
slab.c
swap_state.c
swap.c
swapfile.c
thrash.c
tiny-shmem.c
truncate.c
vmalloc.c
vmscan.c