commit b4955ce3dd
It's common practice to msync a large address range regularly, in which often only a few ptes have actually been dirtied since the previous pass.

sync_pte_range then goes much faster if it tests whether the pte is dirty before locating and accessing each struct page cacheline; and it is hardly slowed by ptep_clear_flush_dirty repeating that test in the opposite case, when every pte actually is dirty.

But beware: s390's pte_dirty always says false, since its dirty bit is kept in the storage key, located via the struct page address. So skip this optimization in its case: use a pte_maybe_dirty macro which just says true if page_test_and_clear_dirty is implemented.

Signed-off-by: Abhijit Karmarkar <abhijitk@veritas.com>
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
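To make the trade-off concrete, here is a minimal, self-contained sketch of the pattern the commit describes. The pte_t typedef, the PTE_DIRTY bit, sync_pte_range_sketch(), and the main() driver are illustrative stand-ins, not the kernel's actual definitions; pte_dirty, pte_maybe_dirty, ptep_clear_flush_dirty, and page_test_and_clear_dirty are the names the commit text itself uses.

```c
/* Sketch only: simplified stand-ins for the kernel's pte and page
 * machinery; the real change lives in include/asm-generic/pgtable.h. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

typedef unsigned long pte_t;        /* stand-in for the kernel's pte_t */
#define PTE_DIRTY 0x1UL             /* hypothetical dirty-bit position */

static bool pte_dirty(pte_t pte)    /* cheap: reads only the pte itself */
{
	return pte & PTE_DIRTY;
}

/*
 * On most architectures the dirty bit lives in the pte, so pte_dirty()
 * is authoritative and pte_maybe_dirty() can simply use it.  s390 keeps
 * the dirty bit in the storage key, reached via the struct page, so its
 * pte_dirty() always reports false; there pte_maybe_dirty() must answer
 * "maybe" (true) so the page is still examined.
 */
#ifdef __HAVE_ARCH_PAGE_TEST_AND_CLEAR_DIRTY	/* s390-style */
#define pte_maybe_dirty(pte)	(1)
#else
#define pte_maybe_dirty(pte)	pte_dirty(pte)
#endif

/* Shape of the fast path in a sync_pte_range-style loop: skip the
 * struct page lookup entirely for ptes that are provably clean. */
static void sync_pte_range_sketch(pte_t *ptes, size_t n)
{
	for (size_t i = 0; i < n; i++) {
		if (!pte_maybe_dirty(ptes[i]))
			continue;	/* clean pte: no struct page cacheline touched */
		/* here the real code locates the page, calls
		 * ptep_clear_flush_dirty() and marks the page dirty */
		printf("pte %zu: dirty, syncing\n", i);
	}
}

int main(void)
{
	/* typical msync pass: only one pte dirtied since the last sweep */
	pte_t ptes[] = { 0, PTE_DIRTY, 0, 0 };

	sync_pte_range_sketch(ptes, sizeof(ptes) / sizeof(ptes[0]));
	return 0;
}
```

The macro indirection is the whole point: on architectures where pte_dirty is reliable, skipping a clean pte costs one read of a pte the loop has already fetched, while on s390 the test compiles away to "always examine the page", preserving correctness.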