commit bf72aeba2f
Some POWER5+ machines can do 64k hardware pages for normal memory but not for cache-inhibited pages. This patch lets us use 64k hardware pages for most user processes on such machines (assuming the kernel has been configured with CONFIG_PPC_64K_PAGES=y). User processes start out using 64k pages and get switched to 4k pages if they use any non-cacheable mappings.

With this, we use 64k pages for the vmalloc region and 4k pages for the imalloc region. If anything creates a non-cacheable mapping in the vmalloc region, the vmalloc region will get switched to 4k pages. I don't know of any driver other than the DRM that would do this, though, and these machines don't have AGP.

When a region gets switched from 64k pages to 4k pages, we do not have to clear out all the 64k HPTEs from the hash table immediately. We use the _PAGE_COMBO bit in the Linux PTE to indicate whether the page was hashed in as a 64k page or a set of 4k pages. If hash_page is trying to insert a 4k page for a Linux PTE and it sees that it has already been inserted as a 64k page, it first invalidates the 64k HPTE before inserting the 4k HPTE. The hash invalidation routines also use the _PAGE_COMBO bit, to determine whether to look for a 64k HPTE or a set of 4k HPTEs to remove. With those two changes, we can tolerate a mix of 4k and 64k HPTEs in the hash table, and they will all get removed when the address space is torn down.

Signed-off-by: Paul Mackerras <paulus@samba.org>
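The _PAGE_COMBO handling described above can be sketched in plain C. This is an illustrative model only, not the kernel source: the bit value and every helper name below (invalidate_64k_hpte, invalidate_4k_hpte, insert_4k_hpte, hash_fault_4k, hash_invalidate) are hypothetical stand-ins for the real hash-table primitives. Only the decision logic follows the commit message: check the combo bit, evict the stale 64k HPTE before inserting a 4k HPTE, record the demotion in the PTE, and use the same bit at invalidation time to decide which kind of HPTE(s) to remove.

```c
/*
 * Illustrative model of the _PAGE_COMBO logic described in the commit
 * message.  Not kernel code: PTE_COMBO's value and all helpers here are
 * hypothetical stand-ins for the real hash-table primitives.
 */
typedef unsigned long pte_t;

#define PTE_COMBO  (1UL << 10)  /* hypothetical: set once the PTE is backed by 4k HPTEs */

/* Hypothetical hash-table primitives. */
extern void invalidate_64k_hpte(unsigned long ea, pte_t pte);
extern void invalidate_4k_hpte(unsigned long ea, pte_t pte);
extern int  insert_4k_hpte(unsigned long ea, pte_t *ptep);

/*
 * Fault-in path for a region that has been demoted to 4k pages.
 * If the PTE was previously hashed in as a single 64k HPTE (combo bit
 * clear), that 64k HPTE is invalidated first, so 4k and 64k
 * translations for the same address never coexist in the hash table.
 */
int hash_fault_4k(unsigned long ea, pte_t *ptep)
{
	if (!(*ptep & PTE_COMBO)) {
		invalidate_64k_hpte(ea, *ptep);  /* evict the stale 64k entry */
		*ptep |= PTE_COMBO;              /* now backed by a set of 4k HPTEs */
	}
	return insert_4k_hpte(ea, ptep);
}

/*
 * Teardown path: the same bit tells the flush code whether to look for
 * one 64k HPTE or a set of 4k HPTEs, so a mix of the two in the hash
 * table is removed correctly when the address space is torn down.
 */
void hash_invalidate(unsigned long ea, pte_t pte)
{
	if (pte & PTE_COMBO) {
		/* remove the 4k HPTEs covering this 64k Linux page */
		for (unsigned long off = 0; off < 0x10000; off += 0x1000)
			invalidate_4k_hpte(ea + off, pte);
	} else {
		invalidate_64k_hpte(ea, pte);
	}
}
```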
arch/powerpc/mm/
    4xx_mmu.c
    44x_mmu.c
    fault.c
    fsl_booke_mmu.c
    hash_low_32.S
    hash_low_64.S
    hash_native_64.c
    hash_utils_64.c
    hugetlbpage.c
    imalloc.c
    init_32.c
    init_64.c
    lmb.c
    Makefile
    mem.c
    mmap.c
    mmu_context_32.c
    mmu_context_64.c
    mmu_decl.h
    numa.c
    pgtable_32.c
    pgtable_64.c
    ppc_mmu_32.c
    slb_low.S
    slb.c
    stab.c
    tlb_32.c
    tlb_64.c