linux/arch/arm64/mm
Mark Rutland a3bba370c2 arm64: drop unnecessary cache+tlb maintenance
In paging_init() we call flush_cache_all(), but this is backed by
Set/Way operations, which may not achieve anything in the presence of
cache line migration and/or system caches. If the caches are already in
an inconsistent state at this point, there is nothing we can do (short
of flushing the entire physical address space by VA) to empty
architected and system caches. As such, flush_cache_all() only serves
to mask other potential bugs. Hence, this patch removes the boot-time
call to flush_cache_all().
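
For context, this is roughly what paging_init() in arch/arm64/mm/mmu.c
looks like once the two calls are gone. The body below is a sketch
paraphrased from memory of the v3.19-era code (map_mem(), early_alloc()
and bootmem_init() are that file's existing helpers), not a quote of
the patch:

  void __init paging_init(void)
  {
          void *zero_page;

          map_mem();
          fixup_executable();

          /*
           * flush_cache_all() and flush_tlb_all() used to be called
           * here; as described above, neither provides a useful
           * guarantee at this point, so both are dropped.
           */

          /* allocate the zero page. */
          zero_page = early_alloc(PAGE_SIZE);

          bootmem_init();

          empty_zero_page = virt_to_page(zero_page);

          /*
           * TTBR0 only covers the idmap at this stage; point it at the
           * reserved tables and flush the stale idmap TLB entries
           * (this is the maintenance that is retained, see below).
           */
          cpu_set_reserved_ttbr0();
          flush_tlb_all();
  }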

Immediately after the cache maintenance we flush the TLBs, but this is
also unnecessary. Before the MMU is enabled, the TLBs are invalidated,
so they start out clean. When changing the contents of active tables
(e.g. in fixup_executable() for DEBUG_RODATA) we perform the required
TLB maintenance following the update, so no additional maintenance is
needed for the new table entries to take effect. Since enabling the MMU
we have not modified any system register fields that are permitted to
be cached in a TLB, so no maintenance is needed for cached system
register fields either. Hence, the TLB flush is unnecessary.
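
To make the fixup_executable()/DEBUG_RODATA point concrete, the pattern
in question is "update the live entry, then TLBI the affected VAs". The
sketch below is illustrative only (it is neither the patch's code nor
fixup_executable() itself, and make_kernel_page_ro() is an invented
name); it assumes a kernel VA mapped at PTE granularity and uses the
era's 4-level table walker helpers:

  #include <linux/mm.h>
  #include <asm/tlbflush.h>

  static void __init make_kernel_page_ro(unsigned long addr)
  {
          pgd_t *pgd = pgd_offset_k(addr);
          pud_t *pud = pud_offset(pgd, addr);
          pmd_t *pmd = pmd_offset(pud, addr);
          pte_t *ptep = pte_offset_kernel(pmd, addr);

          /* 1. update the live table entry in place ... */
          set_pte(ptep, pte_wrprotect(*ptep));

          /* 2. ... then invalidate any stale TLB entry for those VAs */
          flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
  }

Because the maintenance is performed at each update site like this, a
blanket flush_tlb_all() at the end of paging_init() adds nothing.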

Shortly after the unnecessary TLB flush, we update TTBR0 to point to an
empty zero page rather than the idmap, and flush the TLBs. This
maintenance is necessary to remove the global idmap entries from the
TLBs (as they would conflict with userspace mappings), and is retained.
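
For reference, the TTBR0 switch is done by cpu_set_reserved_ttbr0()
(asm/mmu_context.h), which installs the empty zero page as the TTBR0
translation table base; the sketch below paraphrases that helper from
memory of the same era rather than quoting it:

  static inline void cpu_set_reserved_ttbr0(void)
  {
          unsigned long ttbr = page_to_phys(empty_zero_page);

          asm(
          "       msr     ttbr0_el1, %0           // set TTBR0\n"
          "       isb"
          :
          : "r" (ttbr));
  }

The flush_tlb_all() that follows it in paging_init() is what actually
evicts the stale global idmap entries, which is why that pair of
operations is kept.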

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Steve Capper <steve.capper@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-01-28 14:26:48 +00:00
cache.S         arm64: compat: align cacheflush syscall with arch/arm  2014-12-01 13:31:12 +00:00
context.c       arm64: Process management  2012-09-17 13:41:58 +01:00
copypage.c      arm64: export __cpu_{clear,copy}_user_page functions  2014-07-08 17:30:51 +01:00
dma-mapping.c   arm64: Combine coherent and non-coherent swiotlb dma_ops  2015-01-23 16:43:55 +00:00
dump.c          arm64: mm: dump: add missing includes  2015-01-23 14:14:02 +00:00
extable.c       arm64: MMU fault handling and page table management  2012-09-17 13:41:57 +01:00
fault.c         arm64: move to ESR_ELx macros  2015-01-15 12:24:15 +00:00
flush.c         arm64: mm: enable RCU fast_gup  2014-10-09 22:26:01 -04:00
hugetlbpage.c   hugetlb: restrict hugepage_migration_support() to x86_64  2014-06-04 16:53:51 -07:00
init.c          arm64: Fix overlapping VA allocations  2015-01-23 14:13:14 +00:00
ioremap.c       arm64: add ioremap physical address information  2015-01-23 15:29:06 +00:00
Makefile        arm64: add support to dump the kernel page tables  2014-11-26 17:19:18 +00:00
mm.h            arm64: add better page protections to arm64  2015-01-22 14:54:29 +00:00
mmap.c          arm64/mm: Remove hack in mmap randomize layout  2014-11-18 16:58:15 +00:00
mmu.c           arm64: drop unnecessary cache+tlb maintenance  2015-01-28 14:26:48 +00:00
pageattr.c      arm64: pageattr: Correctly adjust unaligned start addresses  2014-09-12 16:34:50 +01:00
pgd.c           arm64: pgalloc: consistently use PGALLOC_GFP  2014-11-20 12:05:18 +00:00
proc-macros.S   arm64: mm: use ubfm for dcache_line_size  2014-01-22 16:23:58 +00:00
proc.S          arm64: kernel: remove ARM64_CPU_SUSPEND config option  2015-01-27 11:35:33 +00:00