forked from Minki/linux (1037 commits, HEAD at e2ae634014)
Author | SHA1 | Message | Date
---|---|---|---
Linus Torvalds
|
e2ae634014 |
RISC-V Patches for the 5.11 Merge Window, Part 1
We have a handful of new kernel features for 5.11: * Support for the contiguous memory allocator. * Support for IRQ Time Accounting * Support for stack tracing * Support for strict /dev/mem * Support for kernel section protection I'm being a bit conservative on the cutoff for this round due to the timing, so this is all the new development I'm going to take for this cycle (even if some of it probably normally would have been OK). There are, however, some fixes on the list that I will likely be sending along either later this week or early next week. There is one issue in here: one of my test configurations (PREEMPT{,_DEBUG}=y) fails to boot on QEMU 5.0.0 (from April) as of the .text.init alignment patch. With any luck we'll sort out the issue, but given how many bugs get fixed all over the place and how unrelated those features seem my guess is that we're just running into something that's been lurking for a while and has already been fixed in the newer QEMU (though I wouldn't be surprised if it's one of these implicit assumptions we have in the boot flow). If it was hardware I'd be strongly inclined to look more closely, but given that users can upgrade their simulators I'm less worried about it. There are two merge conflicts, both in build files. They're both a bit clunky: arch/riscv/Kconfig is out of order (I have a script that's supposed to keep them in order, I'll fix it) and lib/Makefile is out of order (though GENERIC_LIB here doesn't mean quite what it does above). -----BEGIN PGP SIGNATURE----- iQJHBAABCgAxFiEEKzw3R0RoQ7JKlDp6LhMZ81+7GIkFAl/cHO4THHBhbG1lckBk YWJiZWx0LmNvbQAKCRAuExnzX7sYiTlmD/4uDyNHBM1XH/XD4fSEwTYJvGLqt/Jo vtrGR/fm0SlQFUKCcywSzxcVAeGn56CACbEIDYLuL4xXRJmbwEuaRrHVx2sEhS9p pNhy+wus/SgDz5EUAawMyR2AEWgzl77hY5T/+AAo4yv65SGGBfsIdz5noIVwGNqW r0g5cw2O99z0vwu1aSrK4isWHconG9MfQnnVyepPSh67pyWS4aUCr1K3vLiqD2dE XcgtwdcgzUIY5aEoJNrWo5qTrcaG8m6MRNCDAKJ6MKdDA2wdGIN868G0wQnoURRm Y+yW7w3P20kM0b87zH50jujTWg38NBKOfaXb0mAfawZMapL60veTVmvs2kNtFXCy F6JWRkgTiRnGY72FtRR0igWXT5M7fz0EiLFXLMItGcgj79TUget4l/3sRMN47S/O cA/WiwptJH3mh8IkL6z5ZxWEThdOrbFt8F1T+Gyq/ayblcPnJaLn/wrWoeOwviWR fvEC7smuF5SBTbWZK5tBOP21Nvhb7bfr49Sgr8Tvdjl15tz97qK+2tsLXwkBoQnJ wU45jcXfzr5wgiGBOQANRite5bLsJ0TuOrTgA5gsGpv+JSDGbpcJbm0833x00nX/ 3GsW5xr+vsLCvljgPAtKsyDNRlGQu908Gxrat2+s8u92bLr1bwn30uKL5h6i/n1w QgWATuPPGXZZdw== =GWIH -----END PGP SIGNATURE----- Merge tag 'riscv-for-linus-5.11-mw0' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux Pull RISC-V updates from Palmer Dabbelt: "We have a handful of new kernel features for 5.11: - Support for the contiguous memory allocator. - Support for IRQ Time Accounting - Support for stack tracing - Support for strict /dev/mem - Support for kernel section protection I'm being a bit conservative on the cutoff for this round due to the timing, so this is all the new development I'm going to take for this cycle (even if some of it probably normally would have been OK). There are, however, some fixes on the list that I will likely be sending along either later this week or early next week. There is one issue in here: one of my test configurations (PREEMPT{,_DEBUG}=y) fails to boot on QEMU 5.0.0 (from April) as of the .text.init alignment patch. 
With any luck we'll sort out the issue, but given how many bugs get fixed all over the place and how unrelated those features seem my guess is that we're just running into something that's been lurking for a while and has already been fixed in the newer QEMU (though I wouldn't be surprised if it's one of these implicit assumptions we have in the boot flow). If it was hardware I'd be strongly inclined to look more closely, but given that users can upgrade their simulators I'm less worried about it" * tag 'riscv-for-linus-5.11-mw0' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux: arm64: Use the generic devmem_is_allowed() arm: Use the generic devmem_is_allowed() RISC-V: Use the new generic devmem_is_allowed() lib: Add a generic version of devmem_is_allowed() riscv: Fixed kernel test robot warning riscv: kernel: Drop unused clean rule riscv: provide memmove implementation RISC-V: Move dynamic relocation section under __init RISC-V: Protect all kernel sections including init early RISC-V: Align the .init.text section RISC-V: Initialize SBI early riscv: Enable ARCH_STACKWALK riscv: Make stack walk callback consistent with generic code riscv: Cleanup stacktrace riscv: Add HAVE_IRQ_TIME_ACCOUNTING riscv: Enable CMA support riscv: Ignore Image.* and loader.bin riscv: Clean up boot dir riscv: Fix compressed Image formats build RISC-V: Add kernel image sections to the resource tree |
||
Linus Torvalds
|
ac73e3dc8a |
Merge branch 'akpm' (patches from Andrew)
Merge misc updates from Andrew Morton: - a few random little subsystems - almost all of the MM patches which are staged ahead of linux-next material. I'll trickle to post-linux-next work in as the dependents get merged up. Subsystems affected by this patch series: kthread, kbuild, ide, ntfs, ocfs2, arch, and mm (slab-generic, slab, slub, dax, debug, pagecache, gup, swap, shmem, memcg, pagemap, mremap, hmm, vmalloc, documentation, kasan, pagealloc, memory-failure, hugetlb, vmscan, z3fold, compaction, oom-kill, migration, cma, page-poison, userfaultfd, zswap, zsmalloc, uaccess, zram, and cleanups). * emailed patches from Andrew Morton <akpm@linux-foundation.org>: (200 commits) mm: cleanup kstrto*() usage mm: fix fall-through warnings for Clang mm: slub: convert sysfs sprintf family to sysfs_emit/sysfs_emit_at mm: shmem: convert shmem_enabled_show to use sysfs_emit_at mm:backing-dev: use sysfs_emit in macro defining functions mm: huge_memory: convert remaining use of sprintf to sysfs_emit and neatening mm: use sysfs_emit for struct kobject * uses mm: fix kernel-doc markups zram: break the strict dependency from lzo zram: add stat to gather incompressible pages since zram set up zram: support page writeback mm/process_vm_access: remove redundant initialization of iov_r mm/zsmalloc.c: rework the list_add code in insert_zspage() mm/zswap: move to use crypto_acomp API for hardware acceleration mm/zswap: fix passing zero to 'PTR_ERR' warning mm/zswap: make struct kernel_param_ops definitions const userfaultfd/selftests: hint the test runner on required privilege userfaultfd/selftests: fix retval check for userfaultfd_open() userfaultfd/selftests: always dump something in modes userfaultfd: selftests: make __{s,u}64 format specifiers portable ... |
||
Mike Rapoport
|
32a0de886e |
arch, mm: make kernel_page_present() always available
For architectures that enable ARCH_HAS_SET_MEMORY having the ability to verify that a page is mapped in the kernel direct map can be useful regardless of hibernation. Add RISC-V implementation of kernel_page_present(), update its forward declarations and stubs to be a part of set_memory API and remove ugly ifdefery in inlcude/linux/mm.h around current declarations of kernel_page_present(). Link: https://lkml.kernel.org/r/20201109192128.960-5-rppt@kernel.org Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Albert Ou <aou@eecs.berkeley.edu> Cc: Andy Lutomirski <luto@kernel.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Christoph Lameter <cl@linux.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: David Hildenbrand <david@redhat.com> Cc: David Rientjes <rientjes@google.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: "Edgecombe, Rick P" <rick.p.edgecombe@intel.com> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Len Brown <len.brown@intel.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Pavel Machek <pavel@ucw.cz> Cc: Pekka Enberg <penberg@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Will Deacon <will@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> |
||
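For context on the entry above: folding kernel_page_present() into the set_memory API amounts to a declaration plus a stub for architectures without the capability. The sketch below is an illustration, not the verbatim patch; the exact Kconfig guard is an assumption based on the commit message.

```c
/* include/linux/set_memory.h (sketch) */
struct page;

#ifdef CONFIG_ARCH_HAS_SET_MEMORY
bool kernel_page_present(struct page *page);
#else
static inline bool kernel_page_present(struct page *page)
{
	/* without a set_memory/direct-map API, pages are assumed mapped */
	return true;
}
#endif
```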
Mike Rapoport
|
5d6ad668f3 |
arch, mm: restore dependency of __kernel_map_pages() on DEBUG_PAGEALLOC
The design of DEBUG_PAGEALLOC presumes that __kernel_map_pages() must never fail. With this assumption it wouldn't be safe to allow general usage of this function. Moreover, some architectures that implement __kernel_map_pages() have this function guarded by #ifdef DEBUG_PAGEALLOC and some refuse to map/unmap pages when page allocation debugging is disabled at runtime. As all the users of __kernel_map_pages() were converted to use debug_pagealloc_map_pages() it is safe to make it available only when DEBUG_PAGEALLOC is set. Link: https://lkml.kernel.org/r/20201109192128.960-4-rppt@kernel.org Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Acked-by: David Hildenbrand <david@redhat.com> Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Albert Ou <aou@eecs.berkeley.edu> Cc: Andy Lutomirski <luto@kernel.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Christoph Lameter <cl@linux.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: David Rientjes <rientjes@google.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: "Edgecombe, Rick P" <rick.p.edgecombe@intel.com> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Len Brown <len.brown@intel.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Pavel Machek <pavel@ucw.cz> Cc: Pekka Enberg <penberg@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Will Deacon <will@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> |
||
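The commit above relies on callers having been converted to debug_pagealloc_map_pages(); a rough sketch of such a wrapper (assumed here for illustration, not quoted from the patch series) is:

```c
#include <linux/mm.h>

/*
 * Map @numpages pages back into the direct map, but only when page
 * allocation debugging is actually enabled at runtime.
 */
static inline void debug_pagealloc_map_pages(struct page *page, int numpages)
{
#ifdef CONFIG_DEBUG_PAGEALLOC
	if (debug_pagealloc_enabled_static())
		__kernel_map_pages(page, numpages, 1);
#endif
}
```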
Mike Rapoport
|
4f5b0c1789 |
arm, arm64: move free_unused_memmap() to generic mm
ARM and ARM64 free unused parts of the memory map just before the initialization of the page allocator. To allow holes in the memory map both architectures overload pfn_valid() and define HAVE_ARCH_PFN_VALID. Allowing holes in the memory map for FLATMEM may be useful for small machines, such as ARC and m68k and will enable those architectures to cease using DISCONTIGMEM and still support more than one memory bank. Move the functions that free unused memory map to generic mm and enable them in case HAVE_ARCH_PFN_VALID=y. Link: https://lkml.kernel.org/r/20201101170454.9567-10-rppt@kernel.org Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> [arm64] Cc: Alexey Dobriyan <adobriyan@gmail.com> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Matt Turner <mattst88@gmail.com> Cc: Meelis Roos <mroos@linux.ee> Cc: Michael Schmitz <schmitzmic@gmail.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Will Deacon <will@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> |
||
Palmer Dabbelt
|
7d95a88f92 |
Add and use a generic version of devmem_is_allowed()
As part of adding STRICT_DEVMEM support to the RISC-V port, Zong provided an implementation of devmem_is_allowed() that's exactly the same as the version in a handful of other ports. Rather than duplicate code, I've put a generic version of this in lib/ and used it for the RISC-V port. * palmer/generic-devmem: arm64: Use the generic devmem_is_allowed() arm: Use the generic devmem_is_allowed() RISC-V: Use the new generic devmem_is_allowed() lib: Add a generic version of devmem_is_allowed() |
||
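The generic helper added by this series is small; the sketch below shows the shape of a lib/ devmem_is_allowed() mirroring the arm64 logic referenced above. Treat it as an illustration rather than the exact upstream file.

```c
#include <linux/ioport.h>
#include <linux/mm.h>
#include <linux/pfn.h>

/*
 * STRICT_DEVMEM policy: refuse exclusive I/O regions, allow access to
 * anything that is not system RAM, and refuse system RAM itself.
 */
int devmem_is_allowed(unsigned long pfn)
{
	if (iomem_is_exclusive(PFN_PHYS(pfn)))
		return 0;
	if (!page_is_ram(pfn))
		return 1;
	return 0;
}
```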
Palmer Dabbelt
|
6585bd8274 |
arm64: Use the generic devmem_is_allowed()
I recently copied this into lib/ for use by the RISC-V port. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Luis Chamberlain <mcgrof@kernel.org> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com> |
||
Catalin Marinas
|
d889797530 |
Merge remote-tracking branch 'arm64/for-next/fixes' into for-next/core
* arm64/for-next/fixes: (26 commits) arm64: mte: fix prctl(PR_GET_TAGGED_ADDR_CTRL) if TCF0=NONE arm64: mte: Fix typo in macro definition arm64: entry: fix EL1 debug transitions arm64: entry: fix NMI {user, kernel}->kernel transitions arm64: entry: fix non-NMI kernel<->kernel transitions arm64: ptrace: prepare for EL1 irq/rcu tracking arm64: entry: fix non-NMI user<->kernel transitions arm64: entry: move el1 irq/nmi logic to C arm64: entry: prepare ret_to_user for function call arm64: entry: move enter_from_user_mode to entry-common.c arm64: entry: mark entry code as noinstr arm64: mark idle code as noinstr arm64: syscall: exit userspace before unmasking exceptions arm64: pgtable: Ensure dirty bit is preserved across pte_wrprotect() arm64: pgtable: Fix pte_accessible() ACPI/IORT: Fix doc warnings in iort.c arm64/fpsimd: add <asm/insn.h> to <asm/kprobes.h> to fix fpsimd build arm64: cpu_errata: Apply Erratum 845719 to KRYO2XX Silver arm64: proton-pack: Add KRYO2XX silver CPUs to spectre-v2 safe-list arm64: kpti: Add KRYO2XX gold/silver CPU cores to kpti safelist ... # Conflicts: # arch/arm64/include/asm/exception.h # arch/arm64/kernel/sdei.c |
||
Catalin Marinas
|
ba4259a6f8 |
Merge branch 'for-next/misc' into for-next/core
* for-next/misc: : Miscellaneous patches arm64: vmlinux.lds.S: Drop redundant *.init.rodata.* kasan: arm64: set TCR_EL1.TBID1 when enabled arm64: mte: optimize asynchronous tag check fault flag check arm64/mm: add fallback option to allocate virtually contiguous memory arm64/smp: Drop the macro S(x,s) arm64: consistently use reserved_pg_dir arm64: kprobes: Remove redundant kprobe_step_ctx # Conflicts: # arch/arm64/kernel/vmlinux.lds.S |
||
Catalin Marinas
|
e0f7a8d5e8 |
Merge branch 'for-next/uaccess' into for-next/core
* for-next/uaccess: : uaccess routines clean-up and set_fs() removal arm64: mark __system_matches_cap as __maybe_unused arm64: uaccess: remove vestigal UAO support arm64: uaccess: remove redundant PAN toggling arm64: uaccess: remove addr_limit_user_check() arm64: uaccess: remove set_fs() arm64: uaccess cleanup macro naming arm64: uaccess: split user/kernel routines arm64: uaccess: refactor __{get,put}_user arm64: uaccess: simplify __copy_user_flushcache() arm64: uaccess: rename privileged uaccess routines arm64: sdei: explicitly simulate PAN/UAO entry arm64: sdei: move uaccess logic to arch/arm64/ arm64: head.S: always initialize PSTATE arm64: head.S: cleanup SCTLR_ELx initialization arm64: head.S: rename el2_setup -> init_kernel_el arm64: add C wrappers for SET_PSTATE_*() arm64: ensure ERET from kthread is illegal |
||
Catalin Marinas
|
3c09ec59cd |
Merge branches 'for-next/kvm-build-fix', 'for-next/va-refactor', 'for-next/lto', 'for-next/mem-hotplug', 'for-next/cppc-ffh', 'for-next/pad-image-header', 'for-next/zone-dma-default-32-bit', 'for-next/signal-tag-bits' and 'for-next/cmdline-extended' into for-next/core
* for-next/kvm-build-fix: : Fix KVM build issues with 64K pages KVM: arm64: Fix build error in user_mem_abort() * for-next/va-refactor: : VA layout changes arm64: mm: don't assume struct page is always 64 bytes Documentation/arm64: fix RST layout of memory.rst arm64: mm: tidy up top of kernel VA space arm64: mm: make vmemmap region a projection of the linear region arm64: mm: extend linear region for 52-bit VA configurations * for-next/lto: : Upgrade READ_ONCE() to RCpc acquire on arm64 with LTO arm64: lto: Strengthen READ_ONCE() to acquire when CONFIG_LTO=y arm64: alternatives: Remove READ_ONCE() usage during patch operation arm64: cpufeatures: Add capability for LDAPR instruction arm64: alternatives: Split up alternative.h arm64: uaccess: move uao_* alternatives to asm-uaccess.h * for-next/mem-hotplug: : Memory hotplug improvements arm64/mm/hotplug: Ensure early memory sections are all online arm64/mm/hotplug: Enable MEM_OFFLINE event handling arm64/mm/hotplug: Register boot memory hot remove notifier earlier arm64: mm: account for hotplug memory when randomizing the linear region * for-next/cppc-ffh: : Add CPPC FFH support using arm64 AMU counters arm64: abort counter_read_on_cpu() when irqs_disabled() arm64: implement CPPC FFH support using AMUs arm64: split counter validation function arm64: wrap and generalise counter read functions * for-next/pad-image-header: : Pad Image header to 64KB and unmap it arm64: head: tidy up the Image header definition arm64/head: avoid symbol names pointing into first 64 KB of kernel image arm64: omit [_text, _stext) from permanent kernel mapping * for-next/zone-dma-default-32-bit: : Default to 32-bit wide ZONE_DMA (previously reduced to 1GB for RPi4) of: unittest: Fix build on architectures without CONFIG_OF_ADDRESS mm: Remove examples from enum zone_type comment arm64: mm: Set ZONE_DMA size based on early IORT scan arm64: mm: Set ZONE_DMA size based on devicetree's dma-ranges of: unittest: Add test for of_dma_get_max_cpu_address() of/address: Introduce of_dma_get_max_cpu_address() arm64: mm: Move zone_dma_bits initialization into zone_sizes_init() arm64: mm: Move reserve_crashkernel() into mem_init() arm64: Force NO_BLOCK_MAPPINGS if crashkernel reservation is required arm64: Ignore any DMA offsets in the max_zone_phys() calculation * for-next/signal-tag-bits: : Expose the FAR_EL1 tag bits in siginfo arm64: expose FAR_EL1 tag bits in siginfo signal: define the SA_EXPOSE_TAGBITS bit in sa_flags signal: define the SA_UNSUPPORTED bit in sa_flags arch: provide better documentation for the arch-specific SA_* flags signal: clear non-uapi flag bits when passing/returning sa_flags arch: move SA_* definitions to generic headers parisc: start using signal-defs.h parisc: Drop parisc special case for __sighandler_t * for-next/cmdline-extended: : Add support for CONFIG_CMDLINE_EXTENDED arm64: Extend the kernel command line from the bootloader arm64: kaslr: Refactor early init command line parsing |
||
Mark Rutland
|
3d2403fd10 |
arm64: uaccess: remove set_fs()
Now that the uaccess primitives dont take addr_limit into account, we have no need to manipulate this via set_fs() and get_fs(). Remove support for these, along with some infrastructure this renders redundant. We no longer need to flip UAO to access kernel memory under KERNEL_DS, and head.S unconditionally clears UAO for all kernel configurations via an ERET in init_kernel_el. Thus, we don't need to dynamically flip UAO, nor do we need to context-switch it. However, we still need to adjust PAN during SDEI entry. Masking of __user pointers no longer needs to use the dynamic value of addr_limit, and can use a constant derived from the maximum possible userspace task size. A new TASK_SIZE_MAX constant is introduced for this, which is also used by core code. In configurations supporting 52-bit VAs, this may include a region of unusable VA space above a 48-bit TTBR0 limit, but never includes any portion of TTBR1. Note that TASK_SIZE_MAX is an exclusive limit, while USER_DS and KERNEL_DS were inclusive limits, and is converted to a mask by subtracting one. As the SDEI entry code repurposes the otherwise unnecessary pt_regs::orig_addr_limit field to store the TTBR1 of the interrupted context, for now we rename that to pt_regs::sdei_ttbr1. In future we can consider factoring that out. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: James Morse <james.morse@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201202131558.39270-10-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> |
||
Mark Rutland
|
2ffac9e3fd |
arm64: head.S: cleanup SCTLR_ELx initialization
Let's make SCTLR_ELx initialization a bit clearer by using meaningful names for the initialization values, following the same scheme for SCTLR_EL1 and SCTLR_EL2. These definitions will be used more widely in subsequent patches. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201113124937.20574-5-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> |
||
Mark Rutland
|
2a9b3e6ac6 |
arm64: entry: fix EL1 debug transitions
In debug_exception_enter() and debug_exception_exit() we trace hardirqs on/off while RCU isn't guaranteed to be watching, and we don't save and restore the hardirq state, and so may return with this having changed. Handle this appropriately with new entry/exit helpers which do the bare minimum to ensure this is appropriately maintained, without marking debug exceptions as NMIs. These are placed in entry-common.c with the other entry/exit helpers. In future we'll want to reconsider whether some debug exceptions should be NMIs, but this will require a significant refactoring, and for now this should prevent issues with lockdep and RCU. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marins <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201130115950.22492-12-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org> |
||
Mark Rutland
|
23529049c6 |
arm64: entry: fix non-NMI user<->kernel transitions
When built with PROVE_LOCKING, NO_HZ_FULL, and CONTEXT_TRACKING_FORCE will WARN() at boot time that interrupts are enabled when we call context_tracking_user_enter(), despite the DAIF flags indicating that IRQs are masked. The problem is that we're not tracking IRQ flag changes accurately, and so lockdep believes interrupts are enabled when they are not (and vice-versa). We can shuffle things so to make this more accurate. For kernel->user transitions there are a number of constraints we need to consider: 1) When we call __context_tracking_user_enter() HW IRQs must be disabled and lockdep must be up-to-date with this. 2) Userspace should be treated as having IRQs enabled from the PoV of both lockdep and tracing. 3) As context_tracking_user_enter() stops RCU from watching, we cannot use RCU after calling it. 4) IRQ flag tracing and lockdep have state that must be manipulated before RCU is disabled. ... with similar constraints applying for user->kernel transitions, with the ordering reversed. The generic entry code has enter_from_user_mode() and exit_to_user_mode() helpers to handle this. We can't use those directly, so we add arm64 copies for now (without the instrumentation markers which aren't used on arm64). These replace the existing user_exit() and user_exit_irqoff() calls spread throughout handlers, and the exception unmasking is left as-is. Note that: * The accounting for debug exceptions from userspace now happens in el0_dbg() and ret_to_user(), so this is removed from debug_exception_enter() and debug_exception_exit(). As user_exit_irqoff() wakes RCU, the userspace-specific check is removed. * The accounting for syscalls now happens in el0_svc(), el0_svc_compat(), and ret_to_user(), so this is removed from el0_svc_common(). This does not adversely affect the workaround for erratum 1463225, as this does not depend on any of the state tracking. * In ret_to_user() we mask interrupts with local_daif_mask(), and so we need to inform lockdep and tracing. Here a trace_hardirqs_off() is sufficient and safe as we have not yet exited kernel context and RCU is usable. * As PROVE_LOCKING selects TRACE_IRQFLAGS, the ifdeferry in entry.S only needs to check for the latter. * EL0 SError handling will be dealt with in a subsequent patch, as this needs to be treated as an NMI. 
Prior to this patch, booting an appropriately-configured kernel would result in spats as below: | DEBUG_LOCKS_WARN_ON(lockdep_hardirqs_enabled()) | WARNING: CPU: 2 PID: 1 at kernel/locking/lockdep.c:5280 check_flags.part.54+0x1dc/0x1f0 | Modules linked in: | CPU: 2 PID: 1 Comm: init Not tainted 5.10.0-rc3 #3 | Hardware name: linux,dummy-virt (DT) | pstate: 804003c5 (Nzcv DAIF +PAN -UAO -TCO BTYPE=--) | pc : check_flags.part.54+0x1dc/0x1f0 | lr : check_flags.part.54+0x1dc/0x1f0 | sp : ffff80001003bd80 | x29: ffff80001003bd80 x28: ffff66ce801e0000 | x27: 00000000ffffffff x26: 00000000000003c0 | x25: 0000000000000000 x24: ffffc31842527258 | x23: ffffc31842491368 x22: ffffc3184282d000 | x21: 0000000000000000 x20: 0000000000000001 | x19: ffffc318432ce000 x18: 0080000000000000 | x17: 0000000000000000 x16: ffffc31840f18a78 | x15: 0000000000000001 x14: ffffc3184285c810 | x13: 0000000000000001 x12: 0000000000000000 | x11: ffffc318415857a0 x10: ffffc318406614c0 | x9 : ffffc318415857a0 x8 : ffffc31841f1d000 | x7 : 647261685f706564 x6 : ffffc3183ff7c66c | x5 : ffff66ce801e0000 x4 : 0000000000000000 | x3 : ffffc3183fe00000 x2 : ffffc31841500000 | x1 : e956dc24146b3500 x0 : 0000000000000000 | Call trace: | check_flags.part.54+0x1dc/0x1f0 | lock_is_held_type+0x10c/0x188 | rcu_read_lock_sched_held+0x70/0x98 | __context_tracking_enter+0x310/0x350 | context_tracking_enter.part.3+0x5c/0xc8 | context_tracking_user_enter+0x6c/0x80 | finish_ret_to_user+0x2c/0x13cr Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201130115950.22492-8-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org> |
||
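A sketch of the arm64 copies of the generic entry helpers described in the commit above; the names follow the generic entry code, and the exact arm64 bodies may differ slightly from this illustration.

```c
/* arch/arm64/kernel/entry-common.c (sketch) */
static __always_inline void enter_from_user_mode(void)
{
	/* constraint 4: lockdep/IRQ-flag state is updated before RCU stops watching */
	lockdep_hardirqs_off(CALLER_ADDR0);
	CT_WARN_ON(ct_state() != CONTEXT_USER);
	/* constraint 3: this wakes RCU, so it must happen before any RCU use */
	user_exit_irqoff();
	trace_hardirqs_off_finish();
}

static __always_inline void exit_to_user_mode(void)
{
	/* constraint 2: userspace is treated as IRQs-on by lockdep and tracing */
	trace_hardirqs_on_prepare();
	lockdep_hardirqs_on_prepare(CALLER_ADDR0);
	user_enter_irqoff();
	lockdep_hardirqs_on(CALLER_ADDR0);
}
```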
Peter Collingbourne
|
49b3cf035e |
kasan: arm64: set TCR_EL1.TBID1 when enabled
On hardware supporting pointer authentication, we previously ended up enabling TBI on instruction accesses when tag-based ASAN was enabled, but this was costing us 8 bits of PAC entropy, which was unnecessary since tag-based ASAN does not require TBI on instruction accesses. Get them back by setting TCR_EL1.TBID1. Signed-off-by: Peter Collingbourne <pcc@google.com> Reviewed-by: Andrey Konovalov <andreyknvl@google.com> Link: https://linux-review.googlesource.com/id/I3dded7824be2e70ea64df0aabab9598d5aebfcc4 Link: https://lore.kernel.org/r/20f64e26fc8a1309caa446fffcb1b4e2fe9e229f.1605952129.git.pcc@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> |
||
Peter Collingbourne
|
dceec3ff78 |
arm64: expose FAR_EL1 tag bits in siginfo
The kernel currently clears the tag bits (i.e. bits 56-63) in the fault address exposed via siginfo.si_addr and sigcontext.fault_address. However, the tag bits may be needed by tools in order to accurately diagnose memory errors, such as HWASan [1] or future tools based on the Memory Tagging Extension (MTE). Expose these bits via the arch_untagged_si_addr mechanism, so that they are only exposed to signal handlers with the SA_EXPOSE_TAGBITS flag set. [1] http://clang.llvm.org/docs/HardwareAssistedAddressSanitizerDesign.html Signed-off-by: Peter Collingbourne <pcc@google.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://linux-review.googlesource.com/id/Ia8876bad8c798e0a32df7c2ce1256c4771c81446 Link: https://lore.kernel.org/r/0010296597784267472fa13b39f8238d87a72cf8.1605904350.git.pcc@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> |
||
Ard Biesheuvel
|
2b8652936f |
arm64: mm: Set ZONE_DMA size based on early IORT scan
We recently introduced a 1 GB sized ZONE_DMA to cater for platforms incorporating masters that can address less than 32 bits of DMA, in particular the Raspberry Pi 4, which has 4 or 8 GB of DRAM, but has peripherals that can only address up to 1 GB (and its PCIe host bridge can only access the bottom 3 GB) Instructing the DMA layer about these limitations is straight-forward, even though we had to fix some issues regarding memory limits set in the IORT for named components, and regarding the handling of ACPI _DMA methods. However, the DMA layer also needs to be able to allocate memory that is guaranteed to meet those DMA constraints, for bounce buffering as well as allocating the backing for consistent mappings. This is why the 1 GB ZONE_DMA was introduced recently. Unfortunately, it turns out the having a 1 GB ZONE_DMA as well as a ZONE_DMA32 causes problems with kdump, and potentially in other places where allocations cannot cross zone boundaries. Therefore, we should avoid having two separate DMA zones when possible. So let's do an early scan of the IORT, and only create the ZONE_DMA if we encounter any devices that need it. This puts the burden on the firmware to describe such limitations in the IORT, which may be redundant (and less precise) if _DMA methods are also being provided. However, it should be noted that this situation is highly unusual for arm64 ACPI machines. Also, the DMA subsystem still gives precedence to the _DMA method if implemented, and so we will not lose the ability to perform streaming DMA outside the ZONE_DMA if the _DMA method permits it. [nsaenz: unified implementation with DT's counterpart] Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> Tested-by: Jeremy Linton <jeremy.linton@arm.com> Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Acked-by: Hanjun Guo <guohanjun@huawei.com> Cc: Jeremy Linton <jeremy.linton@arm.com> Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Cc: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> Cc: Rob Herring <robh+dt@kernel.org> Cc: Christoph Hellwig <hch@lst.de> Cc: Robin Murphy <robin.murphy@arm.com> Cc: Hanjun Guo <guohanjun@huawei.com> Cc: Sudeep Holla <sudeep.holla@arm.com> Cc: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/20201119175400.9995-7-nsaenzjulienne@suse.de Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> |
||
Nicolas Saenz Julienne
|
8424ecdde7 |
arm64: mm: Set ZONE_DMA size based on devicetree's dma-ranges
We recently introduced a 1 GB sized ZONE_DMA to cater for platforms incorporating masters that can address less than 32 bits of DMA, in particular the Raspberry Pi 4, which has 4 or 8 GB of DRAM, but has peripherals that can only address up to 1 GB (and its PCIe host bridge can only access the bottom 3 GB) The DMA layer also needs to be able to allocate memory that is guaranteed to meet those DMA constraints, for bounce buffering as well as allocating the backing for consistent mappings. This is why the 1 GB ZONE_DMA was introduced recently. Unfortunately, it turns out the having a 1 GB ZONE_DMA as well as a ZONE_DMA32 causes problems with kdump, and potentially in other places where allocations cannot cross zone boundaries. Therefore, we should avoid having two separate DMA zones when possible. So, with the help of of_dma_get_max_cpu_address() get the topmost physical address accessible to all DMA masters in system and use that information to fine-tune ZONE_DMA's size. In the absence of addressing limited masters ZONE_DMA will span the whole 32-bit address space, otherwise, in the case of the Raspberry Pi 4 it'll only span the 30-bit address space, and have ZONE_DMA32 cover the rest of the 32-bit address space. Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> Link: https://lore.kernel.org/r/20201119175400.9995-6-nsaenzjulienne@suse.de Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> |
||
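A rough sketch of how the devicetree-derived limit feeds zone_dma_bits in zone_sizes_init(), per the commit above; details such as the exact min()/fls64() expression and helper names are assumptions for illustration.

```c
static void __init zone_sizes_init(unsigned long min, unsigned long max)
{
	unsigned long max_zone_pfns[MAX_NR_ZONES] = { 0 };

#ifdef CONFIG_ZONE_DMA
	/* topmost address all DMA masters can reach, derived from dma-ranges */
	unsigned int dt_zone_dma_bits = fls64(of_dma_get_max_cpu_address(NULL));

	zone_dma_bits = min(32U, dt_zone_dma_bits);
	arm64_dma_phys_limit = max_zone_phys(zone_dma_bits);
	max_zone_pfns[ZONE_DMA] = PFN_DOWN(arm64_dma_phys_limit);
#endif
#ifdef CONFIG_ZONE_DMA32
	max_zone_pfns[ZONE_DMA32] = PFN_DOWN(max_zone_phys(32));
#endif
	max_zone_pfns[ZONE_NORMAL] = max;

	free_area_init(max_zone_pfns);
}
```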
Nicolas Saenz Julienne
|
9804f8c69b |
arm64: mm: Move zone_dma_bits initialization into zone_sizes_init()
zone_dma_bits's initialization happens earlier than it's actually needed, in arm64_memblock_init(). So move it into the more suitable zone_sizes_init(). Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> Tested-by: Jeremy Linton <jeremy.linton@arm.com> Link: https://lore.kernel.org/r/20201119175400.9995-3-nsaenzjulienne@suse.de Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> |
||
Nicolas Saenz Julienne
|
0a30c53573 |
arm64: mm: Move reserve_crashkernel() into mem_init()
crashkernel might reserve memory located in ZONE_DMA. We plan to delay ZONE_DMA's initialization until after unflattening the devicetree and ACPI's boot table initialization, so move it later in the boot process, specifically into bootmem_init(), since request_standard_resources() depends on it. Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> Tested-by: Jeremy Linton <jeremy.linton@arm.com> Link: https://lore.kernel.org/r/20201119175400.9995-2-nsaenzjulienne@suse.de Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> |
||
Catalin Marinas
|
2687275a58 |
arm64: Force NO_BLOCK_MAPPINGS if crashkernel reservation is required
mem_init() currently relies on knowing the boundaries of the crashkernel reservation to map such region with page granularity for later unmapping via set_memory_valid(..., 0). If the crashkernel reservation is deferred, such boundaries are not known when the linear mapping is created. Simply parse the command line for "crashkernel" and, if found, create the linear map with NO_BLOCK_MAPPINGS. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> Reviewed-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> Acked-by: James Morse <james.morse@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> Link: https://lore.kernel.org/r/20201119175556.18681-1-catalin.marinas@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> |
||
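The command-line check the commit above describes boils down to an early_param hook plus a flag consulted when building the linear map; a sketch under those assumptions:

```c
static bool crash_mem_map __initdata;

static int __init enable_crash_mem_map(char *arg)
{
	/*
	 * Proper parsing is still done by reserve_crashkernel(); here we only
	 * need to know that a reservation will exist, so the linear map can
	 * avoid block mappings and be unmapped at page granularity later.
	 */
	crash_mem_map = true;
	return 0;
}
early_param("crashkernel", enable_crash_mem_map);

/* ...and in map_mem() (sketch):
 *	if (rodata_full || crash_mem_map || debug_pagealloc_enabled())
 *		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 */
```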
Catalin Marinas
|
791ab8b2e3 |
arm64: Ignore any DMA offsets in the max_zone_phys() calculation
Currently, the kernel assumes that if RAM starts above 32-bit (or zone_bits), there is still a ZONE_DMA/DMA32 at the bottom of the RAM and such constrained devices have a hardwired DMA offset. In practice, we haven't noticed any such hardware so let's assume that we can expand ZONE_DMA32 to the available memory if no RAM below 4GB. Similarly, ZONE_DMA is expanded to the 4GB limit if no RAM addressable by zone_bits. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> Reviewed-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> Cc: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> Cc: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/20201118185809.1078362-1-catalin.marinas@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> |
||
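The calculation described in the commit above can be sketched as follows; this is close to, but not necessarily identical with, the upstream helper.

```c
static phys_addr_t __init max_zone_phys(unsigned int zone_bits)
{
	phys_addr_t zone_mask = DMA_BIT_MASK(zone_bits);
	phys_addr_t phys_start = memblock_start_of_DRAM();

	if (phys_start > U32_MAX)
		zone_mask = PHYS_ADDR_MAX;	/* no RAM below 4GB: no constrained zone */
	else if (phys_start > zone_mask)
		zone_mask = U32_MAX;		/* no RAM below zone_bits: expand to 4GB */

	return min(zone_mask, memblock_end_of_DRAM() - 1) + 1;
}
```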
Sudarshan Rajagopalan
|
9f84f39f55 |
arm64/mm: add fallback option to allocate virtually contiguous memory
When section mappings are enabled, we allocate vmemmap pages from physically continuous memory of size PMD_SIZE using vmemmap_alloc_block_buf(). Section mappings are good to reduce TLB pressure. But when system is highly fragmented and memory blocks are being hot-added at runtime, its possible that such physically continuous memory allocations can fail. Rather than failing the memory hot-add procedure, add a fallback option to allocate vmemmap pages from discontinuous pages using vmemmap_populate_basepages(). Signed-off-by: Sudarshan Rajagopalan <sudaraja@codeaurora.org> Reviewed-by: Gavin Shan <gshan@redhat.com> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Acked-by: Will Deacon <will@kernel.org> Cc: Will Deacon <will@kernel.org> Cc: Anshuman Khandual <anshuman.khandual@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Logan Gunthorpe <logang@deltatee.com> Cc: David Hildenbrand <david@redhat.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Steven Price <steven.price@arm.com> Link: https://lore.kernel.org/r/d6c06f2ef39bbe6c715b2f6db76eb16155fdcee6.1602722808.git.sudaraja@codeaurora.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> |
||
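Inside arm64's vmemmap_populate(), the fallback amounts to something like the fragment below, simplified from the surrounding per-PMD loop; variable names are assumptions for illustration.

```c
/* try a PMD-sized, physically contiguous buffer first */
void *p = vmemmap_alloc_block_buf(PMD_SIZE, node, altmap);

if (p) {
	pmd_set_huge(pmdp, __pa(p), __pgprot(PROT_SECT_NORMAL));
} else if (vmemmap_populate_basepages(addr, next, node, altmap)) {
	/* discontiguous base pages also failed */
	return -ENOMEM;
}
```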
Ard Biesheuvel
|
e2a073dde9 |
arm64: omit [_text, _stext) from permanent kernel mapping
In a previous patch, we increased the size of the EFI PE/COFF header to 64 KB, which resulted in the _stext symbol to appear at a fixed offset of 64 KB into the image. Since 64 KB is also the largest page size we support, this completely removes the need to map the first 64 KB of the kernel image, given that it only contains the arm64 Image header and the EFI header, neither of which we ever access again after booting the kernel. More importantly, we should avoid an executable mapping of non-executable and not entirely predictable data, to deal with the unlikely event that we inadvertently emitted something that looks like an opcode that could be used as a gadget for speculative execution. So let's limit the kernel mapping of .text to the [_stext, _etext) region, which matches the view of generic code (such as kallsyms) when it reasons about the boundaries of the kernel's .text section. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201117124729.12642-2-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> |
||
Anshuman Khandual
|
58284a901b |
arm64/mm: Validate hotplug range before creating linear mapping
During the memory hotplug process, the linear mapping should not be created for
a given memory range if that would fall outside the maximum allowed linear
range. Else it might cause memory corruption in the kernel virtual space.
Maximum linear mapping region is [PAGE_OFFSET..(PAGE_END -1)] accommodating
both its ends but excluding PAGE_END. Max physical range that can be mapped
inside this linear mapping range, must also be derived from its end points.
This ensures that arch_add_memory() validates memory hot add range for its
potential linear mapping requirements, before creating it with
__create_pgd_mapping().
Fixes:
|
||
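A sketch of the validation the commit above adds in arch_add_memory(); the helper name and bounds expressions follow the commit message but are reproduced from memory rather than quoted from the patch.

```c
static bool inside_linear_region(u64 start, u64 size)
{
	/* [PAGE_OFFSET .. PAGE_END - 1], both ends included, PAGE_END excluded */
	return start >= __pa(_PAGE_OFFSET(vabits_actual)) &&
	       (start + size - 1) <= __pa(PAGE_END - 1);
}

int arch_add_memory(int nid, u64 start, u64 size, struct mhp_params *params)
{
	if (!inside_linear_region(start, size)) {
		pr_err("[%llx %llx] is outside linear mapping region\n",
		       start, start + size);
		return -EINVAL;
	}
	/* ... proceed with __create_pgd_mapping() and __add_pages() ... */
	return 0;
}
```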
Ard Biesheuvel
|
c1090bb10d |
arm64: mm: don't assume struct page is always 64 bytes
Commit |
||
Anshuman Khandual
|
fdd99a4103 |
arm64/mm/hotplug: Ensure early memory sections are all online
This adds a validation function that scans the entire boot memory and makes sure that all early memory sections are online. This check is essential for the memory notifier to work properly, as it cannot prevent any boot memory from offlining, if all sections are not online to begin with. Although the boot section scanning is selectively enabled with DEBUG_VM. Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Steve Capper <steve.capper@arm.com> Cc: Mark Brown <broonie@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Link: https://lore.kernel.org/r/1604896137-16644-4-git-send-email-anshuman.khandual@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> |
||
Anshuman Khandual
|
9fb3d4a303 |
arm64/mm/hotplug: Enable MEM_OFFLINE event handling
This enables MEM_OFFLINE memory event handling. It will help intercept any possible error condition such as if boot memory some how still got offlined even after an explicit notifier failure, potentially by a future change in generic hot plug framework. This would help detect such scenarios and help debug further. While here, also call out the first section being attempted for offline or got offlined. Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Reviewed-by: Gavin Shan <gshan@redhat.com> Cc: Will Deacon <will@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Steve Capper <steve.capper@arm.com> Cc: Mark Brown <broonie@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Link: https://lore.kernel.org/r/1604896137-16644-3-git-send-email-anshuman.khandual@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> |
||
Anshuman Khandual
|
cb45babe1b |
arm64/mm/hotplug: Register boot memory hot remove notifier earlier
This moves memory notifier registration earlier in the boot process from device_initcall() to early_initcall() which will help in guarding against potential early boot memory offline requests. Even though there should not be any actual offlinig requests till memory block devices are initialized with memory_dev_init() but then generic init sequence might just change in future. Hence an early registration for the memory event notifier would be helpful. While here, just skip the registration if CONFIG_MEMORY_HOTREMOVE is not enabled and also call out when memory notifier registration fails. Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Reviewed-by: Gavin Shan <gshan@redhat.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Steve Capper <steve.capper@arm.com> Cc: Mark Brown <broonie@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Link: https://lore.kernel.org/r/1604896137-16644-2-git-send-email-anshuman.khandual@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> |
||
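The registration change described above largely amounts to switching the initcall level; a sketch, assuming the notifier callback is defined elsewhere in the same file:

```c
static struct notifier_block prevent_bootmem_remove_nb = {
	.notifier_call = prevent_bootmem_remove_notifier,
};

static int __init prevent_bootmem_remove_init(void)
{
	int ret = 0;

	if (!IS_ENABLED(CONFIG_MEMORY_HOTREMOVE))
		return ret;

	ret = register_memory_notifier(&prevent_bootmem_remove_nb);
	if (ret)
		pr_err("%s: Notifier registration failed %d\n", __func__, ret);

	return ret;
}
early_initcall(prevent_bootmem_remove_init);	/* was device_initcall() */
```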
Ard Biesheuvel
|
97d6786e06 |
arm64: mm: account for hotplug memory when randomizing the linear region
As a hardening measure, we currently randomize the placement of physical memory inside the linear region when KASLR is in effect. Since the random offset at which to place the available physical memory inside the linear region is chosen early at boot, it is based on the memblock description of memory, which does not cover hotplug memory. The consequence of this is that the randomization offset may be chosen such that any hotplugged memory located above memblock_end_of_DRAM() that appears later is pushed off the end of the linear region, where it cannot be accessed. So let's limit this randomization of the linear region to ensure that this can no longer happen, by using the CPU's addressable PA range instead. As it is guaranteed that no hotpluggable memory will appear that falls outside of that range, we can safely put this PA range sized window anywhere in the linear region. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Cc: Anshuman Khandual <anshuman.khandual@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Steven Price <steven.price@arm.com> Cc: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/20201014081857.3288-1-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> |
||
Mark Rutland
|
833be850f1 |
arm64: consistently use reserved_pg_dir
Depending on configuration options and specific code paths, we either use the empty_zero_page or the configuration-dependent reserved_ttbr0 as a reserved value for TTBR{0,1}_EL1. To simplify this code, let's always allocate and use the same reserved_pg_dir, replacing reserved_ttbr0. Note that this is allocated (and hence pre-zeroed), and is also marked as read-only in the kernel Image mapping. Keeping this separate from the empty_zero_page potentially helps with robustness as the empty_zero_page is used in a number of cases where a failure to map it read-only could allow it to become corrupted. The (presently unused) swapper_pg_end symbol is also removed, and comments are added wherever we rely on the offsets between the pre-allocated pg_dirs to keep these cases easily identifiable. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201103102229.8542-1-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> |
||
Ard Biesheuvel
|
8c96400d6a |
arm64: mm: make vmemmap region a projection of the linear region
Now that we have reverted the introduction of the vmemmap struct page pointer and the separate physvirt_offset, we can simplify things further, and place the vmemmap region in the VA space in such a way that virtual to page translations and vice versa can be implemented using a single arithmetic shift. One happy coincidence resulting from this is that the 48-bit/4k and 52-bit/64k configurations (which are assumed to be the two most prevalent) end up with the same placement of the vmemmap region. In a subsequent patch, we will take advantage of this, and unify the memory maps even more. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Steve Capper <steve.capper@arm.com> Link: https://lore.kernel.org/r/20201008153602.9467-4-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> |
||
Ard Biesheuvel
|
f4693c2716 |
arm64: mm: extend linear region for 52-bit VA configurations
For historical reasons, the arm64 kernel VA space is configured as two equally sized halves, i.e., on a 48-bit VA build, the VA space is split into a 47-bit vmalloc region and a 47-bit linear region. When support for 52-bit virtual addressing was added, this equal split was kept, resulting in a substantial waste of virtual address space in the linear region: 48-bit VA 52-bit VA 0xffff_ffff_ffff_ffff +-------------+ +-------------+ | vmalloc | | vmalloc | 0xffff_8000_0000_0000 +-------------+ _PAGE_END(48) +-------------+ | linear | : : 0xffff_0000_0000_0000 +-------------+ : : : : : : : : : : : : : : : : : currently : : unusable : : : : : : unused : : by : : : : : : : : hardware : : : : : : : 0xfff8_0000_0000_0000 : : _PAGE_END(52) +-------------+ : : | | : : | | : : | | : : | | : : | | : unusable : | | : : | linear | : by : | | : : | region | : hardware : | | : : | | : : | | : : | | : : | | : : | | : : | | 0xfff0_0000_0000_0000 +-------------+ PAGE_OFFSET +-------------+ As illustrated above, the 52-bit VA kernel uses 47 bits for the vmalloc space (as before), to ensure that a single 64k granule kernel image can support any 64k granule capable system, regardless of whether it supports the 52-bit virtual addressing extension. However, due to the fact that the VA space is still split in equal halves, the linear region is only 2^51 bytes in size, wasting almost half of the 52-bit VA space. Let's fix this, by abandoning the equal split, and simply assigning all VA space outside of the vmalloc region to the linear region. The KASAN shadow region is reconfigured so that it ends at the start of the vmalloc region, and grows downwards. That way, the arrangement of the vmalloc space (which contains kernel mappings, modules, BPF region, the vmemmap array etc) is identical between non-KASAN and KASAN builds, which aids debugging. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Steve Capper <steve.capper@arm.com> Link: https://lore.kernel.org/r/20201008153602.9467-3-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> |
||
Rob Herring
|
96d389ca10 |
arm64: Add workaround for Arm Cortex-A77 erratum 1508412
On Cortex-A77 r0p0 and r1p0, a sequence of a non-cacheable or device load and a store exclusive or PAR_EL1 read can cause a deadlock. The workaround requires a DMB SY before and after a PAR_EL1 register read. In addition, it's possible an interrupt (doing a device read) or KVM guest exit could be taken between the DMB and PAR read, so we also need a DMB before returning from interrupt and before returning to a guest. A deadlock is still possible with the workaround as KVM guests must also have the workaround. IOW, a malicious guest can deadlock an affected systems. This workaround also depends on a firmware counterpart to enable the h/w to insert DMB SY after load and store exclusive instructions. See the errata document SDEN-1152370 v10 [1] for more information. [1] https://static.docs.arm.com/101992/0010/Arm_Cortex_A77_MP074_Software_Developer_Errata_Notice_v10.pdf Signed-off-by: Rob Herring <robh@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Julien Thierry <julien.thierry.kdev@gmail.com> Cc: kvmarm@lists.cs.columbia.edu Link: https://lore.kernel.org/r/20201028182839.166037-2-robh@kernel.org Signed-off-by: Will Deacon <will@kernel.org> |
||
Joe Perches
|
33def8498f |
treewide: Convert macro and uses of __section(foo) to __section("foo")
Use a more generic form for __section that requires quotes to avoid complications with clang and gcc differences. Remove the quote operator # from compiler_attributes.h __section macro. Convert all unquoted __section(foo) uses to quoted __section("foo"). Also convert __attribute__((section("foo"))) uses to __section("foo") even if the __attribute__ has multiple list entry forms. Conversion done using the script at: https://lore.kernel.org/lkml/75393e5ddc272dc7403de74d645e6c6e0f4e70eb.camel@perches.com/2-convert_section.pl Signed-off-by: Joe Perches <joe@perches.com> Reviewed-by: Nick Desaulniers <ndesaulniers@gooogle.com> Reviewed-by: Miguel Ojeda <ojeda@kernel.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> |
||
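The conversion pattern described above is mechanical; for a hypothetical variable the before/after looks like this:

```c
/* before (the old macro stringified its argument itself):
 *	static int example_flag __section(.init.data);
 */

/* after: quoted section name, accepted uniformly by gcc and clang */
static int example_flag __section(".init.data");
```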
Linus Torvalds
|
032c7ed958 |
More arm64 updates for 5.10
- Improve performance of Spectre-v2 mitigation on Falkor CPUs (if you're lucky enough to have one) - Select HAVE_MOVE_PMD. This has been shown to improve mremap() performance, which is used heavily by the Android runtime GC, and it seems we forgot to enable this upstream back in 2018. - Ensure linker flags are consistent between LLVM and BFD - Fix stale comment in Spectre mitigation rework - Fix broken copyright header - Fix KASLR randomisation of the linear map - Prevent arm64-specific prctl()s from compat tasks (return -EINVAL) -----BEGIN PGP SIGNATURE----- iQFEBAABCgAuFiEEPxTL6PPUbjXGY88ct6xw3ITBYzQFAl+QEPAQHHdpbGxAa2Vy bmVsLm9yZwAKCRC3rHDchMFjNE8jB/0YNYKO9mis/Xn5KcOCwlg4dbc2uVBknZXD f7otEJ6SOax2HcWz8qJlrJ+qbGFawPIqFBUAM0vU1VmoyctIoKRFTA8ACfWfWtnK QBfHrcxtJCh/GGq+E1IyuqWzCjppeY/7gYVdgi1xDEZRSaLz53MC1GVBwKBtu5cf X2Bfm8d9+PSSnmKfpO65wSCTvN3PQX1SNEHwwTWFZQx0p7GcQK1DdwoobM6dRnVy +e984ske+2a+nTrkhLSyQIgsfHuLB4pD6XdM/UOThnfdNxdQ0dUGn375sXP+b4dW 7MTH9HP/dXIymTcuErMXOHJXLk/zUiUBaOxkmOxdvrhQd0uFNFIc =e9p9 -----END PGP SIGNATURE----- Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux Pull more arm64 updates from Will Deacon: "A small selection of further arm64 fixes and updates. Most of these are fixes that came in during the merge window, with the exception of the HAVE_MOVE_PMD mremap() speed-up which we discussed back in 2018 and somehow forgot to enable upstream. - Improve performance of Spectre-v2 mitigation on Falkor CPUs (if you're lucky enough to have one) - Select HAVE_MOVE_PMD. This has been shown to improve mremap() performance, which is used heavily by the Android runtime GC, and it seems we forgot to enable this upstream back in 2018. - Ensure linker flags are consistent between LLVM and BFD - Fix stale comment in Spectre mitigation rework - Fix broken copyright header - Fix KASLR randomisation of the linear map - Prevent arm64-specific prctl()s from compat tasks (return -EINVAL)" Link: https://lore.kernel.org/kvmarm/20181108181201.88826-3-joelaf@google.com/ * tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: arm64: proton-pack: Update comment to reflect new function name arm64: spectre-v2: Favour CPU-specific mitigation at EL2 arm64: link with -z norelro regardless of CONFIG_RELOCATABLE arm64: Fix a broken copyright header in gen_vdso_offsets.sh arm64: mremap speedup - Enable HAVE_MOVE_PMD arm64: mm: use single quantity to represent the PA to VA translation arm64: reject prctl(PR_PAC_RESET_KEYS) on compat tasks |
||
Linus Torvalds
|
5a32c3413d |
dma-mapping updates for 5.10
- rework the non-coherent DMA allocator - move private definitions out of <linux/dma-mapping.h> - lower CMA_ALIGNMENT (Paul Cercueil) - remove the omap1 dma address translation in favor of the common code - make dma-direct aware of multiple dma offset ranges (Jim Quinlan) - support per-node DMA CMA areas (Barry Song) - increase the default seg boundary limit (Nicolin Chen) - misc fixes (Robin Murphy, Thomas Tai, Xu Wang) - various cleanups -----BEGIN PGP SIGNATURE----- iQI/BAABCgApFiEEgdbnc3r/njty3Iq9D55TZVIEUYMFAl+IiPwLHGhjaEBsc3Qu ZGUACgkQD55TZVIEUYPKEQ//TM8vxjucnRl/pklpMin49dJorwiVvROLhQqLmdxw 286ZKpVzYYAPc7LnNqwIBugnFZiXuHu8xPKQkIiOa2OtNDTwhKNoBxOAmOJaV6DD 8JfEtZYeX5mKJ/Nqd2iSkIqOvCwZ9Wzii+aytJ2U88wezQr1fnyF4X49MegETEey FHWreSaRWZKa0MMRu9AQ0QxmoNTHAQUNaPc0PeqEtPULybfkGOGw4/ghSB7WcKrA gtKTuooNOSpVEHkTas2TMpcBp6lxtOjFqKzVN0ml+/nqq5NeTSDx91VOCX/6Cj76 mXIg+s7fbACTk/BmkkwAkd0QEw4fo4tyD6Bep/5QNhvEoAriTuSRbhvLdOwFz0EF vhkF0Rer6umdhSK7nPd7SBqn8kAnP4vBbdmB68+nc3lmkqysLyE4VkgkdH/IYYQI 6TJ0oilXWFmU6DT5Rm4FBqCvfcEfU2dUIHJr5wZHqrF2kLzoZ+mpg42fADoG4GuI D/oOsz7soeaRe3eYfWybC0omGR6YYPozZJ9lsfftcElmwSsFrmPsbO1DM5IBkj1B gItmEbOB9ZK3RhIK55T/3u1UWY3Uc/RVr+kchWvADGrWnRQnW0kxYIqDgiOytLFi JZNH8uHpJIwzoJAv6XXSPyEUBwXTG+zK37Ce769HGbUEaUrE71MxBbQAQsK8mDpg 7fM= =Bkf/ -----END PGP SIGNATURE----- Merge tag 'dma-mapping-5.10' of git://git.infradead.org/users/hch/dma-mapping Pull dma-mapping updates from Christoph Hellwig: - rework the non-coherent DMA allocator - move private definitions out of <linux/dma-mapping.h> - lower CMA_ALIGNMENT (Paul Cercueil) - remove the omap1 dma address translation in favor of the common code - make dma-direct aware of multiple dma offset ranges (Jim Quinlan) - support per-node DMA CMA areas (Barry Song) - increase the default seg boundary limit (Nicolin Chen) - misc fixes (Robin Murphy, Thomas Tai, Xu Wang) - various cleanups * tag 'dma-mapping-5.10' of git://git.infradead.org/users/hch/dma-mapping: (63 commits) ARM/ixp4xx: add a missing include of dma-map-ops.h dma-direct: simplify the DMA_ATTR_NO_KERNEL_MAPPING handling dma-direct: factor out a dma_direct_alloc_from_pool helper dma-direct check for highmem pages in dma_direct_alloc_pages dma-mapping: merge <linux/dma-noncoherent.h> into <linux/dma-map-ops.h> dma-mapping: move large parts of <linux/dma-direct.h> to kernel/dma dma-mapping: move dma-debug.h to kernel/dma/ dma-mapping: remove <asm/dma-contiguous.h> dma-mapping: merge <linux/dma-contiguous.h> into <linux/dma-map-ops.h> dma-contiguous: remove dma_contiguous_set_default dma-contiguous: remove dev_set_cma_area dma-contiguous: remove dma_declare_contiguous dma-mapping: split <linux/dma-mapping.h> cma: decrease CMA_ALIGNMENT lower limit to 2 firewire-ohci: use dma_alloc_pages dma-iommu: implement ->alloc_noncoherent dma-mapping: add new {alloc,free}_noncoherent dma_map_ops methods dma-mapping: add a new dma_alloc_pages API dma-mapping: remove dma_cache_sync 53c700: convert to dma_alloc_noncoherent ... |
||
Ard Biesheuvel
|
7bc1a0f9e1 |
arm64: mm: use single quantity to represent the PA to VA translation
On arm64, the global variable memstart_addr represents the physical
address of PAGE_OFFSET, and so physical to virtual translations or
vice versa used to come down to simple additions or subtractions
involving the values of PAGE_OFFSET and memstart_addr.
When support for 52-bit virtual addressing was introduced, we had to
deal with PAGE_OFFSET potentially being outside of the region that
can be covered by the virtual range (as the 52-bit VA capable build
needs to be able to run on systems that are only 48-bit VA capable),
and for this reason, another translation was introduced, and recorded
in the global variable physvirt_offset.
However, if we go back to the original definition of memstart_addr,
i.e., the physical address of PAGE_OFFSET, it turns out that there is
no need for two separate translations: instead, we can simply subtract
the size of the unaddressable VA space from memstart_addr to make the
available physical memory appear in the 48-bit addressable VA region.
This simplifies things, but also fixes a bug on KASLR builds, which
may update memstart_addr later on in arm64_memblock_init(), but fails
to update vmemmap and physvirt_offset accordingly.
Fixes:
|
||
Mike Rapoport
|
cc6de16805 |
memblock: use separate iterators for memory and reserved regions
for_each_memblock() is used to iterate over memblock.memory in a few places that use data from memblock_region rather than the memory ranges. Introduce separate for_each_mem_region() and for_each_reserved_mem_region() to improve encapsulation of memblock internals from its users. Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Baoquan He <bhe@redhat.com> Acked-by: Ingo Molnar <mingo@kernel.org> [x86] Acked-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> [MIPS] Acked-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com> [.clang-format] Cc: Andy Lutomirski <luto@kernel.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Daniel Axtens <dja@axtens.net> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Emil Renner Berthing <kernel@esmil.dk> Cc: Hari Bathini <hbathini@linux.ibm.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Simek <monstr@monstr.eu> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Link: https://lkml.kernel.org/r/20200818151634.14343-18-rppt@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> |
||
Mike Rapoport
|
b10d6bca87 |
arch, drivers: replace for_each_membock() with for_each_mem_range()
There are several occurrences of the following pattern: for_each_memblock(memory, reg) { start = __pfn_to_phys(memblock_region_memory_base_pfn(reg); end = __pfn_to_phys(memblock_region_memory_end_pfn(reg)); /* do something with start and end */ } Using for_each_mem_range() iterator is more appropriate in such cases and allows simpler and cleaner code. [akpm@linux-foundation.org: fix arch/arm/mm/pmsa-v7.c build] [rppt@linux.ibm.com: mips: fix cavium-octeon build caused by memblock refactoring] Link: http://lkml.kernel.org/r/20200827124549.GD167163@linux.ibm.com Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Baoquan He <bhe@redhat.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Daniel Axtens <dja@axtens.net> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Emil Renner Berthing <kernel@esmil.dk> Cc: Hari Bathini <hbathini@linux.ibm.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Simek <monstr@monstr.eu> Cc: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Link: https://lkml.kernel.org/r/20200818151634.14343-13-rppt@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> |
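Note that the "before" pattern quoted above is missing a closing parenthesis after memblock_region_memory_base_pfn(reg). The "after" form with the reworked three-argument for_each_mem_range() looks roughly like this (an illustrative sketch, not a hunk from the patch):

    #include <linux/memblock.h>
    #include <linux/printk.h>

    static void __init example_walk_mem_ranges(void)
    {
        phys_addr_t start, end;
        u64 i;

        /* Iterate physical memory ranges directly, without the open-coded
         * PFN-to-phys conversions of the old pattern. */
        for_each_mem_range(i, &start, &end)
            pr_info("memory range: %pa..%pa\n", &start, &end);
    }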
||
Mike Rapoport
|
c9118e6c37 |
arch, mm: replace for_each_memblock() with for_each_mem_pfn_range()
There are several occurrences of the following pattern: for_each_memblock(memory, reg) { start_pfn = memblock_region_memory_base_pfn(reg); end_pfn = memblock_region_memory_end_pfn(reg); /* do something with start_pfn and end_pfn */ } Rather than iterate over all memblock.memory regions and each time query for their start and end PFNs, use for_each_mem_pfn_range() iterator to get simpler and clearer code. Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Baoquan He <bhe@redhat.com> Acked-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com> [.clang-format] Cc: Andy Lutomirski <luto@kernel.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Daniel Axtens <dja@axtens.net> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Emil Renner Berthing <kernel@esmil.dk> Cc: Hari Bathini <hbathini@linux.ibm.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Simek <monstr@monstr.eu> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Link: https://lkml.kernel.org/r/20200818151634.14343-12-rppt@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> |
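A hedged sketch of the replacement pattern, iterating PFN ranges and their node IDs directly (illustrative only):

    #include <linux/memblock.h>
    #include <linux/numa.h>
    #include <linux/printk.h>

    static void __init example_walk_pfn_ranges(void)
    {
        unsigned long start_pfn, end_pfn;
        int i, nid;

        /* MAX_NUMNODES means "all nodes"; nid reports each range's node. */
        for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid)
            pr_info("node %d: pfn %lu..%lu\n", nid, start_pfn, end_pfn);
    }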
||
Mike Rapoport
|
ab8f21aa8b |
arm64: numa: simplify dummy_numa_init()
dummy_numa_init() loops over memblock.memory and passes nid=0 to numa_add_memblk() which essentially wraps memblock_set_node(). However, memblock_set_node() can cope with entire memory span itself, so the loop over memblock.memory regions is redundant. Using a single call to memblock_set_node() rather than a loop also fixes an issue with a buggy ACPI firmware in which the SRAT table covers some but not all of the memory in the EFI memory map. Jonathan Cameron says: This issue can be easily triggered by having an SRAT table which fails to cover all elements of the EFI memory map. This firmware error is detected and a warning printed. e.g. "NUMA: Warning: invalid memblk node 64 [mem 0x240000000-0x27fffffff]" At that point we fall back to dummy_numa_init(). However, the failed ACPI init has left us with our memblocks all broken up as we split them when trying to assign them to NUMA nodes. We then iterate over the memblocks and add them to node 0. numa_add_memblk() calls memblock_set_node() which merges regions that were previously split up during the earlier attempt to add them to different nodes during parsing of SRAT. This means elements are moved in the memblock array and we can end up in a different memblock after the call to numa_add_memblk(). Result is: Unable to handle kernel paging request at virtual address 0000000000003a40 Mem abort info: ESR = 0x96000004 EC = 0x25: DABT (current EL), IL = 32 bits SET = 0, FnV = 0 EA = 0, S1PTW = 0 Data abort info: ISV = 0, ISS = 0x00000004 CM = 0, WnR = 0 [0000000000003a40] user address but active_mm is swapper Internal error: Oops: 96000004 [#1] PREEMPT SMP ... Call trace: sparse_init_nid+0x5c/0x2b0 sparse_init+0x138/0x170 bootmem_init+0x80/0xe0 setup_arch+0x2a0/0x5fc start_kernel+0x8c/0x648 Replace the loop with a single call to memblock_set_node() to the entire memory. Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Baoquan He <bhe@redhat.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Christoph Hellwig <hch@lst.de> Cc: Daniel Axtens <dja@axtens.net> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Emil Renner Berthing <kernel@esmil.dk> Cc: Hari Bathini <hbathini@linux.ibm.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Simek <monstr@monstr.eu> Cc: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Link: https://lkml.kernel.org/r/20200818151634.14343-5-rppt@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> |
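The shape of the fix, assigning all memory to node 0 in one call rather than looping over regions, is roughly as follows (a sketch; the exact arguments and surroundings in the real dummy_numa_init() may differ):

    #include <linux/memblock.h>
    #include <asm/numa.h>

    static int __init example_dummy_numa_init(void)
    {
        /* One call covers the whole DRAM span; numa_add_memblk() ends up in
         * memblock_set_node(), which copes with the entire range itself. */
        return numa_add_memblk(0, memblock_start_of_DRAM(),
                               memblock_end_of_DRAM());
    }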
||
Linus Torvalds
|
34eb62d868 |
Orphan link sections were a long-standing source of obscure bugs,
because the heuristics that various linkers & compilers use to handle them (include these bits into the output image vs discarding them silently) are both highly idiosyncratic and also version dependent. Instead of this historically problematic mess, this tree by Kees Cook (et al) adds build time asserts and build time warnings if there's any orphan section in the kernel or if a section is not sized as expected. And because we relied on so many silent assumptions in this area, fix a metric ton of dependencies and some outright bugs related to this, before we can finally enable the checks on the x86, ARM and ARM64 platforms. Signed-off-by: Ingo Molnar <mingo@kernel.org> -----BEGIN PGP SIGNATURE----- iQJFBAABCgAvFiEEBpT5eoXrXCwVQwEKEnMQ0APhK1gFAl+Edv4RHG1pbmdvQGtl cm5lbC5vcmcACgkQEnMQ0APhK1hiKBAApdJEOaK7hMc3013DYNctklIxEPJL2mFJ 11YJRIh4pUJTF0TE+EHT/D+rSIuRsyuoSmOQBQ61/wVSnyG067GjjVJRqh/eYaJ1 fDhJi2FuHOjXl+CiN0KxzBjjp+V4NhF7jHT59tpQSvfZeg7FjteoxfztxaCp5ek3 S3wHB3CC4c4jE3lfjHem1E9/PwT4kwPYx1c3gAUdEqJdjkihjX9fWusfjLeqW6/d Y5VkApi6bL9XiZUZj5l0dEIweLJJ86+PkKJqpo3spxxEak1LSn1MEix+lcJ8e1Kg sb/bEEivDcmFlFWOJnn0QLquCR0Cx5bz1pwsL0tuf0yAd4+sXX5IMuGUysZlEdKM BHL9h5HbevGF4BScwZwZH7lyEg7q67s5KnRu4hxy0Swfcj7y0oT/9lXqpbpZ2DqO Hd+bRRQKIbqnTMp0hcit9LfpLp93vj0dBlaV5ocAJJlu62u9VnwGG5HQuZ5giLUr kA1SLw63Y1wopFRxgFyER8les7eLsu0zxHeK44rRVlVnfI99OMTOgVNicmDFy3Fm AfcnfJG0BqBEJGQz5es34uQQKKBwFPtC9NztopI62KiwOspYYZyrO1BNxdOc6DlS mIHrmO89HMXuid5eolvLaFqUWirHoWO8TlycgZxUWVHc2txVPjAEU/axouU/dSSU w/6GpzAa+7g= =fXAw -----END PGP SIGNATURE----- Merge tag 'core-build-2020-10-12' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull orphan section checking from Ingo Molnar: "Orphan link sections were a long-standing source of obscure bugs, because the heuristics that various linkers & compilers use to handle them (include these bits into the output image vs discarding them silently) are both highly idiosyncratic and also version dependent. Instead of this historically problematic mess, this tree by Kees Cook (et al) adds build time asserts and build time warnings if there's any orphan section in the kernel or if a section is not sized as expected. And because we relied on so many silent assumptions in this area, fix a metric ton of dependencies and some outright bugs related to this, before we can finally enable the checks on the x86, ARM and ARM64 platforms" * tag 'core-build-2020-10-12' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (36 commits) x86/boot/compressed: Warn on orphan section placement x86/build: Warn on orphan section placement arm/boot: Warn on orphan section placement arm/build: Warn on orphan section placement arm64/build: Warn on orphan section placement x86/boot/compressed: Add missing debugging sections to output x86/boot/compressed: Remove, discard, or assert for unwanted sections x86/boot/compressed: Reorganize zero-size section asserts x86/build: Add asserts for unwanted sections x86/build: Enforce an empty .got.plt section x86/asm: Avoid generating unused kprobe sections arm/boot: Handle all sections explicitly arm/build: Assert for unwanted sections arm/build: Add missing sections arm/build: Explicitly keep .ARM.attributes sections arm/build: Refactor linker script headers arm64/build: Assert for unwanted sections arm64/build: Add missing DWARF sections arm64/build: Use common DISCARDS in linker script arm64/build: Remove .eh_frame* sections due to unwind tables ... |
||
Christoph Hellwig
|
9f4df96b87 |
dma-mapping: merge <linux/dma-noncoherent.h> into <linux/dma-map-ops.h>
Move more nitty gritty DMA implementation details into the common internal header. Signed-off-by: Christoph Hellwig <hch@lst.de> |
||
Christoph Hellwig
|
0b1abd1fb7 |
dma-mapping: merge <linux/dma-contiguous.h> into <linux/dma-map-ops.h>
Merge dma-contiguous.h into dma-map-ops.h, after moving the comment describing the contiguous allocator into kernel/dma/contiguous.c. Signed-off-by: Christoph Hellwig <hch@lst.de> |
||
Christoph Hellwig
|
0a0f0d8be7 |
dma-mapping: split <linux/dma-mapping.h>
Split out all the bits that are purely for dma_map_ops implementations and related code into a new <linux/dma-map-ops.h> header so that they don't get pulled into all the drivers. That also means the architecture specific <asm/dma-mapping.h> is not pulled in by <linux/dma-mapping.h> any more, which leads to missing includes that were previously pulled in by the x86 or arm versions in a few not overly portable drivers. Signed-off-by: Christoph Hellwig <hch@lst.de> |
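A minimal sketch of where the split leaves things for implementers, assuming the post-split headers (illustrative, not code from the series): regular drivers keep including <linux/dma-mapping.h> for the dma_map_*()/dma_alloc_*() consumer API, while code providing a struct dma_map_ops includes the new internal header:

    #include <linux/dma-map-ops.h>

    /* Placeholder allocator: a real implementation would return a kernel
     * mapping and fill *dma_handle. */
    static void *example_alloc(struct device *dev, size_t size,
                               dma_addr_t *dma_handle, gfp_t gfp,
                               unsigned long attrs)
    {
        return NULL;
    }

    static const struct dma_map_ops example_dma_ops = {
        .alloc = example_alloc,
    };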
||
Will Deacon
|
baab853229 |
Merge branch 'for-next/mte' into for-next/core
Add userspace support for the Memory Tagging Extension introduced by Armv8.5. (Catalin Marinas and others) * for-next/mte: (30 commits) arm64: mte: Fix typo in memory tagging ABI documentation arm64: mte: Add Memory Tagging Extension documentation arm64: mte: Kconfig entry arm64: mte: Save tags when hibernating arm64: mte: Enable swap of tagged pages mm: Add arch hooks for saving/restoring tags fs: Handle intra-page faults in copy_mount_options() arm64: mte: ptrace: Add NT_ARM_TAGGED_ADDR_CTRL regset arm64: mte: ptrace: Add PTRACE_{PEEK,POKE}MTETAGS support arm64: mte: Allow {set,get}_tagged_addr_ctrl() on non-current tasks arm64: mte: Restore the GCR_EL1 register after a suspend arm64: mte: Allow user control of the generated random tags via prctl() arm64: mte: Allow user control of the tag check mode via prctl() mm: Allow arm64 mmap(PROT_MTE) on RAM-based files arm64: mte: Validate the PROT_MTE request via arch_validate_flags() mm: Introduce arch_validate_flags() arm64: mte: Add PROT_MTE support to mmap() and mprotect() mm: Introduce arch_calc_vm_flag_bits() arm64: mte: Tags-aware aware memcmp_pages() implementation arm64: Avoid unnecessary clear_user_page() indirection ... |
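On the userspace side, the interface listed above boils down to a prctl() selecting the tag-check mode plus PROT_MTE mappings. A hypothetical example (arm64 with MTE only; the PR_MTE_* constants require sufficiently new uapi headers, and the PROT_MTE fallback value below is the arm64 one):

    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/prctl.h>
    #include <linux/prctl.h>

    #ifndef PROT_MTE
    #define PROT_MTE 0x20          /* arm64 value, normally from <asm/mman.h> */
    #endif

    int main(void)
    {
        /* Enable tagged addresses with synchronous tag-check faults. */
        if (prctl(PR_SET_TAGGED_ADDR_CTRL,
                  PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC, 0, 0, 0)) {
            perror("PR_SET_TAGGED_ADDR_CTRL");
            return 1;
        }

        /* Map an anonymous buffer with memory tagging enabled. */
        void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_MTE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) {
            perror("mmap(PROT_MTE)");
            return 1;
        }
        return 0;
    }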
||
Will Deacon
|
57b8b1b435 |
Merge branches 'for-next/acpi', 'for-next/boot', 'for-next/bpf', 'for-next/cpuinfo', 'for-next/fpsimd', 'for-next/misc', 'for-next/mm', 'for-next/pci', 'for-next/perf', 'for-next/ptrauth', 'for-next/sdei', 'for-next/selftests', 'for-next/stacktrace', 'for-next/svm', 'for-next/topology', 'for-next/tpyos' and 'for-next/vdso' into for-next/core
Remove unused functions and parameters from ACPI IORT code. (Zenghui Yu via Lorenzo Pieralisi) * for-next/acpi: ACPI/IORT: Remove the unused inline functions ACPI/IORT: Drop the unused @ops of iort_add_device_replay() Remove redundant code and fix documentation of caching behaviour for the HVC_SOFT_RESTART hypercall. (Pingfan Liu) * for-next/boot: Documentation/kvm/arm: improve description of HVC_SOFT_RESTART arm64/relocate_kernel: remove redundant code Improve reporting of unexpected kernel traps due to BPF JIT failure. (Will Deacon) * for-next/bpf: arm64: Improve diagnostics when trapping BRK with FAULT_BRK_IMM Improve robustness of user-visible HWCAP strings and their corresponding numerical constants. (Anshuman Khandual) * for-next/cpuinfo: arm64/cpuinfo: Define HWCAP name arrays per their actual bit definitions Cleanups to handling of SVE and FPSIMD register state in preparation for potential future optimisation of handling across syscalls. (Julien Grall) * for-next/fpsimd: arm64/sve: Implement a helper to load SVE registers from FPSIMD state arm64/sve: Implement a helper to flush SVE registers arm64/fpsimdmacros: Allow the macro "for" to be used in more cases arm64/fpsimdmacros: Introduce a macro to update ZCR_EL1.LEN arm64/signal: Update the comment in preserve_sve_context arm64/fpsimd: Update documentation of do_sve_acc Miscellaneous changes. (Tian Tao and others) * for-next/misc: arm64/mm: return cpu_all_mask when node is NUMA_NO_NODE arm64: mm: Fix missing-prototypes in pageattr.c arm64/fpsimd: Fix missing-prototypes in fpsimd.c arm64: hibernate: Remove unused including <linux/version.h> arm64/mm: Refactor {pgd, pud, pmd, pte}_ERROR() arm64: Remove the unused include statements arm64: get rid of TEXT_OFFSET arm64: traps: Add str of description to panic() in die() Memory management updates and cleanups. (Anshuman Khandual and others) * for-next/mm: arm64: dbm: Invalidate local TLB when setting TCR_EL1.HD arm64: mm: Make flush_tlb_fix_spurious_fault() a no-op arm64/mm: Unify CONT_PMD_SHIFT arm64/mm: Unify CONT_PTE_SHIFT arm64/mm: Remove CONT_RANGE_OFFSET arm64/mm: Enable THP migration arm64/mm: Change THP helpers to comply with generic MM semantics arm64/mm/ptdump: Add address markers for BPF regions Allow prefetchable PCI BARs to be exposed to userspace using normal non-cacheable mappings. (Clint Sbisa) * for-next/pci: arm64: Enable PCI write-combine resources under sysfs Perf/PMU driver updates. (Julien Thierry and others) * for-next/perf: perf: arm-cmn: Fix conversion specifiers for node type perf: arm-cmn: Fix unsigned comparison to less than zero arm_pmu: arm64: Use NMIs for PMU arm_pmu: Introduce pmu_irq_ops KVM: arm64: pmu: Make overflow handler NMI safe arm64: perf: Defer irq_work to IPI_IRQ_WORK arm64: perf: Remove PMU locking arm64: perf: Avoid PMXEV* indirection arm64: perf: Add missing ISB in armv8pmu_enable_counter() perf: Add Arm CMN-600 PMU driver perf: Add Arm CMN-600 DT binding arm64: perf: Add support caps under sysfs drivers/perf: thunderx2_pmu: Fix memory resource error handling drivers/perf: xgene_pmu: Fix uninitialized resource struct perf: arm_dsu: Support DSU ACPI devices arm64: perf: Remove unnecessary event_idx check drivers/perf: hisi: Add missing include of linux/module.h arm64: perf: Add general hardware LLC events for PMUv3 Support for the Armv8.3 Pointer Authentication enhancements. 
(By Amit Daniel Kachhap) * for-next/ptrauth: arm64: kprobe: clarify the comment of steppable hint instructions arm64: kprobe: disable probe of fault prone ptrauth instruction arm64: cpufeature: Modify address authentication cpufeature to exact arm64: ptrauth: Introduce Armv8.3 pointer authentication enhancements arm64: traps: Allow force_signal_inject to pass esr error code arm64: kprobe: add checks for ARMv8.3-PAuth combined instructions Tonnes of cleanup to the SDEI driver. (Gavin Shan) * for-next/sdei: firmware: arm_sdei: Remove _sdei_event_unregister() firmware: arm_sdei: Remove _sdei_event_register() firmware: arm_sdei: Introduce sdei_do_local_call() firmware: arm_sdei: Cleanup on cross call function firmware: arm_sdei: Remove while loop in sdei_event_unregister() firmware: arm_sdei: Remove while loop in sdei_event_register() firmware: arm_sdei: Remove redundant error message in sdei_probe() firmware: arm_sdei: Remove duplicate check in sdei_get_conduit() firmware: arm_sdei: Unregister driver on error in sdei_init() firmware: arm_sdei: Avoid nested statements in sdei_init() firmware: arm_sdei: Retrieve event number from event instance firmware: arm_sdei: Common block for failing path in sdei_event_create() firmware: arm_sdei: Remove sdei_is_err() Selftests for Pointer Authentication and FPSIMD/SVE context-switching. (Mark Brown and Boyan Karatotev) * for-next/selftests: selftests: arm64: Add build and documentation for FP tests selftests: arm64: Add wrapper scripts for stress tests selftests: arm64: Add utility to set SVE vector lengths selftests: arm64: Add stress tests for FPSMID and SVE context switching selftests: arm64: Add test for the SVE ptrace interface selftests: arm64: Test case for enumeration of SVE vector lengths kselftests/arm64: add PAuth tests for single threaded consistency and differently initialized keys kselftests/arm64: add PAuth test for whether exec() changes keys kselftests/arm64: add nop checks for PAuth tests kselftests/arm64: add a basic Pointer Authentication test Implementation of ARCH_STACKWALK for unwinding. (Mark Brown) * for-next/stacktrace: arm64: Move console stack display code to stacktrace.c arm64: stacktrace: Convert to ARCH_STACKWALK arm64: stacktrace: Make stack walk callback consistent with generic code stacktrace: Remove reliable argument from arch_stack_walk() callback Support for ASID pinning, which is required when sharing page-tables with the SMMU. (Jean-Philippe Brucker) * for-next/svm: arm64: cpufeature: Export symbol read_sanitised_ftr_reg() arm64: mm: Pin down ASIDs for sharing mm with devices Rely on firmware tables for establishing CPU topology. (Valentin Schneider) * for-next/topology: arm64: topology: Stop using MPIDR for topology information Spelling fixes. (Xiaoming Ni and Yanfei Xu) * for-next/tpyos: arm64/numa: Fix a typo in comment of arm64_numa_init arm64: fix some spelling mistakes in the comments by codespell vDSO cleanups. (Will Deacon) * for-next/vdso: arm64: vdso: Fix unusual formatting in *setup_additional_pages() arm64: vdso32: Remove a bunch of #ifdef CONFIG_COMPAT_VDSO guards |
||
Will Deacon
|
6a1bdb173f |
arm64: mm: Make flush_tlb_fix_spurious_fault() a no-op
Our use of broadcast TLB maintenance means that spurious page-faults that have been handled already by another CPU do not require additional TLB maintenance. Make flush_tlb_fix_spurious_fault() a no-op and rely on the existing TLB invalidation instead. Add an explicit flush_tlb_page() when making a page dirty, as the TLB is permitted to cache the old read-only entry. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20200728092220.GA21800@willie-the-truck Signed-off-by: Will Deacon <will@kernel.org> |