Merge tag 'dma-mapping-5.20-2022-08-06' of git://git.infradead.org/users/hch/dma-mapping
Pull dma-mapping updates from Christoph Hellwig:
- convert arm32 to the common dma-direct code (Arnd Bergmann, Robin
Murphy, Christoph Hellwig)
- restructure the PCIe peer to peer mapping support (Logan Gunthorpe)
- allow the IOMMU code to communicate an optional DMA mapping length
and use that in scsi and libata (John Garry)
- split the global swiotlb lock (Tianyu Lan)
- various fixes and cleanup (Chao Gao, Dan Carpenter, Dongli Zhang,
Lukas Bulwahn, Robin Murphy)
* tag 'dma-mapping-5.20-2022-08-06' of git://git.infradead.org/users/hch/dma-mapping: (45 commits)
swiotlb: fix passing local variable to debugfs_create_ulong()
dma-mapping: reformat comment to suppress htmldoc warning
PCI/P2PDMA: Remove pci_p2pdma_[un]map_sg()
RDMA/rw: drop pci_p2pdma_[un]map_sg()
RDMA/core: introduce ib_dma_pci_p2p_dma_supported()
nvme-pci: convert to using dma_map_sgtable()
nvme-pci: check DMA ops when indicating support for PCI P2PDMA
iommu/dma: support PCI P2PDMA pages in dma-iommu map_sg
iommu: Explicitly skip bus address marked segments in __iommu_map_sg()
dma-mapping: add flags to dma_map_ops to indicate PCI P2PDMA support
dma-direct: support PCI P2PDMA pages in dma-direct map_sg
dma-mapping: allow EREMOTEIO return code for P2PDMA transfers
PCI/P2PDMA: Introduce helpers for dma_map_sg implementations
PCI/P2PDMA: Attempt to set map_type if it has not been set
lib/scatterlist: add flag for indicating P2PDMA segments in an SGL
swiotlb: clean up some coding style and minor issues
dma-mapping: update comment after dmabounce removal
scsi: sd: Add a comment about limiting max_sectors to shost optimal limit
ata: libata-scsi: cap ata_device->max_sectors according to shost->max_sectors
scsi: scsi_transport_sas: cap shost opt_sectors according to DMA optimal limit
...
Merge tag 'mm-stable-2022-08-03' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Pull MM updates from Andrew Morton:
"Most of the MM queue. A few things are still pending.
Liam's maple tree rework didn't make it. This has resulted in a few
other minor patch series being held over for next time.
Multi-gen LRU still isn't merged as we were waiting for mapletree to
stabilize. The current plan is to merge MGLRU into -mm soon and to
later reintroduce mapletree, with a view to hopefully getting both
into 6.1-rc1.
Summary:
- The usual batches of cleanups from Baoquan He, Muchun Song, Miaohe
Lin, Yang Shi, Anshuman Khandual and Mike Rapoport
- Some kmemleak fixes from Patrick Wang and Waiman Long
- DAMON updates from SeongJae Park
- memcg debug/visibility work from Roman Gushchin
- vmalloc speedup from Uladzislau Rezki
- more folio conversion work from Matthew Wilcox
- enhancements for coherent device memory mapping from Alex Sierra
- addition of shared pages tracking and CoW support for fsdax, from
Shiyang Ruan
- hugetlb optimizations from Mike Kravetz
- Mel Gorman has contributed some pagealloc changes to improve
latency and realtime behaviour.
- mprotect soft-dirty checking has been improved by Peter Xu
- Many other singleton patches all over the place"
[ XFS merge from hell as per Darrick Wong in
https://lore.kernel.org/all/YshKnxb4VwXycPO8@magnolia/ ]
* tag 'mm-stable-2022-08-03' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (282 commits)
tools/testing/selftests/vm/hmm-tests.c: fix build
mm: Kconfig: fix typo
mm: memory-failure: convert to pr_fmt()
mm: use is_zone_movable_page() helper
hugetlbfs: fix inaccurate comment in hugetlbfs_statfs()
hugetlbfs: cleanup some comments in inode.c
hugetlbfs: remove unneeded header file
hugetlbfs: remove unneeded hugetlbfs_ops forward declaration
hugetlbfs: use helper macro SZ_1{K,M}
mm: cleanup is_highmem()
mm/hmm: add a test for cross device private faults
selftests: add soft-dirty into run_vmtests.sh
selftests: soft-dirty: add test for mprotect
mm/mprotect: fix soft-dirty check in can_change_pte_writable()
mm: memcontrol: fix potential oom_lock recursion deadlock
mm/gup.c: fix formatting in check_and_migrate_movable_page()
xfs: fail dax mount if reflink is enabled on a partition
mm/memcontrol.c: remove the redundant updating of stats_flush_threshold
userfaultfd: don't fail on unrecognized features
hugetlb_cgroup: fix wrong hugetlb cgroup numa stat
...
Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm
Pull ARM updates from Russell King:
"Not much this time around, the 5.20-rc1 development updates for arm
are:
- add KASAN support for vmalloc space on arm
- some sparse fixes from Ben Dooks
- rework amba device handling (so device addition isn't deferred)"
* tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm:
ARM: 9220/1: amba: Remove deferred device addition
ARM: 9219/1: fix undeclared soft_restart
ARM: 9218/1: dma-mapping: fix pointer/integer warning
ARM: 9217/1: add definition of arch_irq_work_raise()
ARM: 9203/1: kconfig: fix MODULE_PLTS for KASAN with KASAN_VMALLOC
ARM: 9202/1: kasan: support CONFIG_KASAN_VMALLOC
Merge tag 'spdx-6.0-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/spdx
Pull SPDX updates from Greg KH:
"Here is the set of SPDX comment updates for 6.0-rc1.
Nothing huge here, just a number of updated SPDX license tags and
cleanups based on the review of a number of common patterns in GPLv2
boilerplate text.
Also included in here are a few other minor updates, two USB files,
and one Documentation file update to get the SPDX lines correct.
All of these have been in the linux-next tree for a very long time"
* tag 'spdx-6.0-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/spdx: (28 commits)
Documentation: samsung-s3c24xx: Add blank line after SPDX directive
x86/crypto: Remove stray comment terminator
treewide: Replace GPLv2 boilerplate/reference with SPDX - gpl-2.0_406.RULE
treewide: Replace GPLv2 boilerplate/reference with SPDX - gpl-2.0_398.RULE
treewide: Replace GPLv2 boilerplate/reference with SPDX - gpl-2.0_391.RULE
treewide: Replace GPLv2 boilerplate/reference with SPDX - gpl-2.0_390.RULE
treewide: Replace GPLv2 boilerplate/reference with SPDX - gpl-2.0_385.RULE
treewide: Replace GPLv2 boilerplate/reference with SPDX - gpl-2.0_320.RULE
treewide: Replace GPLv2 boilerplate/reference with SPDX - gpl-2.0_319.RULE
treewide: Replace GPLv2 boilerplate/reference with SPDX - gpl-2.0_318.RULE
treewide: Replace GPLv2 boilerplate/reference with SPDX - gpl-2.0_298.RULE
treewide: Replace GPLv2 boilerplate/reference with SPDX - gpl-2.0_292.RULE
treewide: Replace GPLv2 boilerplate/reference with SPDX - gpl-2.0_179.RULE
treewide: Replace GPLv2 boilerplate/reference with SPDX - gpl-2.0_168.RULE (part 2)
treewide: Replace GPLv2 boilerplate/reference with SPDX - gpl-2.0_168.RULE (part 1)
treewide: Replace GPLv2 boilerplate/reference with SPDX - gpl-2.0_160.RULE
treewide: Replace GPLv2 boilerplate/reference with SPDX - gpl-2.0_152.RULE
treewide: Replace GPLv2 boilerplate/reference with SPDX - gpl-2.0_149.RULE
treewide: Replace GPLv2 boilerplate/reference with SPDX - gpl-2.0_147.RULE
treewide: Replace GPLv2 boilerplate/reference with SPDX - gpl-2.0_133.RULE
...
Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 updates from Will Deacon:
"Highlights include a major rework of our kPTI page-table rewriting
code (which makes it both more maintainable and considerably faster in
the cases where it is required) as well as significant changes to our
early boot code to reduce the need for data cache maintenance and
greatly simplify the KASLR relocation dance.
Summary:
- Remove unused generic cpuidle support (replaced by PSCI version)
- Fix documentation describing the kernel virtual address space
- Handling of some new CPU errata in Arm implementations
- Rework of our exception table code in preparation for handling
machine checks (i.e. RAS errors) more gracefully
- Switch over to the generic implementation of ioremap()
- Fix lockdep tracking in NMI context
- Instrument our memory barrier macros for KCSAN
- Rework of the kPTI G->nG page-table repainting so that the MMU
remains enabled and the boot time is no longer slowed to a crawl
for systems which require the late remapping
- Enable support for direct swapping of 2MiB transparent huge-pages
on systems without MTE
- Fix handling of MTE tags with allocating new pages with HW KASAN
- Expose the SMIDR register to userspace via sysfs
- Continued rework of the stack unwinder, particularly improving the
behaviour under KASAN
- More repainting of our system register definitions to match the
architectural terminology
- Improvements to the layout of the vDSO objects
- Support for allocating additional bits of HWCAP2 and exposing
FEAT_EBF16 to userspace on CPUs that support it
- Considerable rework and optimisation of our early boot code to
reduce the need for cache maintenance and avoid jumping in and out
of the kernel when handling relocation under KASLR
- Support for disabling SVE and SME support on the kernel
command-line
- Support for the Hisilicon HNS3 PMU
- Miscellanous cleanups, trivial updates and minor fixes"
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (136 commits)
arm64: Delay initialisation of cpuinfo_arm64::reg_{zcr,smcr}
arm64: fix KASAN_INLINE
arm64/hwcap: Support FEAT_EBF16
arm64/cpufeature: Store elf_hwcaps as a bitmap rather than unsigned long
arm64/hwcap: Document allocation of upper bits of AT_HWCAP
arm64: enable THP_SWAP for arm64
arm64/mm: use GENMASK_ULL for TTBR_BADDR_MASK_52
arm64: errata: Remove AES hwcap for COMPAT tasks
arm64: numa: Don't check node against MAX_NUMNODES
drivers/perf: arm_spe: Fix consistency of SYS_PMSCR_EL1.CX
perf: RISC-V: Add of_node_put() when breaking out of for_each_of_cpu_node()
docs: perf: Include hns3-pmu.rst in toctree to fix 'htmldocs' WARNING
arm64: kasan: Revert "arm64: mte: reset the page tag in page->flags"
mm: kasan: Skip page unpoisoning only if __GFP_SKIP_KASAN_UNPOISON
mm: kasan: Skip unpoisoning of user pages
mm: kasan: Ensure the tags are visible before the tag in page->flags
drivers/perf: hisi: add driver for HNS3 PMU
drivers/perf: hisi: Add description for HNS3 PMU driver
drivers/perf: riscv_pmu_sbi: perf format
perf/arm-cci: Use the bitmap API to allocate bitmaps
...
Fix a pointer assignment from an integer where false is being
used instead of NULL. Fix the following sparse warning by
using NULL:
arch/arm/mm/dma-mapping.c:712:52: warning: Using plain integer as NULL pointer
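For illustration only (this is not the actual hunk, and the variable
name is made up), the kind of change sparse is asking for looks like:

  /* before: sparse warns about using a plain integer as a NULL pointer */
  void *vaddr = false;

  /* after: use NULL for pointer values */
  void *vaddr = NULL;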
Signed-off-by: Ben Dooks <ben-linux@fluff.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Simply map the shadow of the vmalloc area on demand.
Since the vmalloc virtual addresses on Arm also lie between
MODULE_VADDR and 0x100000000 (ZONE_HIGHMEM), their shadow addresses
are already contained between KASAN_SHADOW_START and
KASAN_SHADOW_END, so nothing needs to change in the Arm memory map.
This fixes ARM_MODULE_PLTS with KASan, and adds KASan support for
highmem and for CONFIG_VMAP_STACK.
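For reference, the generic KASAN shadow translation (roughly as in
include/linux/kasan.h) shows why a fixed shadow window can already cover
those addresses: every 8 bytes of kernel address space map to one shadow
byte at a constant offset.

  static inline void *kasan_mem_to_shadow(const void *addr)
  {
          return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
                  + KASAN_SHADOW_OFFSET;
  }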
Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Commit e3217540c2 ("ARM/dma-mapping: remove dmabounce") removes the
config DMABOUNCE. A comment on the function __dma_page_cpu_to_dev() still
refers to this removed config DMABOUNCE.
Remove the obsolete explanation, but keep the recommendation not to use
__dma_page_cpu_to_dev() and use dma_sync_* functions instead.
Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
This enables ARCH_HAS_VM_GET_PAGE_PROT on the platform and exports the
standard vm_get_page_prot() implementation via DECLARE_VM_GET_PAGE_PROT,
which looks up a private and static protection_map[] array. Subsequently,
all __SXXX and __PXXX macros, which are no longer needed, can be dropped.
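Roughly, the generic macro expands to a lookup of that array (a sketch,
not a verbatim copy of include/linux/pgtable.h):

  #define DECLARE_VM_GET_PAGE_PROT                                      \
  pgprot_t vm_get_page_prot(unsigned long vm_flags)                     \
  {                                                                     \
          return protection_map[vm_flags &                              \
                  (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)];          \
  }                                                                     \
  EXPORT_SYMBOL(vm_get_page_prot);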
Link: https://lkml.kernel.org/r/20220711070600.2378316-24-anshuman.khandual@arm.com
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Brian Cain <bcain@quicinc.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Chris Zankel <chris@zankel.net>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Dinh Nguyen <dinguyen@kernel.org>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Guo Ren <guoren@kernel.org>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Huacai Chen <chenhuacai@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Rich Felker <dalias@libc.org>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Vineet Gupta <vgupta@kernel.org>
Cc: WANG Xuerui <kernel@xen0n.name>
Cc: Will Deacon <will@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm
Pull ARM fixes from Russell King:
- quieten the spectre-bhb prints
- mark flattened device tree sections as shareable
- remove some obsolete CPU domain code and help text
- fix thumb unaligned access abort emulation
- fix amba_device_add() refcount underflow
- fix literal placement
* tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm:
ARM: 9208/1: entry: add .ltorg directive to keep literals in range
ARM: 9207/1: amba: fix refcount underflow if amba_device_add() fails
ARM: 9214/1: alignment: advance IT state after emulating Thumb instruction
ARM: 9213/1: Print message about disabled Spectre workarounds only once
ARM: 9212/1: domain: Modify Kconfig help text
ARM: 9211/1: domain: drop modify_domain()
ARM: 9210/1: Mark the FDT_FIXED sections as shareable
ARM: 9209/1: Spectre-BHB: avoid pr_info() every time a CPU comes out of idle
The dma_sync_* operations are now the only difference between the
coherent and non-coherent IOMMU ops. Some minor tweaks to make those
safe for coherent devices with minimal overhead, and we can condense
down to a single set of DMA ops.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Marc Zyngier <maz@kernel.org>
Merge the coherent and non-coherent callbacks down to a single
implementation each, relying on the generic dev->dma_coherent
flag at the points where the difference matters.
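The end result looks roughly like the sketch below (not the actual diff;
the non-coherent helper name is a placeholder):

  static void arm_iommu_sync_single_for_cpu(struct device *dev,
                  dma_addr_t handle, size_t size, enum dma_data_direction dir)
  {
          /* coherent devices need no CPU cache maintenance */
          if (dev->dma_coherent)
                  return;

          /* placeholder for the existing non-coherent maintenance path */
          arm_iommu_sync_noncoherent_for_cpu(dev, handle, size, dir);
  }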
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Marc Zyngier <maz@kernel.org>
When an IOMMU is present, we trust that it should be capable
of remapping any physical memory, and since the device masks
represent the input (virtual) addresses to the IOMMU it makes
no sense to validate them against physical PFNs anyway.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Marc Zyngier <maz@kernel.org>
Use dma-direct unconditionally on arm. It has already been used for
some time for LPAE and nommu configurations.
This mostly changes the streaming mapping implementation and the (simple)
coherent allocator for devices that are DMA coherent. The existing,
more complex allocator for uncached mappings for non-coherent devices is
still used as-is via the arch_dma_alloc/arch_dma_free hooks.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Andre Przywara <andre.przywara@arm.com> [highbank]
Tested-by: Marc Zyngier <maz@kernel.org>
Use the helpers as expected by the dma-direct code in the old arm
dma-mapping code to ease a gradual switch to the common DMA code.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Tested-by: Marc Zyngier <maz@kernel.org>
With the dmabounce removal these aren't used outside of dma-mapping.c,
so mark them static. Move the dma_map_ops declarations down a bit
to avoid lots of forward declarations.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Tested-by: Marc Zyngier <maz@kernel.org>
After emulating a misaligned load or store issued in Thumb mode, we have
to advance the IT state by hand, or it will get out of sync with the
actual instruction stream, which means we'll end up applying the wrong
condition code to subsequent instructions. This might corrupt the
program state rather catastrophically.
So borrow the it_advance() helper from the probing code, and use it on
CPSR if the emulated instruction is Thumb.
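The resulting fix-up in the alignment handler is essentially (paraphrased
sketch):

  /* keep the IT state in CPSR in sync after emulating a Thumb access */
  if (thumb_mode(regs))
          it_advance(&regs->ARM_cpsr);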
Cc: <stable@vger.kernel.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Print the message about disabled Spectre workarounds only once. The
message is currently printed every time a CPU comes out of the idle state
on NVIDIA Tegra boards, causing a storm in KMSG that makes the system
unusable.
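A minimal way to get this behaviour is the kernel's *_once printk
helpers, e.g. (message text and variable are illustrative):

  /* emitted at most once, no matter how often CPUs re-run the init path */
  pr_info_once("Spectre v2: %s workaround disabled\n", reason);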
Cc: stable@vger.kernel.org
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
After the removal of set_fs() the reference to set_fs() is stale.
Alter the helptext to reflect what the config option really does.
Fixes: 8ac6f5d7f8 ("ARM: 9113/1: uaccess: remove set_fs() implementation")
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Commit 7a1be318f5 ("ARM: 9012/1: move device tree mapping out of linear
region") uses FDT_FIXED_BASE to map the whole FDT_FIXED_SIZE memory area
which contains the fdt, but it only reserves the exact physical memory
that the fdt occupies. Unfortunately, this mapping is non-shareable. An
illegal or speculative read access can bring the RAM content from the
non-fdt zone into the cache; because the caches are PIPT, that line is
then hit by subsequent read accesses through a shareable mapping (such as
the linear mapping), and cache consistency between cores is lost due to
the non-shareable property.
|<---------FDT_FIXED_SIZE------>|
| |
-------------------------------
| <non-fdt> | <fdt> | <non-fdt> |
-------------------------------
1. CoreA reads <non-fdt> through the MT_ROM mapping; the old data is
loaded into the cache.
2. CoreB writes <non-fdt> to update the data through the linear mapping.
CoreA receives the notification to invalidate the corresponding
cachelines, but the non-shareable property causes it to be ignored.
3. CoreA reads <non-fdt> through the linear mapping; the cache hits and
the old data is read.
To eliminate this risk, add a new memory type MT_MEMORY_RO. Compared to
MT_ROM, it is shareable and non-executable.
Here's an example:
list_del corruption. prev->next should be c0ecbf74, but was c08410dc
kernel BUG at lib/list_debug.c:53!
... ...
PC is at __list_del_entry_valid+0x58/0x98
LR is at __list_del_entry_valid+0x58/0x98
psr: 60000093
sp : c0ecbf30 ip : 00000000 fp : 00000001
r10: c08410d0 r9 : 00000001 r8 : c0825e0c
r7 : 20000013 r6 : c08410d0 r5 : c0ecbf74 r4 : c0ecbf74
r3 : c0825d08 r2 : 00000000 r1 : df7ce6f4 r0 : 00000044
... ...
Stack: (0xc0ecbf30 to 0xc0ecc000)
bf20: c0ecbf74 c0164fd0 c0ecbf70 c0165170
bf40: c0eca000 c0840c00 c0840c00 c0824500 c0825e0c c0189bbc c088f404 60000013
bf60: 60000013 c0e85100 000004ec 00000000 c0ebcdc0 c0ecbf74 c0ecbf74 c0825d08
... ... < next prev >
(__list_del_entry_valid) from (__list_del_entry+0xc/0x20)
(__list_del_entry) from (finish_swait+0x60/0x7c)
(finish_swait) from (rcu_gp_kthread+0x560/0xa20)
(rcu_gp_kthread) from (kthread+0x14c/0x15c)
(kthread) from (ret_from_fork+0x14/0x24)
The faulty list node to be deleted is a local variable; its address is
c0ecbf74. The dumped stack shows that 'prev' = c0ecbf74, but its value
before lib/list_debug.c:53 is c08410dc. The large amount of printing
results in evicting the cacheline containing the old data (the MT_ROM
mapping is read-only, so the cacheline cannot be dirty), and the
subsequent dump operation obtains the new data from DDR.
Fixes: 7a1be318f5 ("ARM: 9012/1: move device tree mapping out of linear region")
Suggested-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Jon reports that the Spectre-BHB init code is filling up the kernel log
with spurious notifications about which mitigation has been enabled,
every time any CPU comes out of a low power state.
Given that Spectre-BHB mitigations are system wide, only a single
mitigation can be enabled, and we already print an error if two types of
CPUs coexist in a single system that require different Spectre-BHB
mitigations.
This means that the pr_info() that describes the selected mitigation
does not need to be emitted for each CPU anyway, and so we can simply
emit it only once.
In order to clarify the above in the log message, update it to describe
that the selected mitigation will be enabled on all CPUs, including ones
that are unaffected. If another CPU comes up later that is affected and
requires a different mitigation, we report an error as before.
Fixes: b9baf5c8c5 ("ARM: Spectre-BHB workaround")
Tested-by: Jon Hunter <jonathanh@nvidia.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Since the following commits:
  v5.4:  commit 59d3ae9a5b ("ARM: remove Intel iop33x and iop13xx support")
  v5.11: commit 3e3f354bc3 ("ARM: remove ebsa110 platform")
the runtime hook arch_iounmap() on ARM is useless, so kill arch_iounmap()
and __iounmap().
Cc: Russell King <linux@armlinux.org.uk>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reported-by: kernel test robot <lkp@intel.com>
Link: https://lore.kernel.org/r/20220607125027.44946-2-wangkefeng.wang@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
I observed that for shared file-backed page faults, we are very likely to
retry one more time for the first write fault upon a missing page. That
is because we need to release the mmap lock for dirty rate limiting with
balance_dirty_pages_ratelimited() (in fault_dirty_shared_page()), and
after that throttling we return VM_FAULT_RETRY.
We did that probably because VM_FAULT_RETRY is the only way we can return
to the fault handler at that time telling it we've released the mmap lock.
However that's not ideal because it's very likely the fault does not need
to be retried at all since the pgtable was well installed before the
throttling, so the next continuous fault (including taking mmap read lock,
walk the pgtable, etc.) could be in most cases unnecessary.
This not only slows down page faults for shared file-backed mappings, but
also adds mmap lock contention that is in most cases not needed at all.
To observe this, one could try to write to some shmem page and look at
"pgfault" value in /proc/vmstat, then we should expect 2 counts for each
shmem write simply because we retried, and vm event "pgfault" will capture
that.
To make it more efficient, add a new VM_FAULT_COMPLETED return code just
to show that we've completed the whole fault and released the lock. It's
also a hint that we very probably do not need another fault immediately
on this page, because we've just completed it.
This patch provides a ~12% perf boost on my aarch64 test VM with a simple
program that sequentially dirties a 400MB mmap()ed shmem file; these are
the times it needs:
Before: 650.980 ms (+-1.94%)
After: 569.396 ms (+-1.38%)
I believe it could help more than that.
We need some special care on GUP and in the s390 pgfault handler (for the
gmap code before returning from pgfault); the rest of the changes in the
page fault handlers should be relatively straightforward.
Another thing to mention is that mm_account_fault() does take this new
fault as a generic fault to be accounted, unlike VM_FAULT_RETRY.
I explicitly didn't touch hmm_vma_fault() and break_ksm() because they do
not handle VM_FAULT_RETRY even with existing code, so I'm literally keeping
them as-is.
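In an architecture's page fault handler, the new return code is consumed
along the lines of this sketch:

  fault = handle_mm_fault(vma, addr, flags, regs);

  /* The fault is fully completed (including releasing mmap lock) */
  if (fault & VM_FAULT_COMPLETED)
          return;

  /* otherwise the existing VM_FAULT_RETRY / error handling applies */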
Link: https://lkml.kernel.org/r/20220530183450.42886-1-peterx@redhat.com
Signed-off-by: Peter Xu <peterx@redhat.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Vineet Gupta <vgupta@kernel.org>
Acked-by: Guo Ren <guoren@kernel.org>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
Acked-by: Christian Borntraeger <borntraeger@linux.ibm.com>
Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Alistair Popple <apopple@nvidia.com>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> [arm part]
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Stafford Horne <shorne@gmail.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Johannes Berg <johannes@sipsolutions.net>
Cc: Brian Cain <bcain@quicinc.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Richard Weinberger <richard@nod.at>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Janosch Frank <frankja@linux.ibm.com>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: Anton Ivanov <anton.ivanov@cambridgegreys.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Alexander Gordeev <agordeev@linux.ibm.com>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Will Deacon <will@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: David Hildenbrand <david@redhat.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Chris Zankel <chris@zankel.net>
Cc: Hugh Dickins <hughd@google.com>
Cc: Dinh Nguyen <dinguyen@kernel.org>
Cc: Rich Felker <dalias@libc.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Helge Deller <deller@gmx.de>
Cc: Yoshinori Sato <ysato@users.osdn.me>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Based on the normalized pattern:
this file is licensed under the terms of the gnu general public
license version 2 this program is licensed as is without any warranty
of any kind whether express or implied
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-only
has been chosen to replace the boilerplate/reference.
Reviewed-by: Allison Randal <allison@lohutok.net>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This patch introduces a new helper and places it in a new header.
The helper's purpose is to assign any Xen-specific DMA ops in a
single place. For now, we deal with the xen-swiotlb DMA ops only;
one of the subsequent commits in this series will add the xen-grant
DMA ops case.
Also re-use the xen_swiotlb_detect() check on Arm32.
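A sketch of the shape of such a helper (simplified, not the exact patch):

  static inline void xen_setup_dma_ops(struct device *dev)
  {
  #ifdef CONFIG_XEN
          if (xen_swiotlb_detect())
                  dev->dma_ops = &xen_swiotlb_dma_ops;
  #endif
  }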
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
[For arm64]
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/1654197833-25362-2-git-send-email-olekstysh@gmail.com
Signed-off-by: Juergen Gross <jgross@suse.com>
Merge tag 'arm-multiplatform-5.19-2' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc
Pull more ARM multiplatform updates from Arnd Bergmann:
"The second part of the multiplatform changes now converts the
Intel/Marvell PXA platform along with the rest. The patches went
through several rebases before the merge window as bugs were found, so
they remained separate.
This has to touch a lot of drivers, in particular the touchscreen,
pcmcia, sound and clk bits, to detach the driver files from the
platform and board specific header files"
* tag 'arm-multiplatform-5.19-2' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (48 commits)
ARM: pxa/mmp: remove traces of plat-pxa
ARM: pxa: convert to multiplatform
ARM: pxa/sa1100: move I/O space to PCI_IOBASE
ARM: pxa: remove support for MTD_XIP
ARM: pxa: move mach/*.h to mach-pxa/
ARM: PXA: fix multi-cpu build of xsc3
ARM: pxa: move plat-pxa to drivers/soc/
ARM: mmp: rename pxa_register_device
ARM: mmp: remove tavorevb board support
ARM: pxa: remove unused mach/bitfield.h
ARM: pxa: move clk register definitions to driver
ARM: pxa: move smemc register access from clk to platform
cpufreq: pxa3: move clk register access to clk driver
ARM: pxa: remove get_clk_frequency_khz()
ARM: pxa: pcmcia: move smemc configuration back to arch
ASoC: pxa: i2s: use normal MMIO accessors
ASoC: pxa: ac97: use normal MMIO accessors
ASoC: pxa: use pdev resource for FIFO regs
Input: wm97xx - get rid of irq_enable method in wm97xx_mach_ops
Input: wm97xx - switch to using threaded IRQ
...
Merge tag 'arm-multiplatform-5.19-1' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc
Pull ARMv4T/v5 multiplatform support from Arnd Bergmann:
"This series has been 12 years in the making, it mostly finishes the
work that was started with the founding of Linaro to clean up platform
support in the kernel.
The largest change here is a cleanup of the omap1 platform, which is
the final ARM machine type to get converted to the common-clk
subsystem. All the omap1 specific drivers are now made independent of
the mach/*.h headers to allow the platform to be part of a generic
ARMv4/v5 multiplatform kernel.
The last bit that enables this support is still missing here while we
wait for some last dependencies to make it into the mainline kernel
through other subsystems.
The s3c24xx, ixp4xx, iop32x, ep93xx and dove platforms were all almost
at the point of allowing multiplatform kernels, this work gets
completed here along with a few additional cleanup. At the same time,
the s3c24xx and s3c64xx are now deprecated and expected to get removed
in the future.
The PXA and OMAP1 bits are in a separate branch because of
dependencies. Once both branches are merged, only the three Intel
StrongARM platforms (RiscPC, Footbridge/NetWinder and StrongARM1100)
need separate kernels, and there are no plans to include these"
* tag 'arm-multiplatform-5.19-1' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (61 commits)
ARM: ixp4xx: Consolidate Kconfig fixing issue
ARM: versatile: Add missing of_node_put in dcscb_init
ARM: config: Refresh IXP4xx config after multiplatform
ARM: omap1: add back omap_set_dma_priority() stub
ARM: omap: fix missing declaration warnings
ARM: omap: fix address space warnings from sparse
ARM: spear: remove include/mach/ subdirectory
ARM: davinci: remove include/mach/ subdirectory
ARM: omap2: remove include/mach/ subdirectory
integrator: remove empty ap_init_early()
ARM: s3c: fix include path
MAINTAINERS: omap1: Add Janusz as an additional maintainer
ARM: omap1: htc_herald: fix typos in comments
ARM: OMAP1: fix typos in comments
ARM: OMAP1: clock: Remove noop code
ARM: OMAP1: clock: Remove unused code
ARM: OMAP1: clock: Fix UART rate reporting algorithm
ARM: OMAP1: clock: Fix early UART rate issues
ARM: OMAP1: Prepare for conversion of OMAP1 clocks to CCF
ARM: omap1: fix build with no SoC selected
...
Merge tag 'arm-soc-5.19' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc
Pull 32-bit ARM SoC updates from Arnd Bergmann:
"These updates are for platform specific code in arch/arm/, mostly
fixing minor issues.
The at91 platform gains support for better power management on the
lan966 platform and new firmware on the sama5 platform. The mediatek
soc drivers in turn are enabled for the new mt8195 SoC"
* tag 'arm-soc-5.19' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (34 commits)
ARM: at91: debug: add lan966 support
ARM: at91: pm: add support for sama5d2 secure suspend
ARM: at91: add code to handle secure calls
ARM: at91: Kconfig: implement PIT64B selection
ARM: at91: pm: add quirks for pm
ARM: at91: pm: use kernel documentation style
ARM: at91: pm: introduce macros for pm mode replacement
ARM: at91: pm: keep documentation inline with structure members
orion5x: fix typos in comments
ARM: hisi: Add missing of_node_put after of_find_compatible_node
ARM: shmobile: rcar-gen2: Drop comma after OF match table sentinel
ARM: shmobile: Drop commas after dt_compat sentinels
soc: mediatek: mutex: remove mt8195 MOD0 and SOF0 definition
MAINTAINERS: Add Broadcom BCMBCA entry
arm: bcmbca: add arch bcmbca machine entry
MAINTAINERS: Broadcom internal lists aren't maintainers
dt-bindings: pwrap: mediatek: Update pwrap document for mt8195
soc: mediatek: add DDP_DOMPONENT_DITHER0 enum for mt8195 vdosys0
soc: mediatek: add mtk-mutex support for mt8195 vdosys0
soc: mediatek: add mtk-mmsys support for mt8195 vdosys0
...
Merge tag 'dma-mapping-5.19-2022-05-25' of git://git.infradead.org/users/hch/dma-mapping
Pull dma-mapping updates from Christoph Hellwig:
- don't over-decrypt memory (Robin Murphy)
- takes min align mask into account for the swiotlb max mapping size
(Tianyu Lan)
- use GFP_ATOMIC in dma-debug (Mikulas Patocka)
- fix DMA_ATTR_NO_KERNEL_MAPPING on xen/arm (me)
- don't fail on highmem CMA pages in dma_direct_alloc_pages (me)
- cleanup swiotlb initialization and share more code with swiotlb-xen
(me, Stefano Stabellini)
* tag 'dma-mapping-5.19-2022-05-25' of git://git.infradead.org/users/hch/dma-mapping: (23 commits)
dma-direct: don't over-decrypt memory
swiotlb: max mapping size takes min align mask into account
swiotlb: use the right nslabs-derived sizes in swiotlb_init_late
swiotlb: use the right nslabs value in swiotlb_init_remap
swiotlb: don't panic when the swiotlb buffer can't be allocated
dma-debug: change allocation mode from GFP_NOWAIT to GFP_ATIOMIC
dma-direct: don't fail on highmem CMA pages in dma_direct_alloc_pages
swiotlb-xen: fix DMA_ATTR_NO_KERNEL_MAPPING on arm
x86: remove cruft from <asm/dma-mapping.h>
swiotlb: remove swiotlb_init_with_tbl and swiotlb_init_late_with_tbl
swiotlb: merge swiotlb-xen initialization into swiotlb
swiotlb: provide swiotlb_init variants that remap the buffer
swiotlb: pass a gfp_mask argument to swiotlb_init_late
swiotlb: add a SWIOTLB_ANY flag to lift the low memory restriction
swiotlb: make the swiotlb_init interface more useful
x86: centralize setting SWIOTLB_FORCE when guest memory encryption is enabled
x86: remove the IOMMU table infrastructure
MIPS/octeon: use swiotlb_init instead of open coding it
arm/xen: don't check for xen_initial_domain() in xen_create_contiguous_region
swiotlb: rename swiotlb_late_init_with_default_size
...
Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm
Pull ARM fixes from Russell King:
"Two further fixes for Spectre-BHB from Ard for Cortex A15 and to use
the wide branch instruction for Thumb2"
* tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm:
ARM: 9197/1: spectre-bhb: fix loop8 sequence for Thumb2
ARM: 9196/1: spectre-bhb: enable for Cortex-A15
The Spectre-BHB mitigations were inadvertently left disabled for
Cortex-A15, due to the fact that cpu_v7_bugs_init() is not called in
that case. So fix that.
Fixes: b9baf5c8c5 ("ARM: Spectre-BHB workaround")
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
The semantics of pfn_valid() is to check presence of the memory map for a
PFN and not whether a PFN is covered by the linear map. The memory map
may be present for NOMAP memory regions, but they won't be mapped in the
linear mapping. Accessing such regions via __va() when they are
memremap()'ed will cause a crash.
On v5.4.y the crash happens on qemu-arm with UEFI [1]:
<1>[ 0.084476] 8<--- cut here ---
<1>[ 0.084595] Unable to handle kernel paging request at virtual address dfb76000
<1>[ 0.084938] pgd = (ptrval)
<1>[ 0.085038] [dfb76000] *pgd=5f7fe801, *pte=00000000, *ppte=00000000
...
<4>[ 0.093923] [<c0ed6ce8>] (memcpy) from [<c16a06f8>] (dmi_setup+0x60/0x418)
<4>[ 0.094204] [<c16a06f8>] (dmi_setup) from [<c16a38d4>] (arm_dmi_init+0x8/0x10)
<4>[ 0.094408] [<c16a38d4>] (arm_dmi_init) from [<c0302e9c>] (do_one_initcall+0x50/0x228)
<4>[ 0.094619] [<c0302e9c>] (do_one_initcall) from [<c16011e4>] (kernel_init_freeable+0x15c/0x1f8)
<4>[ 0.094841] [<c16011e4>] (kernel_init_freeable) from [<c0f028cc>] (kernel_init+0x8/0x10c)
<4>[ 0.095057] [<c0f028cc>] (kernel_init) from [<c03010e8>] (ret_from_fork+0x14/0x2c)
On kernels v5.10.y and newer the same crash won't reproduce on ARM because
commit b10d6bca87 ("arch, drivers: replace for_each_membock() with
for_each_mem_range()") changed the way memory regions are registered in
the resource tree, but that merely covers up the problem.
On ARM64, memory resources are registered in yet another way, and there
the issue of wrongly using pfn_valid() to check availability of the
linear map is covered up as well.
Implement arch_memremap_can_ram_remap() on ARM and ARM64 to prevent access
to NOMAP regions via the linear mapping in memremap().
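On arm64 the hook boils down to something like the following sketch (the
arm32 variant checks memblock_is_map_memory() instead):

  bool arch_memremap_can_ram_remap(resource_size_t offset, size_t size,
                                   unsigned long flags)
  {
          unsigned long pfn = PHYS_PFN(offset);

          /* NOMAP regions have a memory map but no linear mapping */
          return pfn_is_map_memory(pfn);
  }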
Link: https://lore.kernel.org/all/Yl65zxGgFzF1Okac@sirena.org.uk
Link: https://lkml.kernel.org/r/20220426060107.7618-1-rppt@kernel.org
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Reported-by: "kernelci.org bot" <bot@kernelci.org>
Tested-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Tony Lindgren <tony@atomide.com>
Cc: Will Deacon <will@kernel.org>
Cc: <stable@vger.kernel.org> [5.4+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
PXA and StrongARM1100 traditionally map their I/O space 1:1 into virtual
memory, using a per-bus io_offset that matches the base address of the
ioremap mapping.
In order for PXA to work in a multiplatform config, this needs to
change so I/O space starts at PCI_IOBASE (0xfee00000). Since the pcmcia
soc_common support is shared with StrongARM1100, both have to change at
the same time. The affected machines are:
- Anything with a PCMCIA slot now uses pci_remap_iospace, which
is made available to PCMCIA configurations as well, rather than
just PCI. The first PCMCIA slot now starts at port number 0x10000.
- The Zeus and Viper platforms have PC/104-style ISA buses,
which have a static mapping for both I/O and memory space at
0xf1000000, which can no longer work. It does not appear to have
any in-tree users, so moving it to port number 0 makes them
behave like a traditional PC.
- SA1100 does support ISA slots in theory, but all machines that
originally enabled this appear to have been removed from the tree
ages ago, and the I/O space is never mapped anywhere.
- The Nanoengine machine has support for PCI slots, but it looks
like this never included I/O space; the resources only define the
location of memory and config space.
With this, the definitions of __io() and IO_SPACE_LIMIT can be simplified,
as the only remaining cases are the generic PCI_IOBASE and the custom
inb()/outb() macros on RiscPC. S3C24xx still has a custom inb()/outb()
in here as well, but this is already removed in another branch.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
On a kernel that includes both ARMv4 and XScale support,
the copypage function fails to build with invalid
instructions.
Since these are only called on an actual XScale processor,
annotate the assembly with the correct .arch directive.
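The annotation takes roughly this form (illustrative wrapper; the real
change touches the copypage assembly):

  static inline void xsc3_prefetch(const void *from)
  {
          asm volatile(
          "       .arch xscale            \n"     /* allow non-ARMv4 instructions */
          "       pld     [%0, #0]        \n"
          : : "r" (from));
  }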
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Pass a boolean flag to indicate if swiotlb needs to be enabled based on
the addressing needs, and replace the verbose argument with a set of
flags, including one to force enable bounce buffering.
Note that this patch removes the possibility to force xen-swiotlb use
with the swiotlb=force parameter on the command line on x86 (arm and
arm64 never supported that), but this interface will be restored shortly.
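The resulting interface looks roughly like this (sketch; the call site
and its condition are illustrative):

  /* flags include SWIOTLB_VERBOSE (replacing the old 'verbose' argument)
   * and SWIOTLB_FORCE (force enable bounce buffering) */
  void swiotlb_init(bool addressing_limit, unsigned int flags);

  /* e.g. from an architecture's init path; 'zone_dma_limited' is a stand-in */
  swiotlb_init(zone_dma_limited, SWIOTLB_VERBOSE);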
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Tested-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Choosing big-endian vs little-endian kernels in Kconfig has not worked
correctly since the introduction of CONFIG_ARCH_MULTIPLATFORM a long
time ago.
The problem is that CONFIG_BIG_ENDIAN depends on
ARCH_SUPPORTS_BIG_ENDIAN, which can be set by any one platform in the
config, but would actually have to be supported by all of them.
This was mostly ok for ARMv6/ARMv7 builds, since these are BE8 and
tend to just work aside from problems in nonportable device drivers.
For ARMv4/v5 machines, CONFIG_BIG_ENDIAN and CONFIG_ARCH_MULTIPLATFORM
were never set together, so this was disabled on all those machines
except for IXP4xx.
As IXP4xx can now become part of ARCH_MULTIPLATFORM, it seems better to
formalize this logic: all ARMv4/v5 platforms get an explicit dependency
on being either big-endian (ixp4xx) or little-endian (the rest). We may
want to fix ixp4xx in the future to support both, but it does not work
in LE mode at the moment.
For the ARMv6/v7 platforms, there are two ways this could be handled
a) allow both modes only for platforms selecting
'ARCH_SUPPORTS_BIG_ENDIAN' today, but only LE mode for the
others, given that these were added intentionally at some
point.
b) allow both modes everywhere, given that it was already possible
to build that way by e.g. selecting ARCH_VIRT, and that the
list is not an accurate reflection of which platforms may or
may not work.
Out of these, I picked b) because it seemed slightly more logical
to me.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Various spelling mistakes in comments.
Detected with the help of Coccinelle.
Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
- Disable CONFIG_HARDENED_USERCOPY_PAGESPAN
- DMA: remove CMA code when not building CMA
Merge tag 'hardening-v5.18-rc1-fix1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux
Pull hardening updates from Kees Cook:
"This addresses an -Warray-bounds warning found under a few ARM
defconfigs, and disables long-broken HARDENED_USERCOPY_PAGESPAN"
* tag 'hardening-v5.18-rc1-fix1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
ARM/dma-mapping: Remove CMA code when not built with CMA
usercopy: Disable CONFIG_HARDENED_USERCOPY_PAGESPAN
MAX_CMA_AREAS can be set to 0, which results in code that attempts
to operate beyond the end of a zero-sized array. If CONFIG_CMA
is disabled, just remove this code entirely. Found when building arm
with GCC 10.x for several defconfigs (e.g. axm55xx_defconfig) under
-Warray-bounds:
arch/arm/mm/dma-mapping.c:396:22: warning: array subscript <unknown> is outside array bounds of 'struct dma_contig_early_reserve[0]' [-Warray-bounds]
396 | dma_mmu_remap[dma_mmu_remap_num].size = size;
| ~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~
arch/arm/mm/dma-mapping.c:389:40: note: while referencing 'dma_mmu_remap'
389 | static struct dma_contig_early_reserve dma_mmu_remap[MAX_CMA_AREAS] __initdata;
| ^~~~~~~~~~~~~
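The shape of the problem and of the fix can be sketched as follows (a
simplified sketch, not the actual kernel diff): when the array size can be
zero, the code indexing it has to be compiled out rather than merely never
reached.

	#include <linux/init.h>
	#include <linux/types.h>

	/* Simplified sketch: with CONFIG_CMA disabled MAX_CMA_AREAS is 0,
	 * so dma_mmu_remap[] has zero elements and any store through it is
	 * out of bounds by construction. Guarding the whole block removes
	 * the -Warray-bounds warning and the dead code.
	 */
	#ifdef CONFIG_CMA
	struct dma_contig_early_reserve {
		phys_addr_t	base;
		unsigned long	size;
	};

	static struct dma_contig_early_reserve dma_mmu_remap[MAX_CMA_AREAS] __initdata;
	static int dma_mmu_remap_num __initdata;

	void __init dma_contiguous_early_fixup(phys_addr_t base, unsigned long size)
	{
		dma_mmu_remap[dma_mmu_remap_num].base = base;
		dma_mmu_remap[dma_mmu_remap_num].size = size;
		dma_mmu_remap_num++;
	}
	#endif /* CONFIG_CMA */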
Cc: Russell King <linux@armlinux.org.uk>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Martin Oliveira <martin.oliveira@eideticom.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Hari Bathini <hbathini@linux.ibm.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: linux-arm-kernel@lists.infradead.org
Link: https://lore.kernel.org/all/6243ee60.1c69fb81.16de6.7dbf@mx.google.com/
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/lkml/20220310070041.GA24874@lst.de
Reviewed-by: David Hildenbrand <david@redhat.com>
Link: https://lore.kernel.org/lkml/9059fa71-330f-f04f-b155-2850abb72a71@redhat.com
Updates for IRQ stacks and virtually mapped stack support for ARM from
the following pull requests, etc:
1) ARM: support for IRQ and vmap'ed stacks
This PR covers all the work related to implementing IRQ stacks and
vmap'ed stacks for all 32-bit ARM systems that are currently supported
by the Linux kernel, including RiscPC and Footbridge. It has been
submitted for review in three different waves:
- IRQ stacks support for v7 SMP systems [0],
- vmap'ed stacks support for v7 SMP systems[1],
- extending support for both IRQ stacks and vmap'ed stacks for all
remaining configurations, including v6/v7 SMP multiplatform kernels
and uniprocessor configurations including v7-M [2]
[0] https://lore.kernel.org/linux-arm-kernel/20211115084732.3704393-1-ardb@kernel.org/
[1] https://lore.kernel.org/linux-arm-kernel/20211122092816.2865873-1-ardb@kernel.org/
[2] https://lore.kernel.org/linux-arm-kernel/20211206164659.1495084-1-ardb@kernel.org/
2) ARM: support for IRQ and vmap'ed stacks [v6]
This tag covers the changes between the version of vmap'ed + IRQ stacks
support pulled into rmk/devel-stable [0] (which was dropped from v5.17
due to issues discovered too late in the cycle), and my v5 proposed for
the v5.18 cycle [1].
[0] git://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git arm-irq-and-vmap-stacks-for-rmk
[1] https://lore.kernel.org/linux-arm-kernel/20220124174744.1054712-1-ardb@kernel.org/
3) ARM: ftrace fixes and cleanups
Make all flavors of ftrace available on all builds, regardless of ISA
choice, unwinder choice or compiler:
- use ADD not POP where possible
- fix a couple of Thumb2 related issues
- enable HAVE_FUNCTION_GRAPH_FP_TEST for robustness
- enable the graph tracer with the EABI unwinder
- avoid clobbering frame pointer registers to make Clang happy
Link: https://lore.kernel.org/linux-arm-kernel/20220203082204.1176734-1-ardb@kernel.org/
4) Fixes for the above.
-----BEGIN PGP SIGNATURE-----
iQIzBAABCgAdFiEEuNNh8scc2k/wOAE+9OeQG+StrGQFAmI7U9IACgkQ9OeQG+St
rGQghg/+PmgLJ9zmJrMGOarNLmmGzCbkPi6SrlbaDxriE7ofqo76qrQhGAsWxvDx
OEYBNdWmOxTi7GP6sozFaTWrpD2ZbuFuKUUpusnjU2sMD/BwYHZZ/lKfZpn7WoE0
48e2FCFYsJ3sYpROhVgaFWk+64eVwHfZ7pr9pad1gAEB4SAaT+CiuXBsJCl4DBi7
eobYzLqETtCBkXFUo46n6r0xESdzQfgfZMsh5IpPRyswSPhzqdYrSLXJRmFGBqvT
FS2gcMgd7IpcVsmd4pgFKe0Y9rBSuMEqsqYvzwSWI4GAgFppZO1R5RvHdS89US4P
9F6hgxYnJdc8hVhoAZNNi5cCcJp9th3Io97YzTUIm0xgK3nXyhsSGWIk3ahx76mX
mnCcflUoOP9YVHUuoi1/N7iSe6xwtH+dg0Mn69aM4rNcZh5J59jV2rrNhdnr1Pjb
XE8iQHJZATHZrxyAtj7PzlnNzJsfVcJyT/WieT0My7tZaZC0cICdKEJ6yurTlCvE
v7P3EHUYFaQGkQijHFJdstkouY7SHpN0iH18xKErciWOwDmRsgVaoxw18iNIvuY/
TvSNXJBDgh8is8eV/mmN0iVkK0mYTxhy0G5CHavrgy8STWNC6CdqFtrxZnInoCAz
wq25QvQtPZcxz1dS9bTuWUfrPATaIeQeCzUsAIiE7u9aP/olL5M=
=AVCL
-----END PGP SIGNATURE-----
Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm
Pull ARM updates from Russell King:
"Updates for IRQ stacks and virtually mapped stack support, and ftrace:
- Support for IRQ and vmap'ed stacks
This covers all the work related to implementing IRQ stacks and
vmap'ed stacks for all 32-bit ARM systems that are currently
supported by the Linux kernel, including RiscPC and Footbridge. It
has been submitted for review in four different waves:
- IRQ stacks support for v7 SMP systems [0]
- vmap'ed stacks support for v7 SMP systems[1]
- extending support for both IRQ stacks and vmap'ed stacks for all
remaining configurations, including v6/v7 SMP multiplatform
kernels and uniprocessor configurations including v7-M [2]
- fixes and updates in [3]
- ftrace fixes and cleanups
Make all flavors of ftrace available on all builds, regardless of
ISA choice, unwinder choice or compiler [4]:
- use ADD not POP where possible
- fix a couple of Thumb2 related issues
- enable HAVE_FUNCTION_GRAPH_FP_TEST for robustness
- enable the graph tracer with the EABI unwinder
- avoid clobbering frame pointer registers to make Clang happy
- Fixes for the above"
[0] https://lore.kernel.org/linux-arm-kernel/20211115084732.3704393-1-ardb@kernel.org/
[1] https://lore.kernel.org/linux-arm-kernel/20211122092816.2865873-1-ardb@kernel.org/
[2] https://lore.kernel.org/linux-arm-kernel/20211206164659.1495084-1-ardb@kernel.org/
[3] https://lore.kernel.org/linux-arm-kernel/20220124174744.1054712-1-ardb@kernel.org/
[4] https://lore.kernel.org/linux-arm-kernel/20220203082204.1176734-1-ardb@kernel.org/
* tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm: (62 commits)
ARM: fix building NOMMU ARMv4/v5 kernels
ARM: unwind: only permit stack switch when unwinding call_with_stack()
ARM: Revert "unwind: dump exception stack from calling frame"
ARM: entry: fix unwinder problems caused by IRQ stacks
ARM: unwind: set frame.pc correctly for current-thread unwinding
ARM: 9184/1: return_address: disable again for CONFIG_ARM_UNWIND=y
ARM: 9183/1: unwind: avoid spurious warnings on bogus code addresses
Revert "ARM: 9144/1: forbid ftrace with clang and thumb2_kernel"
ARM: mach-bcm: disable ftrace in SMC invocation routines
ARM: cacheflush: avoid clobbering the frame pointer
ARM: kprobes: treat R7 as the frame pointer register in Thumb2 builds
ARM: ftrace: enable the graph tracer with the EABI unwinder
ARM: unwind: track location of LR value in stack frame
ARM: ftrace: enable HAVE_FUNCTION_GRAPH_FP_TEST
ARM: ftrace: avoid unnecessary literal loads
ARM: ftrace: avoid redundant loads or clobbering IP
ARM: ftrace: use trampolines to keep .init.text in branching range
ARM: ftrace: use ADD not POP to counter PUSH at entry
ARM: ftrace: ensure that ADR takes the Thumb bit into account
ARM: make get_current() and __my_cpu_offset() __always_inline
...
- amba bus cleanups
- conversion to use reserve_initrd_mem()
- remove -nostdlib from vdso link
-----BEGIN PGP SIGNATURE-----
iQIzBAABCgAdFiEEuNNh8scc2k/wOAE+9OeQG+StrGQFAmIsY3EACgkQ9OeQG+St
rGSwTA//dj+sV2ejTFR9QzfqcCq+S3ua4D8i+6x7O5apDPiFosIA8QqqAmE073Nr
/KW/nu7hoAUDFHC8yuoR8Hg00+XaOeL7Z4k5Q8NtKv3iBR4z7L+Wpwed91rWl6gk
hMuXeWkDNndNfNDQe1/Ert2h7yuHINwg+QKZ1ZIkjDI+tAGksW/pNMTvXlVzGqAo
zVW1iTW6TntvrXqlSW4a1MKFAayGNHOBcTCiY9aJV6amcogWDHoHWNuU2SBD9vHY
QWrTFIaif376dcAo+E2u57B2WTw3WhVVJCnQqMOGbaoZJoojLyPRn/GivHLhyRnt
LhneRjGB7N6KX5eqrVSb28n6Q5l5TTCPT8cx/1zMnujplxJAx7Y3SNjhdwOfBNT+
MIOlCVexns4z3XlYFMmA7XPgKrT3voxnNkZvRfrAQuLkf6UZrQ72OCPl13l3XaF/
msXHdviFKbbVhZQpeakB+ww1H08zNUnBWrvU9laMeQw12ElOc9I8bl6lDWsTePd3
OBvSHNf/tvNoO1ICzeaqdFob7oHXpMw5ZifI9OjHCB8evzWnHZqNVkxOEgy6KgAR
7AmDZSMfCS02S4F3MSxcAiJXk2WBGmXOHA1Z7zRFmGLol4qFMzlAISe9Xt/Ykh8c
Cdf3G7XGpxSUOrXcDPNy/FORIez6rf2Da1xiqUxWhc5Dhu6EaFE=
=x+Ji
-----END PGP SIGNATURE-----
Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm
Pull ARM updates from Russell King:
- amba bus cleanups
- conversion to use reserve_initrd_mem()
- remove -nostdlib from vdso link
* tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm:
ARM: 9181/1: vdso: remove -nostdlib compiler flag
ARM: 9175/1: Convert to reserve_initrd_mem()
ARM: 9174/1: amba: Move EXPORT_SYMBOL() closer to definition
ARM: 9173/1: amba: kill amba_find_match()
ARM: 9172/1: amba: Cleanup amba pclk operation
The kernel test robot discovered that building without
HARDEN_BRANCH_PREDICTOR issues a warning due to a missing
argument to pr_info().
Add the missing argument.
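The class of bug is trivial but easy to miss: a format string with more
conversion specifiers than arguments. The lines below are illustrative only,
not the actual code being fixed, and 'method' is a hypothetical variable:

	pr_info("CPU%u: Spectre v2: using %s workaround\n",
		smp_processor_id());		/* broken: no argument for %s */

	pr_info("CPU%u: Spectre v2: using %s workaround\n",
		smp_processor_id(), method);	/* fixed: argument supplied */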
Reported-by: kernel test robot <lkp@intel.com>
Fixes: 9dd78194a3 ("ARM: report Spectre v2 status through sysfs")
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
These patches add Spectre BHB mitigations for the following Arm CPUs to
the 32-bit ARM kernels:
Cortex-A15
Cortex-A57
Cortex-A72
Cortex-A73
Cortex-A75
Brahma B15
for CVE-2022-23960.
-----BEGIN PGP SIGNATURE-----
iQIzBAABCgAdFiEEuNNh8scc2k/wOAE+9OeQG+StrGQFAmInch4ACgkQ9OeQG+St
rGT62Q//Xve9O5C6d3I+7hwzVUGgRmYszrLRqLDG2qFP3Vw7hx1VygFRovKiFPD5
jvVHWMIC6Yev4D7N2DjXpmfULOrL7277EX31QFpdtkvNR5WrSAV7ku0mJm4UmE6+
WWo3l7d7WfxnbN7ZhRpISYc6aPm0/oYhH6Oux0FXe9eKWVr+hnNjVqBVaoSbnomy
AcYhh1yy3p680zKvarUndLkYPgCPiCci7+IozxD4MfBV/M5IlIDawW9P0lxMgMZR
ZbUe6t2k1Tis2EH2gKtj7KB0sDxAUnMD8tWYQylYsBM8wIINLDifuMSBrgU4ZcML
3stf7IBynn7oA8U+4jrIwc1OEBj64UYqQEPTqg8jaogAB+JfPINNxp7Byq1LnuJm
iwnmgeapQLRR3sh2jx8C4Boexv9KyIYAhIc2MkciyUlLbBWABLPNxp5cycz5znUr
mSBPeSj2F0A10LdPT8NauHJj8m2j1U67tyBcRFO6z+T6+krR6zk+Aiqb/XyWOQbN
Fe3D0SqOw5bd8hDenO5wGqQAuPpKhQhIo+XsbxckQ3jMtFKAABGkCW08gTTmfeDg
kg56GCvedrzGdZs7xkAzJ/o/AtNxYNdYjWnfc+zJmkLMPbt2qunL7yUkwOuiru29
biCMyw6j0afPpt7ScJAASTKyuaUgE3HxxWTnk1rgCsl3Ho8MeLU=
=VHyX
-----END PGP SIGNATURE-----
Merge tag 'for-linus-bhb' of git://git.armlinux.org.uk/~rmk/linux-arm
Pull ARM spectre fixes from Russell King:
"ARM Spectre BHB mitigations.
These patches add Spectre BHB mitigations for the following Arm CPUs
to the 32-bit ARM kernels:
- Cortex A15
- Cortex A57
- Cortex A72
- Cortex A73
- Cortex A75
- Brahma B15
for CVE-2022-23960"
* tag 'for-linus-bhb' of git://git.armlinux.org.uk/~rmk/linux-arm:
ARM: include unprivileged BPF status in Spectre V2 reporting
ARM: Spectre-BHB workaround
ARM: use LOADADDR() to get load address of sections
ARM: early traps initialisation
ARM: report Spectre v2 status through sysfs
Work around the Spectre BHB issues for Cortex-A15, Cortex-A57,
Cortex-A72, Cortex-A73 and Cortex-A75. We also include Brahma B15
to be safe, as it is affected by Spectre V2 in the same ways as
Cortex-A15.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
As per other architectures, add support for reporting the Spectre
vulnerability status via sysfs CPU.
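The generic hook being wired up looks roughly like this (a sketch: the hook
signature comes from <linux/cpu.h>, but the string reported here is a
placeholder rather than the real ARM logic, which builds it from the detected
mitigation state):

	#include <linux/cpu.h>
	#include <linux/device.h>

	/* Sketch only; the real implementation reports the actual state */
	ssize_t cpu_show_spectre_v2(struct device *dev,
				    struct device_attribute *attr, char *buf)
	{
		return sprintf(buf, "Mitigation: Branch predictor hardening\n");
	}

Userspace then reads the result from
/sys/devices/system/cpu/vulnerabilities/spectre_v2.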
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Convert to the generic reserve_initrd_mem() function.
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
early_param() handlers should return 0 on success.
__setup() handlers should return 1 on success, i.e., the parameter
has been handled. A return of 0 would cause the "option=value" string
to be added to init's environment strings, polluting it.
../arch/arm/mm/mmu.c: In function 'test_early_cachepolicy':
../arch/arm/mm/mmu.c:215:1: error: no return statement in function returning non-void [-Werror=return-type]
../arch/arm/mm/mmu.c: In function 'test_noalign_setup':
../arch/arm/mm/mmu.c:221:1: error: no return statement in function returning non-void [-Werror=return-type]
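A minimal sketch of the return-value convention (the handler names below are
placeholders, not the functions touched by this patch):

	#include <linux/init.h>

	static int __init example_early(char *p)
	{
		/* parse p ... */
		return 0;	/* early_param(): 0 means success */
	}
	early_param("example_early", example_early);

	static int __init example_setup(char *p)
	{
		/* parse p ... */
		return 1;	/* __setup(): 1 means "handled"; returning 0
				 * would leak "option=value" into init's
				 * environment */
	}
	__setup("example_setup=", example_setup);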
Fixes: b849a60e09 ("ARM: make cr_alignment read-only #ifndef CONFIG_CPU_CP15")
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Reported-by: Igor Zhbanov <i.zhbanov@omprussia.ru>
Cc: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Cc: linux-arm-kernel@lists.infradead.org
Cc: patches@armlinux.org.uk
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Thumb2 uses R7 rather than R11 as the frame pointer, and even if we
rarely use a frame pointer to begin with when building in Thumb2 mode,
there are cases where it is required by the compiler (Clang when
inserting profiling hooks via -pg).
However, preserving and restoring the frame pointer is risky, as any
unhandled exceptions raised in the meantime will produce a bogus
backtrace, and it would be better not to touch the frame pointer at all.
This is the case even when CONFIG_FRAME_POINTER is not set, as the
unwind directive used by the unwinder may also use R7 or R11 as the
unwind anchor, even if the frame pointer is not managed strictly
according to the frame pointer ABI.
So let's tweak the cacheflush asm code not to clobber R7 or R11 at all,
so that we can drop R7 from the clobber lists of the inline asm blocks
that call these routines, and remove the code that preserves/restores
R11.
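As a simplified illustration of the pattern (not the actual cacheflush
routines): keep R7 and R11 out of the asm entirely, so no clobber entry is
needed and the frame pointer stays valid for unwinding.

	/* Simplified illustration: the asm avoids r7 (Thumb2 frame pointer)
	 * and r11 (ARM frame pointer), so neither appears in the clobber
	 * list and the unwinder can still trust the frame pointer if an
	 * exception hits mid-sequence.
	 */
	static inline void example_clean_range(unsigned long start, unsigned long end)
	{
		asm volatile(
		"1:	mcr	p15, 0, %0, c7, c10, 1	@ clean D-cache line\n"
		"	add	%0, %0, #32\n"		/* assumes 32-byte lines */
		"	cmp	%0, %1\n"
		"	blo	1b\n"
		: "+r" (start)
		: "r" (end)
		: "memory");	/* no r7/r11 anywhere */
	}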
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Rework the vmalloc_seq handling so it can be used safely under SMP, as
we started using it to ensure that vmap'ed stacks are guaranteed to be
mapped by the active mm before switching to a task, and here we need to
ensure that changes to the page tables are visible to other CPUs when
they observe a change in the sequence count.
Since LPAE needs none of this, fold a check against it into the
vmalloc_seq counter check after breaking it out into a separate static
inline helper.
Given that vmap'ed stacks are now also supported on !SMP configurations,
let's drop the WARN() that could potentially now fire spuriously.
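The shape of the resulting helper is roughly the following (a sketch under the
kernel's ARM mm context definitions, not necessarily the exact merged code):
the LPAE check compiles the whole thing away, and otherwise the mm's counter
is compared against init_mm's, syncing the page tables on a mismatch.

	/* Sketch of the helper described above */
	static inline void check_vmalloc_seq(struct mm_struct *mm)
	{
		if (!IS_ENABLED(CONFIG_ARM_LPAE) &&
		    unlikely(atomic_read(&mm->context.vmalloc_seq) !=
			     atomic_read(&init_mm.context.vmalloc_seq)))
			__check_vmalloc_seq(mm);
	}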
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>