Merge tag 'kbuild-v5.2-2' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild
Pull more Kbuild updates from Masahiro Yamada:
- remove unneeded use of cc-option, cc-disable-warning, cc-ldoption
- exclude tracked files from .gitignore
- re-enable -Wint-in-bool-context warning
- refactor samples/Makefile
- stop building immediately if syncconfig fails
- do not sprinkle error messages when $(CC) does not exist
- move arch/alpha/defconfig to the configs subdirectory
- remove crappy header search path manipulation
- add comment lines to .config to clarify the end of menu blocks
- check uniqueness of module names (adding new warnings intentionally)
* tag 'kbuild-v5.2-2' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild: (24 commits)
kconfig: use 'else ifneq' for Makefile to improve readability
kbuild: check uniqueness of module names
kconfig: Terminate menu blocks with a comment in the generated config
kbuild: add LICENSES to KBUILD_ALLDIRS
kbuild: remove 'addtree' and 'flags' magic for header search paths
treewide: prefix header search paths with $(srctree)/
media: prefix header search paths with $(srctree)/
media: remove unneeded header search paths
alpha: move arch/alpha/defconfig to arch/alpha/configs/defconfig
kbuild: terminate Kconfig when $(CC) or $(LD) is missing
kbuild: turn auto.conf.cmd into a mandatory include file
.gitignore: exclude .get_maintainer.ignore and .gitattributes
kbuild: add all Clang-specific flags unconditionally
kbuild: Don't try to add '-fcatch-undefined-behavior' flag
kbuild: add some extra warning flags unconditionally
kbuild: add -Wvla flag unconditionally
arch: remove dangling asm-generic wrappers
samples: guard sub-directories with CONFIG options
kbuild: re-enable int-in-bool-context warning
MAINTAINERS: kbuild: Add pattern for scripts/*vmlinux*
...
This patch set contains an assortment of RISC-V related patches that I'd
like to target for the 5.2 merge window. Most of the patches are
cleanups, but there are a handful of user-visible changes:
* The nosmp and nr_cpus command-line arguments are now supported, which
work like normal.
* The SBI console no longer installs itself as a preferred console; we
rely on standard mechanisms (/chosen, command-line, heuristics)
instead.
* sbi_remote_sfence_vma{,_asid} now pass their arguments along to the
SBI call.
* Modules now support BUG().
* A missing sfence.vma during boot has been added. This bug only
manifests during boot.
* The arch/riscv support for SiFive's L2 cache controller has been
merged, which should un-block the EDAC framework work.
I've only tested this on QEMU again, as I didn't have time to get things
running on the Unleashed. The latest master from this morning merges in
cleanly and passes the tests as well.
This patch set rebased my "5.2 MW, Part 1" patch set, which included an
erroneous empty file. It's also a rebase of my "5.2 MW, Part 2" patch
set, in which I managed to create another file while attempting to
remove the empty file.
Sorry for all the noise!
Merge tag 'riscv-for-linus-5.2-mw2' of git://git.kernel.org/pub/scm/linux/kernel/git/palmer/riscv-linux
Pull RISC-V updates from Palmer Dabbelt:
"This contains an assortment of RISC-V related patches that I'd like to
target for the 5.2 merge window. Most of the patches are cleanups, but
there are a handful of user-visible changes:
- The nosmp and nr_cpus command-line arguments are now supported,
which work like normal.
- The SBI console no longer installs itself as a preferred console;
we rely on standard mechanisms (/chosen, command-line, heuristics)
instead.
- sbi_remote_sfence_vma{,_asid} now pass their arguments along to
the SBI call.
- Modules now support BUG().
- A missing sfence.vma during boot has been added. This bug only
manifests during boot.
- The arch/riscv support for SiFive's L2 cache controller has been
merged, which should un-block the EDAC framework work.
I've only tested this on QEMU again, as I didn't have time to get
things running on the Unleashed. The latest master from this morning
merges in cleanly and passes the tests as well"
* tag 'riscv-for-linus-5.2-mw2' of git://git.kernel.org/pub/scm/linux/kernel/git/palmer/riscv-linux: (31 commits)
riscv: fix locking violation in page fault handler
RISC-V: sifive_l2_cache: Add L2 cache controller driver for SiFive SoCs
RISC-V: Add DT documentation for SiFive L2 Cache Controller
RISC-V: Avoid using invalid intermediate translations
riscv: Support BUG() in kernel module
riscv: Add the support for c.ebreak check in is_valid_bugaddr()
riscv: support trap-based WARN()
riscv: fix sbi_remote_sfence_vma{,_asid}.
riscv: move switch_mm to its own file
riscv: move flush_icache_{all,mm} to cacheflush.c
tty: Don't force RISCV SBI console as preferred console
RISC-V: Access CSRs using CSR numbers
RISC-V: Add interrupt related SCAUSE defines in asm/csr.h
RISC-V: Use tabs to align macro values in asm/csr.h
RISC-V: Fix minor checkpatch issues.
RISC-V: Support nr_cpus command line option.
RISC-V: Implement nosmp commandline option.
RISC-V: Add RISC-V specific arch_match_cpu_phys_id
riscv: vdso: drop unnecessary cc-ldoption
riscv: call pm_power_off from machine_halt / machine_power_off
...
These generic-y defines do not have the corresponding generic header
in include/asm-generic/, so they are definitely invalid.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
The driver currently supports only SiFive FU540-C000 platform.
The initial version of L2 cache controller driver includes:
- Initial configuration reporting at boot up.
- Support for ECC related functionality.
Signed-off-by: Yash Shah <yash.shah@sifive.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
The macro __BUG_INSN is currently defined as the "ebreak" opcode.
The is_valid_bugaddr() function compares the instruction pointed to by
$sepc with __BUG_INSN to check whether the current trap exception was
caused by an "ebreak" instruction. However, this check may be wrong
because, when the C extension is supported, the assembler may emit the
compressed "c.ebreak" instruction instead of "ebreak". Therefore, we
need a mechanism that determines the length of the instruction at $sepc
and compares it against the correct trap instruction.
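To make the idea concrete, here is a minimal, hedged sketch of such a
length-aware check (the constants follow the RISC-V spec; the helper
name and the direct pointer dereference are illustrative, not the patch
itself):

/* Illustrative sketch only: a 32-bit "ebreak" encodes as 0x00100073,
 * a compressed "c.ebreak" as 0x9002.  The two low opcode bits say
 * whether the instruction at the trap PC is a compressed (16-bit) one. */
#define INSN_EBREAK	0x00100073UL
#define INSN_C_EBREAK	0x9002UL

static bool pc_is_break_insn(unsigned long pc)
{
	unsigned short lo = *(unsigned short *)pc;	/* assumes pc is readable */

	if ((lo & 0x3) != 0x3)				/* compressed instruction */
		return lo == INSN_C_EBREAK;
	return *(unsigned int *)pc == INSN_EBREAK;	/* full 32-bit instruction */
}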
Signed-off-by: Vincent Chen <vincentc@andestech.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
The WARN()-related functions now trigger a debug exception. This helps
developers analyze the cause of a WARN(), because if a debugger is
connected, control flow is transferred to the debugging environment.
Signed-off-by: Vincent Chen <vincentc@andestech.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
Currently sbi_remote_sfence_vma{,_asid} do not pass their arguments
to SBI at all, which is semantically incorrect.
Neither BBL nor OpenSBI uses these arguments at the moment, and they
just do a global flush instead. However, we still need to provide the
correct arguments.
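As a hedged illustration of what "correct arguments" means here (legacy
SBI v0.1 calling convention: extension id in a7, arguments in a0-a2;
the function name is made up for this sketch):

static inline void sbi_remote_sfence_vma_sketch(const unsigned long *hart_mask,
						unsigned long start,
						unsigned long size)
{
	register unsigned long a0 asm("a0") = (unsigned long)hart_mask;
	register unsigned long a1 asm("a1") = start;
	register unsigned long a2 asm("a2") = size;
	register unsigned long a7 asm("a7") = 6;	/* SBI_REMOTE_SFENCE_VMA */

	/* The firmware now actually receives the mask, start and size. */
	asm volatile ("ecall"
		      : "+r" (a0)
		      : "r" (a1), "r" (a2), "r" (a7)
		      : "memory");
}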
Signed-off-by: Gary Guo <gary@garyguo.net>
Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
switch_mm is an expensive operation that has two users.
flush_icache_deferred is only called within switch_mm and can be moved
along with it. The function is expected to become more complicated when
ASID support is added, so clean up eagerly.
By moving them to a separate file we also remove some excessive
dependencies on tlbflush.h and cacheflush.h.
Signed-off-by: Gary Guo <gary@garyguo.net>
Reviewed-by: Anup Patel <anup@brainfault.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
Currently, flush_icache_all is macro-expanded into an SBI call, yet no
asm/sbi.h is included in asm/cacheflush.h. It can be moved to
mm/cacheflush.c instead (the SBI call dominates performance-wise, so
there is no concern about it not being inlined).
Currently, flush_icache_mm lives in kernel/smp.c, which looks like a
hack to prevent it from being compiled when CONFIG_SMP=n. It should
also be in mm/cacheflush.c.
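A hedged sketch of the out-of-line form described above (the SBI helper
name matches the riscv tree of this era; this is not the verbatim patch):

/* mm/cacheflush.c -- the SBI remote fence dominates the cost, so an
 * ordinary, non-inlined function call is perfectly fine here. */
void flush_icache_all(void)
{
	sbi_remote_fence_i(NULL);	/* NULL hart mask: fence on all harts */
}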
Signed-off-by: Gary Guo <gary@garyguo.net>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
We should prefer accessing CSRs using their CSR numbers because:
1. It compiles fine with older toolchains.
2. We can use the latest CSR names in the #define macro names of CSR
numbers, as per the RISC-V spec.
3. We can access newly added CSRs even if the toolchain does not
recognize them by name (see the sketch after this list).
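A minimal, hedged illustration of the approach (0x140 is the
architectural number of the sscratch CSR; the helper name is made up
for this example):

#define CSR_SSCRATCH	0x140

static inline unsigned long csr_read_sscratch(void)
{
	unsigned long v;

	/* Numeric CSR operand, so even an assembler that does not know
	 * the name "sscratch" can still assemble this. */
	asm volatile ("csrr %0, %1" : "=r" (v) : "i" (CSR_SSCRATCH));
	return v;
}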
Signed-off-by: Anup Patel <anup.patel@wdc.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
This patch adds the SCAUSE interrupt flag and SCAUSE interrupt-related
defines to asm/csr.h. We also use these defines in kernel/irq.c and
express the SIE/SIP flags in terms of SCAUSE interrupt causes.
Signed-off-by: Anup Patel <anup.patel@wdc.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
The spacing between macro name and value is not consistent in
asm/csr.h. This patch beautifies asm/csr.h by using tabs to align
macro values instead of spaces.
Signed-off-by: Anup Patel <anup.patel@wdc.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
Merge tag 'audit-pr-20190507' of git://git.kernel.org/pub/scm/linux/kernel/git/pcmoore/audit
Pull audit updates from Paul Moore:
"We've got a reasonably broad set of audit patches for the v5.2 merge
window, the highlights are below:
- The biggest change, and the source of all the arch/* changes, is
the patchset from Dmitry to help enable some of the work he is
doing around PTRACE_GET_SYSCALL_INFO.
To be honest, including this in the audit tree is a bit of a
stretch, but it does help move audit a little further along towards
proper syscall auditing for all arches, and everyone else seemed to
agree that audit was a "good" spot for this to land (or maybe they
just didn't want to merge it? dunno.).
- We can now audit time/NTP adjustments.
- We continue the work to connect associated audit records into a
single event"
* tag 'audit-pr-20190507' of git://git.kernel.org/pub/scm/linux/kernel/git/pcmoore/audit: (21 commits)
audit: fix a memory leak bug
ntp: Audit NTP parameters adjustment
timekeeping: Audit clock adjustments
audit: purge unnecessary list_empty calls
audit: link integrity evm_write_xattrs record to syscall event
syscall_get_arch: add "struct task_struct *" argument
unicore32: define syscall_get_arch()
Move EM_UNICORE to uapi/linux/elf-em.h
nios2: define syscall_get_arch()
nds32: define syscall_get_arch()
Move EM_NDS32 to uapi/linux/elf-em.h
m68k: define syscall_get_arch()
hexagon: define syscall_get_arch()
Move EM_HEXAGON to uapi/linux/elf-em.h
h8300: define syscall_get_arch()
c6x: define syscall_get_arch()
arc: define syscall_get_arch()
Move EM_ARCOMPACT and EM_ARCV2 to uapi/linux/elf-em.h
audit: Make audit_log_cap and audit_copy_inode static
audit: connect LOGIN record to its syscall record
...
Merge tag 'arm64-mmiowb' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull mmiowb removal from Will Deacon:
"Remove Mysterious Macro Intended to Obscure Weird Behaviours (mmiowb())
Remove mmiowb() from the kernel memory barrier API and instead, for
architectures that need it, hide the barrier inside spin_unlock() when
MMIO has been performed inside the critical section.
The only relatively recent changes have been addressing review
comments on the documentation, which is in a much better shape thanks
to the efforts of Ben and Ingo.
I was initially planning to split this into two pull requests so that
you could run the coccinelle script yourself, however it's been plain
sailing in linux-next so I've just included the whole lot here to keep
things simple"
* tag 'arm64-mmiowb' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (23 commits)
docs/memory-barriers.txt: Update I/O section to be clearer about CPU vs thread
docs/memory-barriers.txt: Fix style, spacing and grammar in I/O section
arch: Remove dummy mmiowb() definitions from arch code
net/ethernet/silan/sc92031: Remove stale comment about mmiowb()
i40iw: Redefine i40iw_mmiowb() to do nothing
scsi/qla1280: Remove stale comment about mmiowb()
drivers: Remove explicit invocations of mmiowb()
drivers: Remove useless trailing comments from mmiowb() invocations
Documentation: Kill all references to mmiowb()
riscv/mmiowb: Hook up mmwiob() implementation to asm-generic code
powerpc/mmiowb: Hook up mmwiob() implementation to asm-generic code
ia64/mmiowb: Add unconditional mmiowb() to arch_spin_unlock()
mips/mmiowb: Add unconditional mmiowb() to arch_spin_unlock()
sh/mmiowb: Add unconditional mmiowb() to arch_spin_unlock()
m68k/io: Remove useless definition of mmiowb()
nds32/io: Remove useless definition of mmiowb()
x86/io: Remove useless definition of mmiowb()
arm64/io: Remove useless definition of mmiowb()
ARM/io: Remove useless definition of mmiowb()
mmiowb: Hook up mmiowb helpers to spinlocks and generic I/O accessors
...
Merge branch 'core-mm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull unified TLB flushing from Ingo Molnar:
"This contains the generic mmu_gather feature from Peter Zijlstra,
which is an all-arch unification of TLB flushing APIs, via the
following (broad) steps:
- enhance the <asm-generic/tlb.h> APIs to cover more arch details
- convert most TLB flushing arch implementations to the generic
<asm-generic/tlb.h> APIs.
- remove leftovers of per arch implementations
After this series every single architecture makes use of the unified
TLB flushing APIs"
* 'core-mm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
mm/resource: Use resource_overlaps() to simplify region_intersects()
ia64/tlb: Eradicate tlb_migrate_finish() callback
asm-generic/tlb: Remove tlb_table_flush()
asm-generic/tlb: Remove tlb_flush_mmu_free()
asm-generic/tlb: Remove CONFIG_HAVE_GENERIC_MMU_GATHER
asm-generic/tlb: Remove arch_tlb*_mmu()
s390/tlb: Convert to generic mmu_gather
asm-generic/tlb: Introduce CONFIG_HAVE_MMU_GATHER_NO_GATHER=y
arch/tlb: Clean up simple architectures
um/tlb: Convert to generic mmu_gather
sh/tlb: Convert SH to generic mmu_gather
ia64/tlb: Convert to generic mmu_gather
arm/tlb: Convert to generic mmu_gather
asm-generic/tlb, arch: Invert CONFIG_HAVE_RCU_TABLE_INVALIDATE
asm-generic/tlb, ia64: Conditionally provide tlb_migrate_finish()
asm-generic/tlb: Provide generic tlb_flush() based on flush_tlb_mm()
asm-generic/tlb, arch: Provide generic tlb_flush() based on flush_tlb_range()
asm-generic/tlb, arch: Provide generic VIPT cache flush
asm-generic/tlb, arch: Provide CONFIG_HAVE_MMU_GATHER_PAGE_SIZE
asm-generic/tlb: Provide a comment
This option is always enabled, and not supporting the A extension would
create a complete ABI trainwreck, so there is no point in even slightly
encouraging such an idea by keeping this unselectable code around.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
This matches what other heavily used architectures do, and will allow us
to easily use <asm-generic/uaccess.h> for the nommu case.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
In a bid to kill off explicit mmiowb() usage in driver code, hook up
the asm-generic mmiowb() tracking code for riscv, so that an mmiowb()
is automatically issued from spin_unlock() if an I/O write was performed
in the critical section.
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Hook up asm-generic/mmiowb.h to Kbuild for all architectures so that we
can subsequently include asm/mmiowb.h from core code.
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Merge tag 'trace-5.1-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace
Pull syscall-get-arguments cleanup and fixes from Steven Rostedt:
"Andy Lutomirski approached me to tell me that the
syscall_get_arguments() implementation in x86 was horrible and gcc
certainly gets it wrong.
He said that since the tracepoints only pass in 0 and 6 for i and n
respectively, it should be optimized for that case. Inspecting the
kernel, I discovered that all users pass in 0 for i and only one file
passing in something other than 6 for the number of arguments. That
code happens to be my own code used for the special syscall tracing.
That can easily be converted to just using 0 and 6 as well, and only
copying what is needed. Which is probably the faster path anyway for
that case.
Along the way, a couple of real fixes came from this as the
syscall_get_arguments() function was incorrect for csky and riscv.
x86 has been optimized for the new interface that removes the
variable number of arguments, but the other architectures could still
use some loving and take more advantage of the simpler interface"
* tag 'trace-5.1-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
syscalls: Remove start and number from syscall_set_arguments() args
syscalls: Remove start and number from syscall_get_arguments() args
csky: Fix syscall_get_arguments() and syscall_set_arguments()
riscv: Fix syscall_get_arguments() and syscall_set_arguments()
tracing/syscalls: Pass in hardcoded 6 into syscall_get_arguments()
ptrace: Remove maxargs from task_current_syscall()
RISC-V syscall arguments are located in orig_a0,a1..a5 fields
of struct pt_regs.
Due to an off-by-one bug and a bug in pointer arithmetic,
syscall_get_arguments() was reading the s3..s7 fields instead of a1..a5.
Likewise, syscall_set_arguments() was writing the s3..s7 fields
instead of a1..a5.
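A hedged sketch of the corrected behaviour (struct pt_regs field names
per arch/riscv; simplified to the fixed 0..5 argument window that the
series settles on):

static void riscv_syscall_get_args_sketch(struct pt_regs *regs,
					  unsigned long *args)
{
	/* Argument 0 is preserved in orig_a0; arguments 1..5 live in the
	 * contiguous a1..a5 fields, not in s3..s7. */
	args[0] = regs->orig_a0;
	memcpy(&args[1], &regs->a1, 5 * sizeof(args[0]));
}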
Link: http://lkml.kernel.org/r/20190329171221.GA32456@altlinux.org
Fixes: e2c0cdfba7 ("RISC-V: User-facing API")
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Will Drewry <wad@chromium.org>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: linux-riscv@lists.infradead.org
Cc: stable@vger.kernel.org # v4.15+
Acked-by: Palmer Dabbelt <palmer@sifive.com>
Signed-off-by: Dmitry V. Levin <ldv@altlinux.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Provide a generic tlb_flush() implementation that relies on
flush_tlb_range(). This is a little awkward because flush_tlb_range()
assumes a VMA for range invalidation, but we no longer have one.
Audit of all flush_tlb_range() implementations shows only vma->vm_mm
and vma->vm_flags are used, and of the latter only VM_EXEC (I-TLB
invalidates) and VM_HUGETLB (large TLB invalidate) are used.
Therefore, track VM_EXEC and VM_HUGETLB in two more bits, and create a
'fake' VMA.
This allows architectures that have a reasonably efficient
flush_tlb_range() to not require any additional effort.
No change in behavior intended.
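A hedged sketch of the 'fake VMA' idea described above (the series
tracks the two flags as bits in struct mmu_gather; the field names used
here are illustrative):

static inline void tlb_flush_via_range(struct mmu_gather *tlb)
{
	/* Build just enough of a VMA for flush_tlb_range(): only vm_mm,
	 * VM_EXEC and VM_HUGETLB are ever looked at. */
	struct vm_area_struct vma = {
		.vm_mm    = tlb->mm,
		.vm_flags = (tlb->vma_exec ? VM_EXEC    : 0) |
			    (tlb->vma_huge ? VM_HUGETLB : 0),
	};

	flush_tlb_range(&vma, tlb->start, tlb->end);
}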
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Nick Piggin <npiggin@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
A memory store to an 8-byte variable on RV32 is divided into two sw
instructions in the put_user() macro. The current fixup returns
execution flow to the second sw instead of the instruction after it.
This patch fixes the fixup code to match the load access part.
Signed-off-by: Alan Kao <alankao@andestech.com>
Cc: Greentime Hu <greentime@andestech.com>
Cc: Vincent Chen <deanbo422@gmail.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 updates from Catalin Marinas:
- Pseudo NMI support for arm64 using GICv3 interrupt priorities
- uaccess macros clean-up (unsafe user accessors also merged but
reverted, waiting for objtool support on arm64)
- ptrace regsets for Pointer Authentication (ARMv8.3) key management
- inX() ordering w.r.t. delay() on arm64 and riscv (acks in place by
the riscv maintainers)
- arm64/perf updates: PMU bindings converted to json-schema, unused
variable and misleading comment removed
- arm64/debug fixes to ensure checking of the triggering exception
level and to avoid the propagation of the UNKNOWN FAR value into the
si_code for debug signals
- Workaround for Fujitsu A64FX erratum 010001
- lib/raid6 ARM NEON optimisations
- NR_CPUS now defaults to 256 on arm64
- Minor clean-ups (documentation/comments, Kconfig warning, unused
asm-offsets, clang warnings)
- MAINTAINERS update for list information to the ARM64 ACPI entry
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (54 commits)
arm64: mmu: drop paging_init comments
arm64: debug: Ensure debug handlers check triggering exception level
arm64: debug: Don't propagate UNKNOWN FAR into si_code for debug signals
Revert "arm64: uaccess: Implement unsafe accessors"
arm64: avoid clang warning about self-assignment
arm64: Kconfig.platforms: fix warning unmet direct dependencies
lib/raid6: arm: optimize away a mask operation in NEON recovery routine
lib/raid6: use vdupq_n_u8 to avoid endianness warnings
arm64: io: Hook up __io_par() for inX() ordering
riscv: io: Update __io_[p]ar() macros to take an argument
asm-generic/io: Pass result of I/O accessor to __io_[p]ar()
arm64: Add workaround for Fujitsu A64FX erratum 010001
arm64: Rename get_thread_info()
arm64: Remove documentation about TIF_USEDFPU
arm64: irqflags: Fix clang build warnings
arm64: Enable the support of pseudo-NMIs
arm64: Skip irqflags tracing for NMI in IRQs disabled context
arm64: Skip preemption when exiting an NMI
arm64: Handle serror in NMI context
irqchip/gic-v3: Allow interrupts to be set as pseudo-NMI
...
Merge tag 'riscv-for-linus-5.1-mw0' of git://git.kernel.org/pub/scm/linux/kernel/git/palmer/riscv-linux
Pull RISC-V updates from Palmer Dabbelt:
"This contains the vast majority of the RISC-V patches for this merge
window. It includes:
- A handful of cleanups to our kernel prints, most of which are
things I should have caught the first time.
- We now provide an HWCAP that contains the ISA extensions that all
enabled processors support, as opposed to just looking at the
first enabled processor.
- We no longer spin forever waiting for all harts to boot.
- A fixmap implementation, which is coupled to some cleanups in our
MM code.
The only outstanding patches I know of right now are Vincent Chen's
patches to fix c.ebreak handling in the kernel, the v2 of which was
posted this morning. I'd like those in the MW, but I didn't want to
hold up everything else. The patch set is based on top of my last
fixes submission, but I've tested it with a conflict-free merge from
v5.0. I'm doing this rather than my "just go rebase everything" flow
due to a discussion with Linus, but if I misunderstood then just let
me know and I'll do something else. It's also the first time I've
taken a PR into my own tree, so let me know if I screwed that one up.
I've used my standard testing flow (QEMU in Fedora), but now that
we're starting to get the kernel in better shape I think it's time to
impose some more testing here -- specifically I'm going to require
that patches boot on the HiFive Unleashed because we're getting to the
point where we can actually expect that to work. I haven't done that
for this tag, but I'm going to do it for future ones.
I know the board is a bit expensive and not everyone has one, but if
I've sent you a free one and your patches break the boot then I'm
going to yell at you :). If you don't have one then please indicate
how you tested in your cover letter, and if you have a board then
please add your Tested-by to patches if they work for your testing
flow"
* tag 'riscv-for-linus-5.1-mw0' of git://git.kernel.org/pub/scm/linux/kernel/git/palmer/riscv-linux:
arch: riscv: fix logic error in parse_dtb
RISC-V: Assign hwcap as per comman capabilities.
RISC-V: Compare cpuid with NR_CPUS before mapping.
RISC-V: Allow hartid-to-cpuid function to fail.
RISC-V: Remove NR_CPUs check during hartid search from DT
RISC-V: Move cpuid to hartid mapping to SMP.
RISC-V: Do not wait indefinitely in __cpu_up
RISC-V: Free-up initrd in free_initrd_mem()
RISC-V: Implement compile-time fixed mappings
RISC-V: Move setup_vm() to mm/init.c
RISC-V: Move setup_bootmem() to mm/init.c
RISC-V: Setup init_mm before parse_early_param()
riscv: remove the HAVE_KPROBES option
riscv: use for_each_of_cpu_node iterator
riscv: treat cpu devicetree nodes without status as enabled
riscv: fix riscv_of_processor_hartid() comment
riscv: use pr_info and friends
riscv: add missing newlines to printk messages
This patchset does the following:
1. Moves MM-related code from kernel/setup.c to mm/init.c
2. Implements compile-time fixed mappings
Using fixed mappings, we get early prints even without SBI calls.
For example, we can now use the kernel parameter
"earlycon=uart8250,mmio,0x10000000"
to get early prints on the QEMU virt machine without using SBI calls.
The patchset is tested on QEMU virt machine.
Palmer: It looks like some of the code movement here conflicted with the
patches to move hartid handling around. As far as I can tell the only
changed code was in smp_setup_processor_id(), and I've kept the one in
smp.c.
Every in-kernel use of this function defined it to KERNEL_DS (either as
an actual define, or as an inline function). It's an entirely
historical artifact, and long long long ago used to actually read the
segment selector value of '%ds' on x86.
Which in the kernel is always KERNEL_DS.
Inspired by a patch from Jann Horn that just did this for a very small
subset of users (the ones in fs/), along with Al who suggested a script.
I then just took it to the logical extreme and removed all the remaining
gunk.
Roughly scripted with
git grep -l '(get_ds())' -- :^tools/ | xargs sed -i 's/(get_ds())/(KERNEL_DS)/'
git grep -lw 'get_ds' -- :^tools/ | xargs sed -i '/^#define get_ds()/d'
plus manual fixups to remove a few unusual usage patterns, the couple of
inline function cases and to fix up a comment that had become stale.
The 'get_ds()' function remains in an x86 kvm selftest, since in user
space it actually does something relevant.
Inspired-by: Jann Horn <jannh@google.com>
Inspired-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Currently, the logical CPU id to physical hartid mapping is defined for
both smp and non-smp configurations. This is not required, as we only
need it for the smp configuration; for the non-smp case the mapping can
simply use boot_cpu_hartid directly.
The reverse mapping function, i.e. hartid to cpuid, can be called for
any valid but not-yet-booted hart, so it should return the default
cpu 0 only for the boot hartid.
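A hedged sketch of what the non-smp fallback described above can look
like (macro and variable names follow the riscv tree; the reverse-lookup
body is illustrative):

#ifndef CONFIG_SMP
/* With a single boot hart there is nothing to map. */
#define cpuid_to_hartid_map(cpu)	boot_cpu_hartid

static inline int riscv_hartid_to_cpuid(int hartid)
{
	/* Only the boot hart corresponds to a logical CPU (cpu 0). */
	return hartid == boot_cpu_hartid ? 0 : -1;
}
#endif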
Signed-off-by: Atish Patra <atish.patra@wdc.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
The definitions of the __io_[p]ar() macros in asm-generic/io.h take the
value returned by the preceding I/O read as an argument so that
architectures can use this to create order with a subsequent delayX()
routine using a dependency.
Update the riscv barrier definitions to match, although the argument
is currently unused.
Suggested-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch implements compile-time virtual to physical mappings. These
compile-time fixed mappings can be used by earlycon, ACPI, and early
ioremap for creating fixed mappings when FIX_EARLYCON_MEM=y.
To start with, we have enabled compile-time fixed mappings for earlycon.
Signed-off-by: Anup Patel <anup.patel@wdc.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
setup_bootmem() mainly populates memblocks and does early memory
reservations. The right location for this function is mm/init.c. It
calls setup_initrd(), so we move that as well.
Signed-off-by: Anup Patel <anup.patel@wdc.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>
Previously, invalid PTEs and swap PTEs had the same binary
representation, causing errors when attempting to unmap PROT_NONE
mappings, including implicit unmap on exit.
Typical error:
swap_info_get: Bad swap file entry 40000000007a9879
BUG: Bad page map in process a.out pte:3d4c3cc0 pmd:3e521401
Cc: stable@vger.kernel.org
Signed-off-by: Stefan O'Rear <sorear2@gmail.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
This ratio is the most commonly used among the other architectures and
makes the icache_hygiene libhugetlbfs test pass: this test mmaps lots
of hugepages whose addresses, without this patch, reach the end of the
process user address space.
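As a hedged, one-line illustration of the ratio change (the macro name
follows the usual processor.h convention; the exact diff may differ):

/* Place the mmap base at a third of the task size, like most arches. */
#define TASK_UNMAPPED_BASE	PAGE_ALIGN(TASK_SIZE / 3)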
Signed-off-by: Alexandre Ghiti <aghiti@upmem.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
This is sort of a mix between a new feature and a bug fix. I've managed
to screw up merging this patch set a handful of times but I think it's
OK this time around. The main new feature here is audit support for
RISC-V, with some fixes to audit-related bugs that cropped up along the
way:
* The addition of NR_syscalls into unistd.h, which is necessary for
CONFIG_FTRACE_SYSCALLS.
* The definition of CREATE_TRACE_POINTS so
__tracepoint_sys_{enter,exit} get defined.
* A fix for trace_sys_exit() so we can enable
CONFIG_HAVE_SYSCALL_TRACEPOINTS.
This macro is used by kernel/trace/{trace.h,trace_syscalls.c} if we
have CONFIG_FTRACE_SYSCALLS enabled.
Signed-off-by: David Abdurachmanov <david.abdurachmanov@gmail.com>
Fixes: b78002b395b4 ("riscv: add HAVE_SYSCALL_TRACEPOINTS to Kconfig")
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
On RISC-V (riscv) audit is supported through generic lib/audit.c.
The patch adds required arch specific definitions.
Signed-off-by: David Abdurachmanov <david.abdurachmanov@gmail.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
This patch supports the dynamic generation of GOT and PLT sections on
rv32. It contains the following modifications:
- Always enable MODULE_SECTIONS (on both rv64 and rv32)
- Change the fixed-size types.
This patch has been tested with the following modules:
btrfs 6795991 0 - Live 0xa544b000
test_static_keys 17304 0 - Live 0xa28be000
zstd_compress 1198986 1 btrfs, Live 0xa2a25000
zstd_decompress 608112 1 btrfs, Live 0xa24e7000
lzo 8787 0 - Live 0xa2049000
xor 27461 1 btrfs, Live 0xa2041000
zram 78849 0 - Live 0xa2276000
netdevsim 55909 0 - Live 0xa202d000
tun 211534 0 - Live 0xa21b5000
fuse 566049 0 - Live 0xa25fb000
nfs_layout_flexfiles 192597 0 - Live 0xa229b000
ramoops 74895 0 - Live 0xa2019000
xfs 3973221 0 - Live 0xa507f000
libcrc32c 3053 2 btrfs,xfs, Live 0xa34af000
lzo_compress 17302 2 btrfs,lzo, Live 0xa347d000
lzo_decompress 7178 2 btrfs,lzo, Live 0xa3451000
raid6_pq 142086 1 btrfs, Live 0xa33a4000
reed_solomon 31022 1 ramoops, Live 0xa31eb000
test_bitmap 3734 0 - Live 0xa31af000
test_bpf 1588736 0 - Live 0xa2c11000
test_kmod 41161 0 - Live 0xa29f8000
test_module 1356 0 - Live 0xa299e000
test_printf 6024 0 [permanent], Live 0xa2971000
test_static_key_base 5797 1 test_static_keys, Live 0xa2931000
test_user_copy 4382 0 - Live 0xa28c9000
xxhash 70501 2 zstd_compress,zstd_decompress, Live 0xa2055000
Signed-off-by: Zong Li <zong@andestech.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
This commit removes redundant generic-y defines in
arch/riscv/include/asm/Kbuild.
[1] It is redundant to define the same generic-y in both
arch/$(ARCH)/include/asm/Kbuild and
arch/$(ARCH)/include/uapi/asm/Kbuild.
Remove the following generic-y:
errno.h
fcntl.h
ioctl.h
ioctls.h
ipcbuf.h
mman.h
msgbuf.h
param.h
poll.h
posix_types.h
resource.h
sembuf.h
setup.h
shmbuf.h
signal.h
socket.h
sockios.h
stat.h
statfs.h
swab.h
termbits.h
termios.h
types.h
[2] It is redundant to define generic-y when arch-specific
implementation exists in arch/$(ARCH)/include/asm/*.h
Remove the following generic-y:
cacheflush.h
module.h
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Merge more updates from Andrew Morton:
- procfs updates
- various misc bits
- lib/ updates
- epoll updates
- autofs
- fatfs
- a few more MM bits
* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (58 commits)
mm/page_io.c: fix polled swap page in
checkpatch: add Co-developed-by to signature tags
docs: fix Co-Developed-by docs
drivers/base/platform.c: kmemleak ignore a known leak
fs: don't open code lru_to_page()
fs/: remove caller signal_pending branch predictions
mm/: remove caller signal_pending branch predictions
arch/arc/mm/fault.c: remove caller signal_pending_branch predictions
kernel/sched/: remove caller signal_pending branch predictions
kernel/locking/mutex.c: remove caller signal_pending branch predictions
mm: select HAVE_MOVE_PMD on x86 for faster mremap
mm: speed up mremap by 20x on large regions
mm: treewide: remove unused address argument from pte_alloc functions
initramfs: cleanup incomplete rootfs
scripts/gdb: fix lx-version string output
kernel/kcov.c: mark write_comp_data() as notrace
kernel/sysctl: add panic_print into sysctl
panic: add options to print system info when panic happens
bfs: extra sanity checking and static inode bitmap
exec: separate MM_ANONPAGES and RLIMIT_STACK accounting
...
Patch series "Add support for fast mremap".
This series speeds up the mremap(2) syscall by copying page tables at
the PMD level even for non-THP systems. There is concern that the extra
'address' argument that mremap passes to pte_alloc may do something
subtle architecture related in the future that may make the scheme not
work. Also we find that there is no point in passing the 'address' to
pte_alloc since its unused. This patch therefore removes this argument
tree-wide resulting in a nice negative diff as well. Also ensuring
along the way that the enabled architectures do not do anything funky
with the 'address' argument that goes unnoticed by the optimization.
Build and boot tested on x86-64. Build tested on arm64. The config
enablement patch for arm64 will be posted in the future after more
testing.
The changes were obtained by applying the following Coccinelle script.
(thanks Julia for answering all Coccinelle questions!).
The following fix-ups were done manually:
* Removal of address argument from pte_fragment_alloc
* Removal of pte_alloc_one_fast definitions from m68k and microblaze.
// Options: --include-headers --no-includes
// Note: I split the 'identifier fn' line, so if you are manually
// running it, please unsplit it so it runs for you.
virtual patch
@pte_alloc_func_def depends on patch exists@
identifier E2;
identifier fn =~
"^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$";
type T2;
@@
fn(...
- , T2 E2
)
{ ... }
@pte_alloc_func_proto_noarg depends on patch exists@
type T1, T2, T3, T4;
identifier fn =~ "^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$";
@@
(
- T3 fn(T1, T2);
+ T3 fn(T1);
|
- T3 fn(T1, T2, T4);
+ T3 fn(T1, T2);
)
@pte_alloc_func_proto depends on patch exists@
identifier E1, E2, E4;
type T1, T2, T3, T4;
identifier fn =~
"^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$";
@@
(
- T3 fn(T1 E1, T2 E2);
+ T3 fn(T1 E1);
|
- T3 fn(T1 E1, T2 E2, T4 E4);
+ T3 fn(T1 E1, T2 E2);
)
@pte_alloc_func_call depends on patch exists@
expression E2;
identifier fn =~
"^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$";
@@
fn(...
-, E2
)
@pte_alloc_macro depends on patch exists@
identifier fn =~
"^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$";
identifier a, b, c;
expression e;
position p;
@@
(
- #define fn(a, b, c) e
+ #define fn(a, b) e
|
- #define fn(a, b) e
+ #define fn(a) e
)
Link: http://lkml.kernel.org/r/20181108181201.88826-2-joelaf@google.com
Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Suggested-by: Kirill A. Shutemov <kirill@shutemov.name>
Acked-by: Kirill A. Shutemov <kirill@shutemov.name>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Julia Lawall <Julia.Lawall@lip6.fr>
Cc: Kirill A. Shutemov <kirill@shutemov.name>
Cc: William Kucharski <william.kucharski@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Nobody has actually used the type (VERIFY_READ vs VERIFY_WRITE) argument
of the user address range verification function since we got rid of the
old racy i386-only code to walk page tables by hand.
It existed because the original 80386 would not honor the write protect
bit when in kernel mode, so you had to do COW by hand before doing any
user access. But we haven't supported that in a long time, and these
days the 'type' argument is a purely historical artifact.
A discussion about extending 'user_access_begin()' to do the range
checking resulted in this patch, because there is no way we're going to
move the old VERIFY_xyz interface to that model. And it's best done at
the end of the merge window when I've done most of my merges, so let's
just get this done once and for all.
This patch was mostly done with a sed-script, with manual fix-ups for
the cases that weren't of the trivial 'access_ok(VERIFY_xyz' form.
There were a couple of notable cases:
- csky still had the old "verify_area()" name as an alias.
- the iter_iov code had magical hardcoded knowledge of the actual
values of VERIFY_{READ,WRITE} (not that they mattered, since nothing
really used it)
- microblaze used the type argument for a debug printout
but other than those oddities this should be a total no-op patch.
I tried to fix up all architectures, did fairly extensive grepping for
access_ok() uses, and the changes are trivial, but I may have missed
something. Any missed conversion should be trivially fixable, though.
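A hedged before/after illustration of a typical call-site conversion
(not taken verbatim from the patch):

long copy_example(void __user *buf, size_t len)
{
	/* before: if (!access_ok(VERIFY_WRITE, buf, len)) return -EFAULT; */
	if (!access_ok(buf, len))
		return -EFAULT;
	return 0;
}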
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'dma-mapping-4.21' of git://git.infradead.org/users/hch/dma-mapping
Pull DMA mapping updates from Christoph Hellwig:
"A huge update this time, but a lot of that is just consolidating or
removing code:
- provide a common DMA_MAPPING_ERROR definition and avoid indirect
calls for dma_map_* error checking
- use direct calls for the DMA direct mapping case, avoiding huge
retpoline overhead for high performance workloads
- merge the swiotlb dma_map_ops into dma-direct
- provide a generic remapping DMA consistent allocator for
architectures that have devices that perform DMA that is not cache
coherent. Based on the existing arm64 implementation and also used
for csky now.
- improve the dma-debug infrastructure, including dynamic allocation
of entries (Robin Murphy)
- default to providing chaining scatterlist everywhere, with opt-outs
for the few architectures (alpha, parisc, most arm32 variants) that
can't cope with it
- misc sparc32 dma-related cleanups
- remove the dma_mark_clean arch hook used by swiotlb on ia64 and
replace it with the generic noncoherent infrastructure
- fix the return type of dma_set_max_seg_size (Niklas Söderlund)
- move the dummy dma ops for not DMA capable devices from arm64 to
common code (Robin Murphy)
- ensure dma_alloc_coherent returns zeroed memory to avoid kernel
data leaks through userspace. We already did this for most common
architectures, but this ensures we do it everywhere.
dma_zalloc_coherent has been deprecated and can hopefully be
removed after -rc1 with a coccinelle script"
* tag 'dma-mapping-4.21' of git://git.infradead.org/users/hch/dma-mapping: (73 commits)
dma-mapping: fix inverted logic in dma_supported
dma-mapping: deprecate dma_zalloc_coherent
dma-mapping: zero memory returned from dma_alloc_*
sparc/iommu: fix ->map_sg return value
sparc/io-unit: fix ->map_sg return value
arm64: default to the direct mapping in get_arch_dma_ops
PCI: Remove unused attr variable in pci_dma_configure
ia64: only select ARCH_HAS_DMA_COHERENT_TO_PFN if swiotlb is enabled
dma-mapping: bypass indirect calls for dma-direct
vmd: use the proper dma_* APIs instead of direct methods calls
dma-direct: merge swiotlb_dma_ops into the dma_direct code
dma-direct: use dma_direct_map_page to implement dma_direct_map_sg
dma-direct: improve addressability error reporting
swiotlb: remove dma_mark_clean
swiotlb: remove SWIOTLB_MAP_ERROR
ACPI / scan: Refactor _CCA enforcement
dma-mapping: factor out dummy DMA ops
dma-mapping: always build the direct mapping code
dma-mapping: move dma_cache_sync out of line
dma-mapping: move various slow path functions out of line
...