Commit Graph

Pavel Begunkov
3730aebbda io_uring: disable ENTER_EXT_ARG_REG for IOPOLL
IOPOLL doesn't use the extended arguments, so there is no need for it to
support IORING_ENTER_EXT_ARG_REG. Let's disable it for IOPOLL; if anything,
that leaves more space for future extensions.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/a35ecd919dbdc17bd5b7932273e317832c531b45.1731689588.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-11-15 09:58:34 -07:00
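A minimal sketch of the gating described above, assuming the check sits in the
io_uring_enter() flag-validation path (IORING_SETUP_IOPOLL and
IORING_ENTER_EXT_ARG_REG are real uapi flags; the placement and error code are
assumptions):

  /* sketch: reject the registered extended-argument form on IOPOLL rings */
  if ((ctx->flags & IORING_SETUP_IOPOLL) &&
      (flags & IORING_ENTER_EXT_ARG_REG))
          return -EINVAL;
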
Pavel Begunkov
68685fa20e io_uring: fortify io_pin_pages with a warning
We're a bit too frivolous with the types of nr_pages arguments, converting
them to long and back to int, passing an unsigned int pointer as an int
pointer and so on. This shouldn't cause any problems, but it should be
carefully reviewed; until then, let's add a WARN_ON_ONCE check to be more
confident that callers don't pass poorly checked arguments.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/d48e0c097cbd90fb47acaddb6c247596510d8cfc.1731689588.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-11-15 09:58:34 -07:00
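A hedged sketch of the kind of guard being added, assuming it sits near the
top of io_pin_pages(); the exact condition and return value are assumptions,
not the actual patch:

  /* sketch: warn once and bail out if the computed page count looks bogus */
  if (WARN_ON_ONCE(!nr_pages || nr_pages > INT_MAX))
          return ERR_PTR(-EINVAL);
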
Al Viro
56cec28dc4 switch io_msg_ring() to CLASS(fd)
Use CLASS(fd) to get the file for sync message ring requests, rather
than open-code the file retrieval dance.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Link: https://lore.kernel.org/r/20241115034902.GP3387508@ZenIV
[axboe: make a more coherent commit message]
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-11-15 09:55:54 -07:00
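For readers unfamiliar with the scope-based helper, a minimal sketch of the
CLASS(fd) pattern replacing an open-coded fdget()/fdput() pair (CLASS(fd),
fd_empty() and fd_file() are the real helpers; the field name and surrounding
msg_ring code are assumptions):

  CLASS(fd, f)(msg->src_fd);     /* auto-released when f goes out of scope */
  if (fd_empty(f))
          return -EBADF;
  /* ... use fd_file(f) within this scope; no explicit fdput() needed ... */
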
Wangyang Guo
85f0d8e39a workqueue: Reduce expensive locks for unbound workqueue
For an unbound workqueue, pwqs usually map to just a few pools. Most of
the time, pwqs will be linked sequentially to wq->pwqs list by cpu
index.  Usually, consecutive CPUs have the same workqueue attribute
(e.g. belong to the same NUMA node). This makes pwqs with the same
pool cluster together in the pwq list.

Only do lock/unlock if the pool has changed in flush_workqueue_prep_pwqs().
This reduces the number of expensive lock operations.

The performance data shows this change boosts FIO by 65x in some cases
when multiple concurrent threads write to xfs mount points with fsync.

FIO Benchmark Details
- FIO version: v3.35
- FIO Options: ioengine=libaio,iodepth=64,norandommap=1,rw=write,
  size=128M,bs=4k,fsync=1
- FIO Job Configs: 64 jobs in total writing to 4 mount points (ramdisks
  formatted as xfs file system).
- Kernel Codebase: v6.12-rc5
- Test Platform: Xeon 8380 (2 sockets)

Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Wangyang Guo <wangyang.guo@intel.com>
Reviewed-by: Lai Jiangshan <jiangshanlai@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
2024-11-15 06:43:39 -10:00
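A hedged sketch of the locking pattern described for
flush_workqueue_prep_pwqs(): remember the last pool and only cycle its lock
when the next pwq maps to a different pool (loop body simplified; not the
literal diff):

  struct worker_pool *pool = NULL;

  for_each_pwq(pwq, wq) {
          if (pwq->pool != pool) {
                  if (pool)
                          raw_spin_unlock_irq(&pool->lock);
                  pool = pwq->pool;
                  raw_spin_lock_irq(&pool->lock);
          }
          /* ... per-pwq flush accounting, done under pool->lock ... */
  }
  if (pool)
          raw_spin_unlock_irq(&pool->lock);
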
Ard Biesheuvel
e6384c3984 efi/libstub: Parse builtin command line after bootloader provided one
When CONFIG_CMDLINE_EXTEND is set, the core kernel command line handling
logic appends CONFIG_CMDLINE to the bootloader provided command line.
The EFI stub does the opposite, and parses the builtin one first.

The usual behavior of command line options is that the last one takes
precedence if it appears multiple times, unless there is a meaningful
way to combine them. In either case, having the stub parse the builtin
command line first, while the core kernel does it in the opposite order, is
likely to produce inconsistent results.

Therefore, switch the order in the stub to match the core kernel.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2024-11-15 17:40:10 +01:00
Nicolas Saenz Julienne
21b1a7f7ae x86/efi: Apply EFI Memory Attributes after kexec
Kexec bypasses EFI's switch to virtual mode. In exchange, it has its own
routine, kexec_enter_virtual_mode(), which replays the mappings made by
the original kernel. Unfortunately, that function fails to reinstate
EFI's memory attributes, which would've otherwise been set after
entering virtual mode. Remediate this by calling
efi_runtime_update_mappings() within kexec's routine.

Signed-off-by: Nicolas Saenz Julienne <nsaenz@amazon.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2024-11-15 17:40:00 +01:00
Nicolas Saenz Julienne
7eb4e1dd71 x86/efi: Drop support for the EFI_PROPERTIES_TABLE
Drop support for the EFI_PROPERTIES_TABLE. It was a failed, short-lived
experiment that broke the boot both on Linux and Windows, and was
replaced by the EFI_MEMORY_ATTRIBUTES_TABLE shortly after.

Suggested-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Nicolas Saenz Julienne <nsaenz@amazon.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2024-11-15 17:40:00 +01:00
Yonghong Song
4ff04abf9d bpf: Add necessary migrate_disable to range_tree.
When running bpf selftest (./test_progs -j), the following warnings
showed up:

  $ ./test_progs -t arena_atomics
  ...
  BUG: using smp_processor_id() in preemptible [00000000] code: kworker/u19:0/12501
  caller is bpf_mem_free+0x128/0x330
  ...
  Call Trace:
   <TASK>
   dump_stack_lvl
   check_preemption_disabled
   bpf_mem_free
   range_tree_destroy
   arena_map_free
   bpf_map_free_deferred
   process_scheduled_works
   ...

For the arena_htab and arena_list selftests, similar smp_processor_id()
BUGs are dumped; the following are two of the stack traces:

   <TASK>
   dump_stack_lvl
   check_preemption_disabled
   bpf_mem_alloc
   range_tree_set
   arena_map_alloc
   map_create
   ...

   <TASK>
   dump_stack_lvl
   check_preemption_disabled
   bpf_mem_alloc
   range_tree_clear
   arena_vm_fault
   do_pte_missing
   handle_mm_fault
   do_user_addr_fault
   ...

Add migrate_{disable,enable}() around related bpf_mem_{alloc,free}()
calls to fix the issue.

Fixes: b795379757 ("bpf: Introduce range_tree data structure and use it in bpf arena")
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20241115060354.2832495-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-11-15 08:11:53 -08:00
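A minimal sketch of the fix pattern (the allocator handle and call sites shown
here are assumptions; the actual patch wraps the bpf_mem_alloc()/bpf_mem_free()
calls made by the range_tree helpers):

  migrate_disable();              /* keep the per-CPU allocator consistent */
  node = bpf_mem_alloc(&bpf_global_ma, sizeof(*node));
  migrate_enable();

  /* ... and symmetrically around the free path ... */
  migrate_disable();
  bpf_mem_free(&bpf_global_ma, node);
  migrate_enable();
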
Viktor Malik
ab4dc30c53 bpf: Do not alloc arena on unsupported arches
Do not allocate BPF arena on arches that do not support it, instead
return EOPNOTSUPP. This is useful to prevent bugs such as soft lockups
while trying to free the arena which we have witnessed on ppc64le [1].

[1] https://lore.kernel.org/bpf/4afdcb50-13f2-4772-8db1-3fd02bd985b3@redhat.com/

Signed-off-by: Viktor Malik <vmalik@redhat.com>
Link: https://lore.kernel.org/r/20241115082548.74972-1-vmalik@redhat.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-11-15 08:10:13 -08:00
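A hedged sketch of the guard, assuming it is placed early in arena_map_alloc()
and keys off the JIT capability the arena relies on (bpf_jit_supports_arena()
is an existing helper; its use here is an assumption):

  /* sketch: arches without arena JIT support must refuse allocation early */
  if (!bpf_jit_supports_arena())
          return ERR_PTR(-EOPNOTSUPP);
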
Arnd Bergmann
b77587ac51 FSL SOC changes for 6.13:
- Fix a missing of_node_put() in RCPM
 - Fix a missing error code on failure in CPM1 QMC
 - Switch to using for_each_available_child_of_node_scoped() in CPM1 TSA
 -----BEGIN PGP SIGNATURE-----
 
 iJIEABYKADoWIQQQ/+b4s5DeF6zCYyNoqS/rAbjdeAUCZzYBFRwcY2hyaXN0b3Bo
 ZS5sZXJveUBjc2dyb3VwLmV1AAoJEGipL+sBuN14vbcBALObPpE0CcTKWee4PMKu
 iEMnznKztaety2HsU8EVmYVPAP9nYGA/3EkGUMiEwh2+Kc/0/pIIIXRveCt2xWq4
 fBOACQ==
 =WD3l
 -----END PGP SIGNATURE-----
gpgsig -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEiK/NIGsWEZVxh/FrYKtH/8kJUicFAmc3WIQACgkQYKtH/8kJ
 UicDwg//c8rH7NJoAI1oSwwJMk7HBai2rNxvZSldfsOskVa7C2CRie7oSVhZIcE3
 X0JeEIu5byqZl60wqNalDPdSKUwWr+tNybU6iznViyf7dDRpPzy/gGd4TmlKj9Fh
 ZvrxtT4atMgTHeHkvZm7ev0xk+zKF/7ffsiyMrdX/BMbrHmaQVLVEyAcS1eyDCNY
 iwpWmv1WdlU352TleM1sd0oTHQ9D9d/7xOFnlZKddfOi7LYZBWok6BVvoolE1j3l
 CjJwtD0ILsGvoYKM+Fdh55qz3spPWyLE7VgWUyEnhuCNA/c19fhyS3yvZbwEVyvW
 EDlnIHQBgthVsOW6I7coBUqCXPwk9fddmbMijkyyW4YCvGwAHExkyQTL3inwY/wK
 x/X9v8//DnjN1T/M+UdkXz68lBeMfPvcuWepI1mDebWdSRCmcG11wZIM8qszOLoe
 8VqRTcNZ/FaHmHhwWSy7U8fWbfdg55NirhT2bdgnuiuTDzuVofNCsKzSN5MzwZFi
 SCeypvwFstfWfTE3yh0l1p2iJoxZ8nXsda/p9VT4JeHIwppgyUJ5s8wzhOlffmiS
 OFQimIkcXaqNtQKZmzXUXRI3ZQHk62Q/isWy14ZMVVr99SdFxr5Src4VpFc9fjxb
 JJz4Bh/+5NqHcFbXJ3l5zIc/ss6Yj/Q58pGVa+aYX1+Aumhjh4I=
 =6Rkj
 -----END PGP SIGNATURE-----

Merge tag 'soc_fsl-6.13-1' of https://github.com/chleroy/linux into soc/drivers

FSL SOC changes for 6.13:

- Fix a missing of_node_put() in RCPM
- Fix a missing error code on failure in CPM1 QMC
- Switch to using for_each_available_child_of_node_scoped() in CPM1 TSA

* tag 'soc_fsl-6.13-1' of https://github.com/chleroy/linux:
  soc: fsl: cpm1: qmc: Set the ret error code on platform_get_irq() failure
  soc: fsl: rcpm: fix missing of_node_put() in copy_ippdexpcr1_setting()
  soc: fsl: cpm1: tsa: switch to for_each_available_child_of_node_scoped()

Link: https://lore.kernel.org/r/c3c4961b-fe2a-4fcc-a7a1-f8b5352e09a2@csgroup.eu
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2024-11-15 15:19:47 +01:00
Jens Axboe
957860cbc1 block: make struct rq_list available for !CONFIG_BLOCK
A previous commit changed how requests are linked in the plug structure,
but unlike the previous method, it uses a new type for it rather than
struct request. The latter is available even for !CONFIG_BLOCK, while
struct rq_list is not. Move it outside CONFIG_BLOCK.

Reported-by: Nathan Chancellor <nathan@kernel.org>
Fixes: a3396b9999 ("block: add a rq_list type")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-11-15 07:14:03 -07:00
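An illustrative sketch of the header move (the struct layout matches the
rq_list type added by the referenced commit; the surrounding #ifdef structure
is assumed):

  /* visible even when CONFIG_BLOCK is not set, so struct blk_plug builds */
  struct rq_list {
          struct request *head;
          struct request *tail;
  };

  #ifdef CONFIG_BLOCK
  /* ... declarations that genuinely require CONFIG_BLOCK stay in here ... */
  #endif
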
Jonas Karlman
82ff5abc2e ASoC: hdmi-codec: reorder channel allocation list
The ordering in hdmi_codec_get_ch_alloc_table_idx() results in
wrong channel allocation for a number of cases, e.g. when ELD
reports FL|FR|LFE|FC|RL|RR or FL|FR|LFE|FC|RL|RR|RC|RLC|RRC:

ca_id 0x01 with speaker mask FL|FR|LFE is selected instead of
ca_id 0x03 with speaker mask FL|FR|LFE|FC for 4 channels

and

ca_id 0x04 with speaker mask FL|FR|RC gets selected instead of
ca_id 0x0b with speaker mask FL|FR|LFE|FC|RL|RR for 6 channels

Fix this by reordering the channel allocation list with the most
specific speaker masks at the top.

Signed-off-by: Jonas Karlman <jonas@kwiboo.se>
Signed-off-by: Christian Hewitt <christianshewitt@gmail.com>
Link: https://patch.msgid.link/20241115044344.3510979-1-christianshewitt@gmail.com
Signed-off-by: Mark Brown <broonie@kernel.org>
2024-11-15 13:43:00 +00:00
zhang jiao
d303e3dd8d tools/thermal: Fix common realloc mistake
If realloc() fails, the thermal zones pointer is set to NULL. This loses
all references to the thermal zones that were previously initialized
successfully.

[dlezcano] : Fixed indentation

Signed-off-by: zhang jiao <zhangjiao2@cmss.chinamobile.com>
Link: https://lore.kernel.org/r/20241114084039.42149-1-zhangjiao2@cmss.chinamobile.com
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
2024-11-15 14:29:03 +01:00
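A small, self-contained user-space illustration of the realloc pattern the fix
implies (names are hypothetical, not the actual tools/thermal code): assign the
result to a temporary so the original array is not lost when realloc() fails.

  #include <stdlib.h>

  struct tz { int id; };

  static int grow_zones(struct tz **zones, size_t new_count)
  {
          struct tz *tmp = realloc(*zones, new_count * sizeof(**zones));

          if (!tmp)
                  return -1;      /* *zones is still valid and still owned */

          *zones = tmp;           /* only overwrite the pointer on success */
          return 0;
  }
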
Karol Przybylski
4223414efe crypto: marvell/cesa - fix uninit value for struct mv_cesa_op_ctx
In cesa/cipher.c, most declarations of struct mv_cesa_op_ctx are uninitialized.
This causes one of the values in the struct to be left uninitialized in later
usages.

This patch fixes it by adding initializations in the same way it is done in
cesa/hash.c.

Fixes errors discovered in coverity: 1600942, 1600939, 1600935, 1600934, 1600929, 1600927,
1600925, 1600921, 1600920, 1600919, 1600915, 1600914

Signed-off-by: Karol Przybylski <karprzy7@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-11-15 19:52:51 +08:00
Christophe JAILLET
572b7cf084 crypto: cavium - Fix an error handling path in cpt_ucode_load_fw()
If do_cpt_init() fails, a previous dma_alloc_coherent() call needs to be
undone.

Add the needed dma_free_coherent() before returning.

Fixes: 9e2c7d9994 ("crypto: cavium - Add Support for Octeon-tx CPT Engine")
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-11-15 19:52:51 +08:00
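A hedged sketch of the error-path shape (structure and field names are
hypothetical; the point is pairing dma_free_coherent() with the earlier
dma_alloc_coherent()):

  ret = do_cpt_init(cpt, mcode);
  if (ret) {
          /* undo the earlier dma_alloc_coherent() before bailing out */
          dma_free_coherent(&cpt->pdev->dev, mcode->size,
                            mcode->code, mcode->dma_addr);
          goto fw_release;
  }
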
Herbert Xu
dccd55892b crypto: aesni - Move back to module_init
This patch reverts commit 0fbafd06bd
("crypto: aesni - fix failing setkey for rfc4106-gcm-aesni") by
moving the aesni init function back to module_init from late_initcall.

The original patch was needed because tests were synchronous.  This
is no longer the case so there is no need to postpone the registration.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-11-15 19:52:51 +08:00
Herbert Xu
0594ad6184 crypto: lib/mpi - Export mpi_set_bit
This function is part of the exposed API and should be exported.
Otherwise a modular user would fail to build, e.g., crypto/rsa.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-11-15 19:52:51 +08:00
Michal Suchanek
3574a5168f crypto: aes-gcm-p10 - Use the correct bit to test for P10
A hwcap feature bit is passed to cpu_has_feature, resulting in testing
for CPU_FTR_MMCRA instead of the 3.1 platform revision.

Fixes: c954b252de ("crypto: powerpc/p10-aes-gcm - Register modules as SIMD")
Reported-by: Nicolai Stange <nstange@suse.com>
Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-11-15 19:52:51 +08:00
Lukas Bulwahn
5465951e3f hwrng: amd - remove reference to removed PPC_MAPLE config
Commit 62f8f307c8 ("powerpc/64: Remove maple platform") removes the
PPC_MAPLE config as a consequence of the platform’s removal.

The config definition of HW_RANDOM_AMD refers to this removed config option
in its dependencies.

Remove the reference to the removed config option.

Signed-off-by: Lukas Bulwahn <lukas.bulwahn@redhat.com>
Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-11-15 19:52:51 +08:00
Ard Biesheuvel
e7c1d1c9b2 crypto: arm/crct10dif - Implement plain NEON variant
The CRC-T10DIF algorithm produces a 16-bit CRC, and this is reflected in
the folding coefficients, which are also only 16 bits wide.

This means that the polynomial multiplications involving these
coefficients can be performed using 8-bit long polynomial multiplication
(8x8 -> 16) in only a few steps, and this is an instruction that is part
of the base NEON ISA, which is all most real ARMv7 cores implement. (The
64-bit PMULL instruction is part of the crypto extensions, which are
only implemented by 64-bit cores)

The final reduction is a bit more involved, but we can delegate that to
the generic CRC-T10DIF implementation after folding the entire input
into a 16 byte vector.

This results in a speedup of around 6.6x on Cortex-A72 running in 32-bit
mode. On Cortex-A8 (BeagleBone White), the results are substantially
better than that, but not sufficiently reproducible (with tcrypt) to
quote a number here.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-11-15 19:52:51 +08:00
Ard Biesheuvel
802d8d110c crypto: arm/crct10dif - Macroify PMULL asm code
To allow an alternative version to be created of the PMULL based
CRC-T10DIF algorithm, turn the bulk of it into a macro, except for the
final reduction, which will only be used by the existing version.

Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-11-15 19:52:51 +08:00
Ard Biesheuvel
fcf27785ae crypto: arm/crct10dif - Use existing mov_l macro instead of __adrl
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-11-15 19:52:51 +08:00
Ard Biesheuvel
779cee8209 crypto: arm64/crct10dif - Remove remaining 64x64 PMULL fallback code
The only remaining user of the fallback implementation of 64x64
polynomial multiplication using 8x8 PMULL instructions is the final
reduction from a 16 byte vector to a 16-bit CRC.

The fallback code is complicated and messy, and this reduction has
little impact on the overall performance, so instead, let's calculate
the final CRC by passing the 16 byte vector to the generic CRC-T10DIF
implementation when running the fallback version.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-11-15 19:52:51 +08:00
Ard Biesheuvel
67dfb1b73f crypto: arm64/crct10dif - Use faster 16x64 bit polynomial multiply
The CRC-T10DIF implementation for arm64 has a version that uses 8x8
polynomial multiplication, for cores that lack the crypto extensions,
which cover the 64x64 polynomial multiplication instruction that the
algorithm was built around.

This fallback version rather naively adopted the 64x64 polynomial
multiplication algorithm that I ported from ARM for the GHASH driver,
which needs 8 PMULL8 instructions to implement one PMULL64. This is
reasonable, given that each 8-bit vector element needs to be multiplied
with each element in the other vector, producing 8 vectors with partial
results that need to be combined to yield the correct result.

However, most PMULL64 invocations in the CRC-T10DIF code involve
multiplication by a pair of 16-bit folding coefficients, and so all the
partial results from higher order bytes will be zero, and there is no
need to calculate them to begin with.

Then, the CRC-T10DIF algorithm always XORs the output values of the
PMULL64 instructions being issued in pairs, and so there is no need to
faithfully implement each individual PMULL64 instruction, as long as
XORing the results pairwise produces the expected result.

Implementing these improvements results in a speedup of 3.3x on low-end
platforms such as Raspberry Pi 4 (Cortex-A72)

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-11-15 19:52:51 +08:00
Ard Biesheuvel
7048c21e6b crypto: arm64/crct10dif - Remove obsolete chunking logic
This is a partial revert of commit fc754c024a, which moved into C code
the logic that ensures kernel mode NEON code does not hog the CPU for too
long.

This is no longer needed now that kernel mode NEON no longer disables
preemption, so we can drop this.

Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-11-15 19:52:51 +08:00
Chen Ridong
19630cf572 crypto: bcm - add error check in the ahash_hmac_init function
The ahash_init functions may fail. ahash_hmac_init should not return
success when ahash_init returns an error. For example, ahash_init will
return -ENOMEM when memory allocation fails.

Fixes: 9d12ba86f8 ("crypto: brcm - Add Broadcom SPU driver")
Signed-off-by: Chen Ridong <chenridong@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-11-15 19:52:51 +08:00
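A minimal sketch of the assumed fix shape: propagate the ahash_init() return
value instead of ignoring it (HMAC-specific setup elided):

  static int ahash_hmac_init(struct ahash_request *req)
  {
          int ret = ahash_init(req);

          if (ret)
                  return ret;     /* e.g. -ENOMEM from a failed allocation */

          /* ... HMAC-specific initialization ... */
          return 0;
  }
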
Chen Ridong
b64140c74e crypto: caam - add error check to caam_rsa_set_priv_key_form
caam_rsa_set_priv_key_form() did not check for memory allocation errors.
Add the missing checks.

Fixes: 52e26d77b8 ("crypto: caam - add support for RSA key form 2")
Signed-off-by: Chen Ridong <chenridong@huawei.com>
Reviewed-by: Gaurav Jain <gaurav.jain@nxp.com>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-11-15 19:52:51 +08:00
Jeremy Sowden
b0ccf4f53d netfilter: bitwise: add support for doing AND, OR and XOR directly
Hitherto, these operations have been converted in user space to
mask-and-xor operations on one register and two immediate values, and it
is the latter which have been evaluated by the kernel.  We add support
for evaluating these operations directly in kernel space on one register
and either an immediate value or a second register.

Pablo made a few changes to the original patch:

- EINVAL if NFTA_BITWISE_SREG2 is used with fast version.
- Allow _AND,_OR,_XOR with _DATA != sizeof(u32)
- Dump _SREG2 or _DATA with _AND,_OR,_XOR

Signed-off-by: Jeremy Sowden <jeremy@azazel.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2024-11-15 12:07:04 +01:00
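As a hedged illustration of what evaluating these operations directly means, a
self-contained sketch of a word-wise boolean eval over two source registers
(register layout and helper name are assumptions, not the nft_bitwise code):

  static void bitwise_eval_and(u32 *dst, const u32 *src1, const u32 *src2,
                               unsigned int words)
  {
          unsigned int i;

          /* the OR and XOR variants differ only in the operator used here */
          for (i = 0; i < words; i++)
                  dst[i] = src1[i] & src2[i];
  }
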
Ard Biesheuvel
8fbe4c49c0 efi/memattr: Ignore table if the size is clearly bogus
There are reports [0] of cases where a corrupt EFI Memory Attributes
Table leads to out of memory issues at boot because the descriptor size
and entry count in the table header are still used to reserve the entire
table in memory, even though the resulting region is gigabytes in size.

Given that the EFI Memory Attributes Table is supposed to carry up to 3
entries for each EfiRuntimeServicesCode region in the EFI memory map,
and given that there is no reason for the descriptor size used in the
table to exceed the one used in the EFI memory map, 3x the size of the
entire EFI memory map is a reasonable upper bound for the size of this
table. This means that sizes exceeding that are highly likely to be
based on corrupted data, and the table should just be ignored instead.

[0] https://bugzilla.suse.com/show_bug.cgi?id=1231465

Cc: Gregory Price <gourry@gourry.net>
Cc: Usama Arif <usamaarif642@gmail.com>
Acked-by: Jiri Slaby <jirislaby@kernel.org>
Acked-by: Breno Leitao <leitao@debian.org>
Link: https://lore.kernel.org/all/20240912155159.1951792-2-ardb+git@google.com/
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2024-11-15 12:03:29 +01:00
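A hedged sketch of the bound being described (variable names and the warning
text are assumptions): ignore the table when its reported size exceeds roughly
three times the size of the EFI memory map.

  /* up to 3 entries per EfiRuntimeServicesCode region is the expected max */
  if (tbl_size > 3 * efi_memmap_size) {
          pr_warn("EFI Memory Attributes Table too large, ignoring\n");
          return 0;
  }
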
Jeremy Sowden
a12143e608 netfilter: bitwise: rename some boolean operation functions
In the next patch we add support for doing AND, OR and XOR operations
directly in the kernel, so rename some functions and an enum constant
related to mask-and-xor boolean operations.

Signed-off-by: Jeremy Sowden <jeremy@azazel.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2024-11-15 11:00:29 +01:00
Guillaume Nault
f0d839c13e netfilter: nf_dup4: Convert nf_dup_ipv4_route() to dscp_t.
Use ip4h_dscp() instead of reading iph->tos directly.

ip4h_dscp() returns a dscp_t value which is temporarily converted back
to __u8 with inet_dscp_to_dsfield(). When converting ->flowi4_tos to
dscp_t in the future, we'll only have to remove that
inet_dscp_to_dsfield() call.

Signed-off-by: Guillaume Nault <gnault@redhat.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2024-11-15 11:00:29 +01:00
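This and the following dscp_t conversion commits share the same pattern; a
one-line sketch (the flowi4 variable is hypothetical): read the DSCP through
the typed helper and convert back to __u8 only at the ->flowi4_tos boundary,
which is exactly what the future cleanup will remove.

  /* dscp_t in, __u8 out only at the legacy flowi4_tos boundary (sketch) */
  fl4.flowi4_tos = inet_dscp_to_dsfield(ip4h_dscp(iph));
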
Guillaume Nault
f12b67cc7d netfilter: nft_fib: Convert nft_fib4_eval() to dscp_t.
Use ip4h_dscp() instead of reading iph->tos directly.

ip4h_dscp() returns a dscp_t value which is temporarily converted back
to __u8 with inet_dscp_to_dsfield(). When converting ->flowi4_tos to
dscp_t in the future, we'll only have to remove that
inet_dscp_to_dsfield() call.

Signed-off-by: Guillaume Nault <gnault@redhat.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2024-11-15 11:00:29 +01:00
Guillaume Nault
f694ce6de5 netfilter: rpfilter: Convert rpfilter_mt() to dscp_t.
Use ip4h_dscp() instead of reading iph->tos directly.

ip4h_dscp() returns a dscp_t value which is temporarily converted back
to __u8 with inet_dscp_to_dsfield(). When converting ->flowi4_tos to
dscp_t in the future, we'll only have to remove that
inet_dscp_to_dsfield() call.

Signed-off-by: Guillaume Nault <gnault@redhat.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2024-11-15 11:00:29 +01:00
Guillaume Nault
6f9615a6e6 netfilter: flow_offload: Convert nft_flow_route() to dscp_t.
Use ip4h_dscp() instead of reading ip_hdr()->tos directly.

ip4h_dscp() returns a dscp_t value which is temporarily converted back
to __u8 with inet_dscp_to_dsfield(). When converting ->flowi4_tos to
dscp_t in the future, we'll only have to remove that
inet_dscp_to_dsfield() call.

Also, remove the comment about the net/ip.h include file, since it's
now required for the ip4h_dscp() helper too.

Signed-off-by: Guillaume Nault <gnault@redhat.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2024-11-15 11:00:29 +01:00
Guillaume Nault
0608746f95 netfilter: ipv4: Convert ip_route_me_harder() to dscp_t.
Use ip4h_dscp() instead of reading iph->tos directly.

ip4h_dscp() returns a dscp_t value which is temporarily converted back
to __u8 with inet_dscp_to_dsfield(). When converting ->flowi4_tos to
dscp_t in the future, we'll only have to remove that
inet_dscp_to_dsfield() call.

Signed-off-by: Guillaume Nault <gnault@redhat.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2024-11-15 11:00:29 +01:00
Ard Biesheuvel
6fce6e9791 efi/zboot: Fix outdated comment about using LoadImage/StartImage
EFI zboot no longer uses LoadImage/StartImage, but subsumes the arch
code to load and start the bare metal image directly. Fix the Kconfig
description accordingly.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2024-11-15 10:40:51 +01:00
Ard Biesheuvel
06d39d79cb efi/libstub: Free correct pointer on failure
cmdline_ptr is an out parameter, which is not allocated by the function
itself, and likely points into the caller's stack.

cmdline refers to the pool allocation that should be freed when cleaning
up after a failure, so pass this instead to free_pool().

Fixes: 42c8ea3dca ("efi: libstub: Factor out EFI stub entrypoint ...")
Cc: <stable@vger.kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2024-11-15 10:40:51 +01:00
Thorsten Blum
eb01f8f3c4 microblaze: mb: Use str_yes_no() helper in show_cpuinfo()
Remove hard-coded strings by using the str_yes_no() helper function.

Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Link: https://lore.kernel.org/r/20241114224649.57946-4-thorsten.blum@linux.dev
Signed-off-by: Michal Simek <michal.simek@amd.com>
2024-11-15 10:27:48 +01:00
Joerg Roedel
42f0cbb2a2 Merge branches 'intel/vt-d', 'amd/amd-vi' and 'iommufd/arm-smmuv3-nested' into next 2024-11-15 09:27:43 +01:00
Joerg Roedel
ae3325f752 Merge branches 'arm/smmu', 'mediatek', 's390', 'ti/omap', 'riscv' and 'core' into next 2024-11-15 09:27:02 +01:00
Joerg Roedel
9af48bbbae Arm SMMU updates for 6.13
- SMMUv2:
   * Return -EPROBE_DEFER for client devices probing before their SMMU.
   * Devicetree binding updates for Qualcomm MMU-500 implementations.
 
 - SMMUv3:
   * Minor fixes and cleanup for NVIDIA's virtual command queue driver.
 
 - IO-PGTable:
   * Fix indexing of concatenated PGDs and extend selftest coverage.
   * Remove unused block-splitting support.
 -----BEGIN PGP SIGNATURE-----
 
 iQFEBAABCgAuFiEEPxTL6PPUbjXGY88ct6xw3ITBYzQFAmc2IykQHHdpbGxAa2Vy
 bmVsLm9yZwAKCRC3rHDchMFjNLTJB/9M/B74hU548mgB4cIjZ40mR+MNivw6ipnZ
 UWXCqCvyLrE3qw1yrQ/79P6XoaZYJb949jEcnPQHFRwGlPV3nY5jfO8wVEnr9boy
 GNIo8OuobSR2nd/zeHnJrVdvFxGqI9+/Yct9xaXpYKO6rFDKp5manr1CbUWJK0qG
 5A72JHnf7wJrzab0rSfRtUsm6EAjg/lXQj4G3OmdBVHckxluCRhqRn45BeEMz/56
 Z5NbOmIdfR206UFUWDXVmqOgSjpDVgyOUNHz/ciog7dWdT6kkRKNFWvjV3x37pk8
 PQUz1FbkTahT6Lhhndgmga10pj1vlLSP6D9quazbGeZpiapxTFE7
 =yRij
 -----END PGP SIGNATURE-----

Merge tag 'arm-smmu-updates' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into arm/smmu

Arm SMMU updates for 6.13

- SMMUv2:
  * Return -EPROBE_DEFER for client devices probing before their SMMU.
  * Devicetree binding updates for Qualcomm MMU-500 implementations.

- SMMUv3:
  * Minor fixes and cleanup for NVIDIA's virtual command queue driver.

- IO-PGTable:
  * Fix indexing of concatenated PGDs and extend selftest coverage.
  * Remove unused block-splitting support.
2024-11-15 09:25:00 +01:00
Amir Goldstein
d66907b51b ovl: convert ovl_real_fdget() callers to ovl_real_file()
Stop using struct fd to return a real file from ovl_real_fdget(),
because we no longer return a temporary file object and the callers
always get a borrowed file reference.

Rename the helper to ovl_real_file(), return a borrowed reference of
the real file that is referenced from the overlayfs file or an error.

Signed-off-by: Amir Goldstein <amir73il@gmail.com>
2024-11-15 08:56:49 +01:00
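A hedged sketch of the new calling convention at a typical call site (the
error-handling shape is assumed): the returned file is borrowed from the
overlayfs file, so there is nothing to put on this path.

  struct file *realfile = ovl_real_file(file);

  if (IS_ERR(realfile))
          return PTR_ERR(realfile);
  /* use realfile; no reference to drop, it is borrowed from 'file' */
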
Amir Goldstein
4333e42ed4 ovl: convert ovl_real_fdget_path() callers to ovl_real_file_path()
Stop using struct fd to return a real file from ovl_real_fdget_path(),
because we no longer return a temporary file object and the callers
always get a borrowed file reference.

Rename the helper to ovl_real_file_path(), return a borrowed reference
of the real file that is referenced from the overlayfs file or an error.

Signed-off-by: Amir Goldstein <amir73il@gmail.com>
2024-11-15 08:56:48 +01:00
Amir Goldstein
18e48d0e2c ovl: store upper real file in ovl_file struct
When an overlayfs file is opened as lower and then the file is copied up,
every operation on the overlayfs open file will open a temporary backing
file to the upper dentry and close it at the end of the operation.

Store the upper real file alongside the original (lower) real file in
ovl_file instead of opening a temporary upper file on every operation.

Signed-off-by: Amir Goldstein <amir73il@gmail.com>
2024-11-15 08:56:48 +01:00
Amir Goldstein
87a8a76c34 ovl: allocate a container struct ovl_file for ovl private context
Instead of using ->private_data to point at the realfile directly, allocate
a container struct ovl_file, so that we can add more context per ovl open
file.

Signed-off-by: Amir Goldstein <amir73il@gmail.com>
2024-11-15 08:56:48 +01:00
Amir Goldstein
c2c54b5f34 ovl: do not open non-data lower file for fsync
ovl_fsync() with !datasync opens a backing file from the topmost dentry
in the stack, checks if this dentry is non-upper and skips the fsync.

In case of an overlay dentry stack with lower data and lower metadata
above it, but without an upper metadata above it, the backing file is
opened from the topmost lower metadata dentry and never used.

Refactor the helper ovl_real_fdget_meta() into ovl_real_fdget_path() and
open code the checks for non-upper inode in ovl_fsync(), so in that case
we can avoid the unneeded backing file open.

Signed-off-by: Amir Goldstein <amir73il@gmail.com>
2024-11-15 08:56:48 +01:00
Vinicius Costa Gomes
c5b28fc161 ovl: Optimize override/revert creds
Use override_creds_light() in ovl_override_creds() and
revert_creds_light() in ovl_revert_creds().

The _light() functions do not change the 'usage' of the credentials in
question, as they refer to the credentials associated with the
mounter, which have a longer lifetime.

In ovl_setup_cred_for_create(), there is no need to modify the 'usage'
counter of the mounter credentials (returned by override_creds_light()).
Add a warning to verify that we are indeed working with the mounter
credentials (stored in the superblock). Failure in this assumption
means that creds may leak.

Suggested-by: Christian Brauner <brauner@kernel.org>
Signed-off-by: Vinicius Costa Gomes <vinicius.gomes@intel.com>
Signed-off-by: Amir Goldstein <amir73il@gmail.com>
2024-11-15 08:55:39 +01:00
Ritesh Harjani (IBM)
2532e6c74a cma: enforce non-zero pageblock_order during cma_init_reserved_mem()
cma_init_reserved_mem() checks base and size alignment with
CMA_MIN_ALIGNMENT_BYTES.  However, some users might call this during early
boot when pageblock_order is 0.  That means if base and size do not have
pageblock_order alignment, it can cause functional failures during CMA
area activation.

So let's enforce pageblock_order to be non-zero during
cma_init_reserved_mem() to catch such wrong usages.

1. This was seen with fadump on PowerPC which was calling
   cma_init_reserved_mem() before the pageblock_order was initialized. 
   This is now fixed in the fadump on PowerPC itself.  The details of that
   can be found in the patch including the userspace-visible effect of the
   issue [1].

2. However it was also decided that we should add a stronger
   enforcement check within cma_init_reserved_mem() to catch such wrong
   usages [2].  Hence this patch.  This is ok to be in -next and there is
   no "Fixes" tag required for this patch.

[1]: https://lore.kernel.org/all/3ae208e48c0d9cefe53d2dc4f593388067405b7d.1729146153.git.ritesh.list@gmail.com/
[2]: https://lore.kernel.org/all/83eb128e-4f06-4725-a843-a4563f246a44@redhat.com/

Link: https://lkml.kernel.org/r/e274344b44d5f80fa54c52f530387257fe99ec65.1731505681.git.ritesh.list@gmail.com
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
Acked-by: David Hildenbrand <david@redhat.com>
Acked-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-11-14 22:49:19 -08:00
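A minimal sketch of the enforcement, assuming it sits at the top of
cma_init_reserved_mem() (the exact error message is an assumption):

  /* the alignment checks below are meaningless before pageblock_order is set */
  if (!pageblock_order) {
          pr_err("pageblock_order not yet initialized\n");
          return -EINVAL;
  }
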
Nirjhar Roy
811808d365 mm/kfence: add a new kunit test test_use_after_free_read_nofault()
Faults from copy_from_kernel_nofault() need to be handled by the fixup
table and should not be handled by kfence.  Otherwise, while reading
/proc/kcore, which uses copy_from_kernel_nofault(), kfence can generate
false negatives.  This can happen when /proc/kcore ends up reading an
unmapped address from the kfence pool.

Let's add a testcase to cover this case.

Link: https://lkml.kernel.org/r/210e561f7845697a32de44b643393890f180069f.1729272697.git.ritesh.list@gmail.com
Signed-off-by: Nirjhar Roy <nirjhar@linux.ibm.com>
Co-developed-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
Tested-by: Marco Elver <elver@google.com>
Reviewed-by: Marco Elver <elver@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-11-14 22:49:19 -08:00
Liu Shixin
f364cdeb38 zram: fix NULL pointer in comp_algorithm_show()
LTP reported a NULL pointer dereference as follows:

 CPU: 7 UID: 0 PID: 5995 Comm: cat Kdump: loaded Not tainted 6.12.0-rc6+ #3
 Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
 pstate: 40400005 (nZcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
 pc : __pi_strcmp+0x24/0x140
 lr : zcomp_available_show+0x60/0x100 [zram]
 sp : ffff800088b93b90
 x29: ffff800088b93b90 x28: 0000000000000001 x27: 0000000000400cc0
 x26: 0000000000000ffe x25: ffff80007b3e2388 x24: 0000000000000000
 x23: ffff80007b3e2390 x22: ffff0004041a9000 x21: ffff80007b3e2900
 x20: 0000000000000000 x19: 0000000000000000 x18: 0000000000000000
 x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000000
 x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000
 x11: 0000000000000000 x10: ffff80007b3e2900 x9 : ffff80007b3cb280
 x8 : 0101010101010101 x7 : 0000000000000000 x6 : 0000000000000000
 x5 : 0000000000000040 x4 : 0000000000000000 x3 : 00656c722d6f7a6c
 x2 : 0000000000000000 x1 : ffff80007b3e2900 x0 : 0000000000000000
 Call trace:
  __pi_strcmp+0x24/0x140
  comp_algorithm_show+0x40/0x70 [zram]
  dev_attr_show+0x28/0x80
  sysfs_kf_seq_show+0x90/0x140
  kernfs_seq_show+0x34/0x48
  seq_read_iter+0x1d4/0x4e8
  kernfs_fop_read_iter+0x40/0x58
  new_sync_read+0x9c/0x168
  vfs_read+0x1a8/0x1f8
  ksys_read+0x74/0x108
  __arm64_sys_read+0x24/0x38
  invoke_syscall+0x50/0x120
  el0_svc_common.constprop.0+0xc8/0xf0
  do_el0_svc+0x24/0x38
  el0_svc+0x38/0x138
  el0t_64_sync_handler+0xc0/0xc8
  el0t_64_sync+0x188/0x190

The zram->comp_algs[ZRAM_PRIMARY_COMP] can be NULL in zram_add() if
comp_algorithm_set() has not been called.  User space can access the zram
device via sysfs after device_add_disk(), so there is a time window in which
the NULL pointer dereference can be triggered.  Move comp_algorithm_set()
ahead of device_add_disk() to make sure that, by the time the zram device
can be accessed, it is ready.  comp_algorithm_set() is protected by
zram->init_lock in other places, so there is no such problem there.

Link: https://lkml.kernel.org/r/20241108100147.3776123-1-liushixin2@huawei.com
Fixes: 7ac07a26de ("zram: preparation for multi-zcomp support")
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Minchan Kim <minchan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-11-14 22:49:19 -08:00
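A hedged sketch of the reordering in zram_add() (surrounding setup elided;
symbol names follow zram_drv.c but their exact form here is an assumption):
set the default compressor before the disk, and thus its sysfs attributes,
become visible to user space.

  comp_algorithm_set(zram, ZRAM_PRIMARY_COMP, default_compressor);

  /* only now expose the device (and its comp_algorithm attribute) to users */
  ret = device_add_disk(NULL, zram->disk, zram_disk_groups);
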