Commit Graph

1283090 Commits

Alexei Starovoitov
0c237341d9 Merge branch 'fixes-for-bpf-timer-lockup-and-uaf'
Kumar Kartikeya Dwivedi says:

====================
Fixes for BPF timer lockup and UAF

The following patches contain fixes for timer lockups and a
use-after-free scenario.

This set proposes to fix the following lockup situation for BPF timers.

CPU 1					CPU 2

bpf_timer_cb				bpf_timer_cb
  timer_cb1				  timer_cb2
    bpf_timer_cancel(timer_cb2)		    bpf_timer_cancel(timer_cb1)
      hrtimer_cancel			      hrtimer_cancel

In this case, both callbacks will continue waiting for each other to
finish synchronously, causing a lockup.

The proposed fix adds support for tracking in-flight cancellations
*begun by other timer callbacks* for a particular BPF timer. Whenever
preparing to call hrtimer_cancel, a callback increments the target
timer's counter and then inspects the in-flight cancellations of its own
timer; if that count is non-zero, it returns -EDEADLK to avoid
situations where the target timer's callback is waiting for the caller's
completion.

This does mean that a callback which is being cancelled while it runs
will be unable to cancel any timers during that execution. This can be
alleviated by maintaining the list of waiting callbacks in bpf_hrtimer
and searching through it to avoid interdependencies, but this may
introduce additional delays in bpf_timer_cancel, in addition to
requiring extra state at runtime which may need to be allocated or
reused from bpf_hrtimer storage. Moreover, extra synchronization is
needed to delete these elements from the list of waiting callbacks once
hrtimer_cancel has finished.

The second patch addresses a similar deadlock situation in
bpf_timer_cancel_and_free, and also a UAF scenario that can occur when
the timer is armed before entering it and the hrtimer_running check
causes the hrtimer_cancel call to be skipped.

As seen above, a synchronous hrtimer_cancel would lead to deadlock (if
the same callback tries to free its own timer, or two timers free each
other), therefore we queue work onto the global workqueue to ensure
outstanding timers are cancelled before the bpf_hrtimer state is freed.

Further details are in the patches.
====================

Link: https://lore.kernel.org/r/20240709185440.1104957-1-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-07-10 16:20:16 -07:00
Kumar Kartikeya Dwivedi
a6fcd19d7e bpf: Defer work in bpf_timer_cancel_and_free
Currently, the same case as in the previous patch (two timer callbacks
trying to cancel each other) can be invoked through bpf_map_update_elem
as well, or more precisely, by freeing map elements containing timers.
Since this relies on hrtimer_cancel as well, it is prone to the same
deadlock situation as the previous patch.

It would be sufficient to use hrtimer_try_to_cancel to fix this problem,
as the timer cannot be enqueued after async_cancel_and_free. Once
async_cancel_and_free has been done, the timer must be reinitialized
before it can be armed again. A callback running in parallel that tries
to arm the timer will fail, freeing bpf_hrtimer without waiting is
sufficient (given kfree_rcu), and bpf_timer_cb will return
HRTIMER_NORESTART, preventing the timer from being rearmed again.

However, there exists a UAF scenario where the callback arms the timer
before entering this function and the cancellation fails (because a
timer callback is invoking this routine, or the target timer callback is
running concurrently). In such a case, if the timer expiration is
significantly far in the future, the RCU grace period may expire before
it, freeing the bpf_hrtimer state and, along with it, the struct hrtimer
that is still enqueued.

Hence, it is clear that cancellation needs to occur after
async_cancel_and_free, and yet it cannot be done inline due to deadlock
issues. We thus modify bpf_timer_cancel_and_free to defer work to the
global workqueue, adding a work_struct alongside rcu_head (both used at
_different_ points of time, so can share space).
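
A minimal sketch of the deferral pattern described above (hypothetical
struct layout and function names; the actual patch differs in detail):

#include <linux/hrtimer.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct timer_state_sketch {
	struct hrtimer timer;
	union {				/* used at different points in time */
		struct rcu_head rcu;	/* for kfree_rcu() */
		struct work_struct cb;	/* for the deferred cancel */
	};
};

static void cancel_and_free_work(struct work_struct *work)
{
	struct timer_state_sketch *t =
		container_of(work, struct timer_state_sketch, cb);

	/* Running on the global workqueue, not inside a timer callback,
	 * so waiting here cannot deadlock with bpf_timer_cb().
	 */
	hrtimer_cancel(&t->timer);
	kfree_rcu(t, rcu);
}

static void defer_cancel_and_free(struct timer_state_sketch *t)
{
	INIT_WORK(&t->cb, cancel_and_free_work);
	schedule_work(&t->cb);
}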

Update existing code comments to reflect the new state of affairs.

Fixes: b00628b1c7 ("bpf: Introduce bpf timers.")
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20240709185440.1104957-3-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-07-10 15:59:44 -07:00
Kumar Kartikeya Dwivedi
d4523831f0 bpf: Fail bpf_timer_cancel when callback is being cancelled
Given a schedule:

timer1 cb			timer2 cb

bpf_timer_cancel(timer2);	bpf_timer_cancel(timer1);

Both bpf_timer_cancel calls would wait for the other callback to finish
executing, introducing a lockup.

Add an atomic_t count named 'cancelling' in bpf_hrtimer. This keeps
track of all in-flight cancellation requests for a given BPF timer.
Whenever cancelling a BPF timer, we must check if we have outstanding
cancellation requests, and if so, we must fail the operation with an
error (-EDEADLK) since cancellation is synchronous and waits for the
callback to finish executing. This implies that we can enter a deadlock
situation involving two or more timer callbacks executing in parallel
and attempting to cancel one another.

Note that we avoid incrementing the cancelling counter for the target
timer (the one being cancelled) if bpf_timer_cancel is not invoked from
a callback, to avoid spurious errors. The whole point of detecting
cur->cancelling and returning -EDEADLK is to not enter a busy wait loop
(which may or may not lead to a lockup). This does not apply when the
caller is in a non-callback context; the other side can continue to
cancel as it sees fit without running into errors.
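
A condensed sketch of that check (kernel-style, with illustrative
variable names: 't' is the timer being cancelled, 'cur_t' the timer
whose callback, if any, is the caller; not the exact patch):

atomic_inc(&t->cancelling);
/* full barrier so the increment is visible before we read cur_t */
smp_mb__after_atomic();
if (cur_t && atomic_read(&cur_t->cancelling)) {
	/* Our own callback is being cancelled; blocking in
	 * hrtimer_cancel(&t->timer) could then deadlock, so bail out.
	 */
	atomic_dec(&t->cancelling);
	return -EDEADLK;
}
hrtimer_cancel(&t->timer);
atomic_dec(&t->cancelling);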

Background on prior attempts:

Earlier versions of this patch used a bool 'cancelling' bit and used the
following pattern under timer->lock to publish cancellation status.

lock(t->lock);
t->cancelling = true;
mb();
if (cur->cancelling)
	return -EDEADLK;
unlock(t->lock);
hrtimer_cancel(t->timer);
t->cancelling = false;

The store to false outside the critical section could overwrite a
parallel request's assignment of t->cancelling to true, which is meant
to ensure that the parallelly executing callback observes its
cancellation status.

It would be necessary to clear this cancelling bit once hrtimer_cancel
is done, but lack of serialization introduced races. Another option was
explored where bpf_timer_start would clear the bit when (re)starting the
timer under timer->lock. This would ensure serialized access to the
cancelling bit, but may allow it to be cleared before in-flight
hrtimer_cancel has finished executing, such that lockups can occur
again.

Thus, we choose an atomic counter to keep track of all outstanding
cancellation requests and use it to prevent lockups in case callbacks
attempt to cancel each other while executing in parallel.

Reported-by: Dohyun Kim <dohyunkim@google.com>
Reported-by: Neel Natu <neelnatu@google.com>
Fixes: b00628b1c7 ("bpf: Introduce bpf timers.")
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20240709185440.1104957-2-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-07-10 15:59:44 -07:00
Mohammad Shehar Yaar Tausif
af253aef18 bpf: fix order of args in call to bpf_map_kvcalloc
The original function call passed the size of smap->bucket before the
number of buckets, which raises the 'calloc-transposed-args' error at
compile time.

Vlastimil Babka added:

The order of parameters can be traced back all the way to 6ac99e8f23
("bpf: Introduce bpf sk local storage") across several refactorings,
and that's why the commit is used as a Fixes: tag.

In v6.10-rc1, however, a different commit 2c321f3f70 ("mm: change
inlined allocation helpers to account at the call site") exposed the
order of args in a way that gives gcc-14 enough visibility to start
warning about it, because (in the !CONFIG_MEMCG case) bpf_map_kvcalloc
is then a macro alias for kvcalloc instead of a static inline wrapper.

To sum up, the warning happens when the following conditions are all met
(a minimal illustration follows the list):

- gcc-14 is used (didn't see it with gcc-13)
- commit 2c321f3f70 is present
- CONFIG_MEMCG is not enabled in .config
- CONFIG_WERROR turns this from a compiler warning to error
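
For reference, a small userspace analogue of the warning class (plain
calloc() shown; the kernel case concerns bpf_map_kvcalloc()/kvcalloc()):

#include <stdlib.h>

struct bucket { int head; };

int main(void)
{
	/* gcc-14 -Wcalloc-transposed-args: sizeof() should not be the
	 * first argument of calloc()
	 */
	struct bucket *bad = calloc(sizeof(struct bucket), 16);
	/* correct order: number of elements first, element size second */
	struct bucket *good = calloc(16, sizeof(struct bucket));

	free(bad);
	free(good);
	return 0;
}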

Fixes: 6ac99e8f23 ("bpf: Introduce bpf sk local storage")
Reviewed-by: Andrii Nakryiko <andrii@kernel.org>
Tested-by: Christian Kujau <lists@nerdbynature.de>
Signed-off-by: Mohammad Shehar Yaar Tausif <sheharyaar48@gmail.com>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Link: https://lore.kernel.org/r/20240710100521.15061-2-vbabka@suse.cz
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-07-10 15:31:19 -07:00
Linus Torvalds
9d9a2f29ae 21 hotfixes, 15 of which are cc:stable.
No identifiable theme here - all are singleton patches, 19 are for MM.
 -----BEGIN PGP SIGNATURE-----
 
 iHUEABYKAB0WIQTTMBEPP41GrTpTJgfdBJ7gKXxAjgUCZo7tTQAKCRDdBJ7gKXxA
 jvhZAP977PnAwQH5khIS3xJxZrqx/+Tho7UPZzQPvHJPRpHorAD/TZfDazGtlPMD
 uLPEVslh18rks/w+kddLrnlBnkpUMwY=
 =vhts
 -----END PGP SIGNATURE-----

Merge tag 'mm-hotfixes-stable-2024-07-10-13-19' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Pull misc fixes from Andrew Morton:
 "21 hotfixes, 15 of which are cc:stable.

  No identifiable theme here - all are singleton patches, 19 are for MM"

* tag 'mm-hotfixes-stable-2024-07-10-13-19' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (21 commits)
  mm/hugetlb: fix kernel NULL pointer dereference when migrating hugetlb folio
  mm/hugetlb: fix potential race in __update_and_free_hugetlb_folio()
  filemap: replace pte_offset_map() with pte_offset_map_nolock()
  arch/xtensa: always_inline get_current() and current_thread_info()
  sched.h: always_inline alloc_tag_{save|restore} to fix modpost warnings
  MAINTAINERS: mailmap: update Lorenzo Stoakes's email address
  mm: fix crashes from deferred split racing folio migration
  lib/build_OID_registry: avoid non-destructive substitution for Perl < 5.13.2 compat
  mm: gup: stop abusing try_grab_folio
  nilfs2: fix kernel bug on rename operation of broken directory
  mm/hugetlb_vmemmap: fix race with speculative PFN walkers
  cachestat: do not flush stats in recency check
  mm/shmem: disable PMD-sized page cache if needed
  mm/filemap: skip to create PMD-sized page cache if needed
  mm/readahead: limit page cache size in page_cache_ra_order()
  mm/filemap: make MAX_PAGECACHE_ORDER acceptable to xarray
  mm/damon/core: merge regions aggressively when max_nr_regions is unmet
  Fix userfaultfd_api to return EINVAL as expected
  mm: vmalloc: check if a hash-index is in cpu_possible_mask
  mm: prevent derefencing NULL ptr in pfn_section_valid()
  ...
2024-07-10 14:59:41 -07:00
Linus Torvalds
ef2b7eb55e SCSI fixes on 20240710
One core change that moves a disk start message to a location where it
 will only be printed once instead of twice plus a couple of error
 handling race fixes in the ufs driver.
 
 Signed-off-by: James E.J. Bottomley <James.Bottomley@HansenPartnership.com>
 -----BEGIN PGP SIGNATURE-----
 
 iJwEABMIAEQWIQTnYEDbdso9F2cI+arnQslM7pishQUCZo7JRCYcamFtZXMuYm90
 dG9tbGV5QGhhbnNlbnBhcnRuZXJzaGlwLmNvbQAKCRDnQslM7pishSMgAPoDnYkV
 GTWdnFnoS6is3jDn/x1qtQf6Y+HjLcURWlmpcAEA0AQGyhlCMlv5xIjFiBSct/fn
 vCMDLKo+FxTjpSwWp+8=
 =I6+w
 -----END PGP SIGNATURE-----

Merge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull SCSI fixes from James Bottomley:
 "One core change that moves a disk start message to a location where it
  will only be printed once instead of twice plus a couple of error
  handling race fixes in the ufs driver"

* tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi:
  scsi: sd: Do not repeat the starting disk message
  scsi: ufs: core: Fix ufshcd_abort_one racing issue
  scsi: ufs: core: Fix ufshcd_clear_cmd racing issue
2024-07-10 14:47:35 -07:00
Martin KaFai Lau
18a8a4c88f Merge branch 'BPF selftests misc fixes'
Geliang Tang says:

====================
v2:
 - only check the first "link" (link_nl) in test_mixed_links().
 - Drop patch 2 in v1.

This patchset fixes a segfault and a bpf object leak in test_progs.

It is a resend patch 1 out of "skip ENOTSUPP BPF selftests" set as Eduard
suggested. Together with another fix for xdp_adjust_tail.
====================

Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2024-07-10 12:45:08 -07:00
Geliang Tang
52b49ec1b2 selftests/bpf: Close obj in error path in xdp_adjust_tail
If bpf_object__load() fails in test_xdp_adjust_frags_tail_grow(), the
"obj" opened before this point should be closed. So use "goto out" to
close it instead of using "return" here.

Fixes: 110221081a ("bpf: selftests: update xdp_adjust_tail selftest to include xdp frags")
Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
Link: https://lore.kernel.org/r/f282a1ed2d0e3fb38cceefec8e81cabb69cab260.1720615848.git.tanggeliang@kylinos.cn
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2024-07-10 12:42:50 -07:00
Geliang Tang
eef0532e90 selftests/bpf: Null checks for links in bpf_tcp_ca
When running the bpf_tcp_ca selftests (./test_progs -t bpf_tcp_ca) on a
Loongarch platform, some "Segmentation fault" errors occur:

'''
 test_dctcp:PASS:bpf_dctcp__open_and_load 0 nsec
 test_dctcp:FAIL:bpf_map__attach_struct_ops unexpected error: -524
 #29/1    bpf_tcp_ca/dctcp:FAIL
 test_cubic:PASS:bpf_cubic__open_and_load 0 nsec
 test_cubic:FAIL:bpf_map__attach_struct_ops unexpected error: -524
 #29/2    bpf_tcp_ca/cubic:FAIL
 test_dctcp_fallback:PASS:dctcp_skel 0 nsec
 test_dctcp_fallback:PASS:bpf_dctcp__load 0 nsec
 test_dctcp_fallback:FAIL:dctcp link unexpected error: -524
 #29/4    bpf_tcp_ca/dctcp_fallback:FAIL
 test_write_sk_pacing:PASS:open_and_load 0 nsec
 test_write_sk_pacing:FAIL:attach_struct_ops unexpected error: -524
 #29/6    bpf_tcp_ca/write_sk_pacing:FAIL
 test_update_ca:PASS:open 0 nsec
 test_update_ca:FAIL:attach_struct_ops unexpected error: -524
 settcpca:FAIL:setsockopt unexpected setsockopt: \
					actual -1 == expected -1
 (network_helpers.c:99: errno: No such file or directory) \
					Failed to call post_socket_cb
 start_test:FAIL:start_server_str unexpected start_server_str: \
					actual -1 == expected -1
 test_update_ca:FAIL:ca1_ca1_cnt unexpected ca1_ca1_cnt: \
					actual 0 <= expected 0
 #29/9    bpf_tcp_ca/update_ca:FAIL
 #29      bpf_tcp_ca:FAIL
 Caught signal #11!
 Stack trace:
 ./test_progs(crash_handler+0x28)[0x5555567ed91c]
 linux-vdso.so.1(__vdso_rt_sigreturn+0x0)[0x7ffffee408b0]
 ./test_progs(bpf_link__update_map+0x80)[0x555556824a78]
 ./test_progs(+0x94d68)[0x5555564c4d68]
 ./test_progs(test_bpf_tcp_ca+0xe8)[0x5555564c6a88]
 ./test_progs(+0x3bde54)[0x5555567ede54]
 ./test_progs(main+0x61c)[0x5555567efd54]
 /usr/lib64/libc.so.6(+0x22208)[0x7ffff2aaa208]
 /usr/lib64/libc.so.6(__libc_start_main+0xac)[0x7ffff2aaa30c]
 ./test_progs(_start+0x48)[0x55555646bca8]
 Segmentation fault
'''

This is because the BPF trampoline is not implemented on Loongarch yet,
so the "link" returned by bpf_map__attach_struct_ops() is NULL.
test_progs crashes when this NULL link is passed to
bpf_link__update_map(). This patch adds NULL checks for all links in
bpf_tcp_ca to fix these errors. If "link" is NULL, jump to the newly
added label "out" to destroy the skel.

v2:
 - use "goto out" instead of "return" as Eduard suggested.

Fixes: 06da9f3bd6 ("selftests/bpf: Test switching TCP Congestion Control algorithms.")
Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
Reviewed-by: Alan Maguire <alan.maguire@oracle.com>
Link: https://lore.kernel.org/r/b4c841492bd4ed97964e4e61e92827ce51bf1dc9.1720615848.git.tanggeliang@kylinos.cn
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2024-07-10 12:35:18 -07:00
Linus Torvalds
d6e1712b78 VFIO fix for v6.10
- Recent stable backports are exposing a bug introduced in the v6.10
    development cycle where a counter value is uninitialized.  This leads
    to regressions in userspace drivers like QEMU where the kernel
    might ask for an arbitrary buffer size or return out of memory itself
    based on a bogus value.  Zero initialize the counter.  (Yi Liu)
 -----BEGIN PGP SIGNATURE-----
 
 iQJPBAABCAA5FiEEQvbATlQL0amee4qQI5ubbjuwiyIFAmaOwiUbHGFsZXgud2ls
 bGlhbXNvbkByZWRoYXQuY29tAAoJECObm247sIsiVnoP/iAThUwMs8fQImRUXtZ5
 jcZ3ROFZieB6CywjQEswy7G/Q9F4yNgHSwcu7VD/i44q5j88HqGDf3iZ3LPEMGSm
 GHrC9ynmJWqjN5Se7kGaKDGZFxxF9P2vTxfTkSG40qkP11obmCUIsWr4IHe4IH0r
 YcKEawW92G/mp9wEPWidDJYmRy7MZe/SJMbWaF3uwDymjqJA9WJjh2QS3tiPwAc+
 xbkdgYk9JyLe0/U0uawV0jgxvqzEM+rTw4hZRmPl7Aygi7qYx1iGnzEHd5QOGZSJ
 pHPfXe0EFIY+341y0AKwezDb4Vx8+F7M0Z+xx/v1zD875y/ffCT6lX79sDzIK/MC
 zzSzLj/64S40i8sai7Ec7t5W7PlNkXurnOjBa6k3EcfOmxYr0qcQzHgDgenUeNPL
 taybZN42RqYs3TIafRtu9vScOVpDn3H/BoSwlsdEKFqUqLA/g1B7U2EPyNRVWNWR
 By0WzJTWDnhltyTrxWJ9FfuehVlXtB91ovO5Cerh4DlOdcLkJD6RfESjaOdBRf0+
 vuqfstHYQ+7nH91n9101AKKTUQEGAVh3Lp/HgLKjI/wXua4lO+1/DmFA9mlJ/H/Y
 HcIZk3flq+Bab7TvmORwDU5UjrDofu4dvp3mhFpBByvjNKKlo0mmD5Boyj7RIi+a
 vKc3rgBpL490/V//iengWDSr
 =syRE
 -----END PGP SIGNATURE-----

Merge tag 'vfio-v6.10' of https://github.com/awilliam/linux-vfio

Pull VFIO fix from Alex Williamson:

 - Recent stable backports are exposing a bug introduced in the v6.10
   development cycle where a counter value is uninitialized.  This leads
   to regressions in userspace drivers like QEMU where the kernel
   might ask for an arbitrary buffer size or return out of memory itself
   based on a bogus value.  Zero initialize the counter.  (Yi Liu)

* tag 'vfio-v6.10' of https://github.com/awilliam/linux-vfio:
  vfio/pci: Init the count variable in collecting hot-reset devices
2024-07-10 12:00:43 -07:00
Martin KaFai Lau
ec5b8c76ab Merge branch 'use network helpers, part 8'
Geliang Tang says:

====================
v11:
 - new patches 2, 4, 6.
 - drop expect_errno from network_helper_opts as Eduard and Martin
   suggested.
 - drop sockmap_ktls patches from this set.

v10:
 - a new patch 10 is added.
 - patches 1-6, 8-9 unchanged, only commit logs updated.
 - "err = -errno" is used in patches 7, 11, 12 to get the real error
   number before checking value of "err".

v9:
 - new patches 5-7, new struct member expect_errno for network_helper_opts.
 - patches 1-4, 8-9 unchanged.
 - update patches 10-11 to make sure all tests pass.

v8:
 - only patch 8 updated, to fix errors reported by CI.

v7:
 - address Martin's comments in v6. (thanks)
 - use MAX(opts->backlog, 0) instead of opts->backlog.
 - use connect_to_fd_opts instead of connect_to_fd.
 - more ASSERT_* to check errors.

v6:
 - update patch 6 as Daniel suggested. (thanks)

v5:
 - keep make_server and make_client as Eduard suggested.

v4:
 - a new patch to use make_sockaddr in sockmap_ktls.
 - a new patch to close fd in error path in drop_on_reuseport.
 - drop make_server() in patch 7.
 - drop make_client() too in patch 9.

v3:
 - a new patch to add backlog for network_helper_opts.
 - use start_server_str in sockmap_ktls now, not start_server.

v2:
 - address Eduard's comments in v1. (thanks)
 - fix errors reported by CI.

This patch set uses network helpers in sk_lookup, and drops the local
helpers inetaddr_len() and make_socket().
====================

Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2024-07-10 11:56:22 -07:00
Geliang Tang
9004054b16 selftests/bpf: Use connect_fd_to_fd in sk_lookup
This patch uses public helper connect_fd_to_fd() exported in
network_helpers.h instead of using getsockname() + connect() in
run_lookup_prog() in prog_tests/sk_lookup.c. This can simplify
the code.

Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
Link: https://lore.kernel.org/r/7077c277cde5a1864cdc244727162fb75c8bb9c5.1720515893.git.tanggeliang@kylinos.cn
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2024-07-10 11:56:22 -07:00
Geliang Tang
d9810c43f6 selftests/bpf: Use start_server_addr in sk_lookup
This patch uses public helper start_server_addr() in udp_recv_send()
in prog_tests/sk_lookup.c to simplify the code.

And use ASSERT_OK_FD() to check fd returned by start_server_addr().

Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
Link: https://lore.kernel.org/r/f11cabfef4a2170ecb66a1e8e2e72116d8f621b3.1720515893.git.tanggeliang@kylinos.cn
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2024-07-10 11:56:22 -07:00
Geliang Tang
14fc6fcd35 selftests/bpf: Use start_server_str in sk_lookup
This patch uses public helper start_server_str() to simplify make_server()
in prog_tests/sk_lookup.c.

Add a callback setsockopts() to do all the sockopts and set it as the
post_socket_cb pointer of struct network_helper_opts. Also add a new
struct cb_opts to save the data that needs to be passed to the callback.
Then pass this network_helper_opts to start_server_str().

Also use ASSERT_OK_FD() to check fd returned by start_server_str().

Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
Link: https://lore.kernel.org/r/5981539f5591d2c4998c962ef2bf45f34c940548.1720515893.git.tanggeliang@kylinos.cn
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2024-07-10 11:56:22 -07:00
Geliang Tang
adae187ebe selftests/bpf: Close fd in error path in drop_on_reuseport
In the error path when update_lookup_map() fails in drop_on_reuseport in
prog_tests/sk_lookup.c, "server1", the fd of server 1, should be closed.
This patch fixes this by using the "goto close_srv1" label instead of "detach"
to close "server1" in this case.

Fixes: 0ab5539f85 ("selftests/bpf: Tests for BPF_SK_LOOKUP attach point")
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
Link: https://lore.kernel.org/r/86aed33b4b0ea3f04497c757845cff7e8e621a2d.1720515893.git.tanggeliang@kylinos.cn
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2024-07-10 11:56:22 -07:00
Geliang Tang
7046345d48 selftests/bpf: Add ASSERT_OK_FD macro
Add a new dedicated ASSERT macro ASSERT_OK_FD to test whether a socket
FD is valid or not. It can be used to replace macros ASSERT_GT(fd, 0, ""),
ASSERT_NEQ(fd, -1, "") or statements (fd < 0), (fd != -1).
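
A plausible shape for such a macro, following the existing ASSERT_*
helpers in test_progs.h (sketch only; the merged definition may differ):

#define ASSERT_OK_FD(fd, name) ({				\
	static int duration = 0;	/* needed by CHECK() */	\
	int ___fd = (fd);					\
	bool ___ok = ___fd >= 0;				\
	CHECK(!___ok, (name), "unexpected fd: %d (errno %d)\n",	\
	      ___fd, errno);					\
	___ok;							\
})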

Suggested-by: Martin KaFai Lau <martin.lau@kernel.org>
Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
Link: https://lore.kernel.org/r/ded75be86ac630a3a5099739431854c1ec33f0ea.1720515893.git.tanggeliang@kylinos.cn
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2024-07-10 11:56:22 -07:00
Geliang Tang
a3016a27ce selftests/bpf: Add backlog for network_helper_opts
Some callers expect the __start_server() helper to pass their own
"backlog" value to listen() instead of the default of 1. So this patch
adds a "backlog" member to struct network_helper_opts to allow callers
to set the "backlog" value via the start_server_str() helper.

listen(fd, 0 /* backlog */) can be used to enforce syncookie. Meaning
backlog 0 is a legit value.

Using 0 as a default and changing it to 1 here is fine. It makes the
test program easier to write for the common case. Enforcing syncookie
mode by using backlog 0 is a niche use case, but there should at least
be a way for the caller to do that. Thus, a negative backlog value is
used here for the syncookie use case. Please see the comment in
network_helpers.h for
the details.
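
A sketch of the mapping described above (field semantics as explained
here; the helper name is illustrative):

#include <sys/socket.h>

int listen_with_opts_sketch(int fd, int opt_backlog)
{
	/* 0 -> default backlog of 1; negative -> a real 0 to enforce
	 * syncookie mode; positive -> passed through unchanged.
	 */
	int backlog = opt_backlog ? (opt_backlog > 0 ? opt_backlog : 0) : 1;

	return listen(fd, backlog);
}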

Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
Link: https://lore.kernel.org/r/1660229659b66eaad07aa2126e9c9fe217eba0dd.1720515893.git.tanggeliang@kylinos.cn
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2024-07-10 11:56:22 -07:00
Linus Torvalds
f6963ab4b0 bcachefs fixes for 6.10-rc8
- Switch some asserts to WARN()
 - Fix a few "transaction not locked" asserts in the data read retry
   paths and backpointers gc
 - Fix a race that would cause the journal to get stuck on a flush commit
 - Add missing fsck checks for the fragmentation LRU
 - The usual assorted syzbot fixes
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEKnAFLkS8Qha+jvQrE6szbY3KbnYFAmaOuRwACgkQE6szbY3K
 bnaCHhAAi9VRqws+zx3fSpe2OMwWqAEWA84QgIFJccy+I86d7dXkqG389gFqJwMG
 9S3BUHP1WooJmpsTRhK5cNtxZuKKOajXlxUYz3onsF7O/U3dHFY5GU7yIIjXS/0o
 q7+iryWAJ4MmlOrAJhgPMH/WlhbSVsjANUN0n/NhlOWHccFGHmpdMTb6aYzb+lfL
 iZOONKmEOR65gLzZYlO323OB2Tv00iEbOZAtxk68BLZYX+WON/j1T1A8gK4G0XSX
 8wcYpXNxGGkCufjBfAbXf4mcp/WygQq0Wj3bdVMFkZ+AwSJDcfGeK1H7f6tJ9e4n
 lqfWL4tgWIckS+41sA96B5cYry9TMDdhu3IeFaAm0ZrF55JT1JySGE1GNA+mo6xA
 mkMAqhG7rwYh6nSJfWX0Ie+zJ9TFbmi05ZbI7jaTuQjnJ5uvPpTuRfBDi+qSWmoi
 +IBDAi9hZgCUNEsLRGDm7RDQo0dpbFo6jpArn1RHK4MO/HkTrqcKpTqiGnfwFAU4
 PFxwq5G9+d38+M6YMX0tXdfQ+fdxroA6aIBJSsIpF18tPRBOBlQsM2GFP34uHbyk
 L6HOzed2QpM5ExBmViX79F+obuDQ/gzXQszYvDKL4QTFNbx43gPWRDrGm8EQen6y
 12EScamXbUWBSWnOqxscmeUsTdTKxLfw/F43JbE2fE7jSxc5tss=
 =VGT8
 -----END PGP SIGNATURE-----

Merge tag 'bcachefs-2024-07-10' of https://evilpiepirate.org/git/bcachefs

Pull bcachefs fixes from Kent Overstreet:

 - Switch some asserts to WARN()

 - Fix a few "transaction not locked" asserts in the data read retry
   paths and backpointers gc

 - Fix a race that would cause the journal to get stuck on a flush
   commit

 - Add missing fsck checks for the fragmentation LRU

 - The usual assorted syzbot fixes

* tag 'bcachefs-2024-07-10' of https://evilpiepirate.org/git/bcachefs: (22 commits)
  bcachefs: Add missing bch2_trans_begin()
  bcachefs: Fix missing error check in journal_entry_btree_keys_validate()
  bcachefs: Warn on attempting a move with no replicas
  bcachefs: bch2_data_update_to_text()
  bcachefs: Log mount failure error code
  bcachefs: Fix undefined behaviour in eytzinger1_first()
  bcachefs: Mark bch_inode_info as SLAB_ACCOUNT
  bcachefs: Fix bch2_inode_insert() race path for tmpfiles
  closures: fix closure_sync + closure debugging
  bcachefs: Fix journal getting stuck on a flush commit
  bcachefs: io clock: run timer fns under clock lock
  bcachefs: Repair fragmentation_lru in alloc_write_key()
  bcachefs: add check for missing fragmentation in check_alloc_to_lru_ref()
  bcachefs: bch2_btree_write_buffer_maybe_flush()
  bcachefs: Add missing printbuf_tabstops_reset() calls
  bcachefs: Fix loop restart in bch2_btree_transactions_read()
  bcachefs: Fix bch2_read_retry_nodecode()
  bcachefs: Don't use the new_fs() bucket alloc path on an initialized fs
  bcachefs: Fix shift greater than integer size
  bcachefs: Change bch2_fs_journal_stop() BUG_ON() to warning
  ...
2024-07-10 11:50:16 -07:00
Alan Maguire
eeb23b54e4 selftests/bpf: fix compilation failure when CONFIG_NF_FLOW_TABLE=m
In many cases, kernel netfilter functionality is built as modules.
If CONFIG_NF_FLOW_TABLE=m in particular, progs/xdp_flowtable.c
(and hence selftests) will fail to compile, so add a ___local
version of "struct flow_ports".

Fixes: c77e572d3a ("selftests/bpf: Add selftest for bpf_xdp_flow_lookup kfunc")
Signed-off-by: Alan Maguire <alan.maguire@oracle.com>
Link: https://lore.kernel.org/r/20240710150051.192598-1-alan.maguire@oracle.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-07-10 11:39:47 -07:00
Greg Kroah-Hartman
70c8e39440 USB-serial fixes for 6.10-rc8
Here's a fix for a long-standing issue in the mos7840 driver that can trigger
 a crash when resuming from system suspend.
 
 Included are also some new modem device ids.
 
 All have been in linux-next with no reported issues.
 -----BEGIN PGP SIGNATURE-----
 
 iHUEABYKAB0WIQQHbPq+cpGvN/peuzMLxc3C7H1lCAUCZo6w6gAKCRALxc3C7H1l
 CNPAAP0SiU/4HcMRJ/m6Q2xPiuq27Xo6yFg7kjjRiCQVObio6AEA8wjAdSiMoPhx
 p9vfS+8cvZ7z5YtxkBYPKaZsjESq+QM=
 =cW58
 -----END PGP SIGNATURE-----

Merge tag 'usb-serial-6.10-rc8' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/johan/usb-serial into usb-linus

Johan writes:

USB-serial fixes for 6.10-rc8

Here's a fix for a long-standing issue in the mos7840 driver that can trigger
a crash when resuming from system suspend.

Included are also some new modem device ids.

All have been in linux-next with no reported issues.

* tag 'usb-serial-6.10-rc8' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/johan/usb-serial:
  USB: serial: mos7840: fix crash on resume
  USB: serial: option: add Rolling RW350-GL variants
  USB: serial: option: add support for Foxconn T99W651
  USB: serial: option: add Netprisma LCUK54 series modules
2024-07-10 19:55:07 +02:00
Alexander Lobakin
74d1412ac8 idpf: use libeth Rx buffer management for payload buffer
idpf uses Page Pool for data buffers with hardcoded buffer lengths of
4k for "classic" buffers and 2k for "short" ones. This is not flexible
and does not ensure optimal memory usage. Why would you need 4k buffers
when the MTU is 1500?
Use libeth for the data buffers and don't hardcode any buffer sizes. Let
them be calculated from the MTU for "classics" and then divide the
truesize by 2 for "short" ones. The memory usage is now greatly reduced
and 2 buffer queues starts make sense: on frames <= 1024, you'll recycle
(and resync) a page only after 4 HW writes rather than two.

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-07-10 10:48:45 -07:00
Alexander Lobakin
90912f9f4f idpf: convert header split mode to libeth + napi_build_skb()
Currently, idpf uses the following model for the header buffers:

* buffers are allocated via dma_alloc_coherent();
* when receiving, napi_alloc_skb() is called and then the header is
  copied to the newly allocated linear part.

This is far from optimal, as the DMA coherent zone is slow on many
systems and memcpy() neutralizes the idea and benefits of the header
split. Not to mention that XDP can't be run on DMA coherent buffers,
while at the same time the idea of allocating an skb just to run an XDP
program is flawed.
Instead, use libeth to create page_pools for the header buffers, allocate
them dynamically and then build an skb via napi_build_skb() around them
with no memory copy. With one exception...

When you enable header split, you expect you'll always have a separate
header buffer, so that you could reserve headroom and tailroom only
there and then use full buffers for the data. For example, this is how
TCP zerocopy works -- you have to have the payload aligned to PAGE_SIZE.
The current hardware running idpf does *not* guarantee that you'll
always have headers placed separately. For example, on my setup, even
ICMP packets are written as one piece to the data buffers. You can't
build a valid skb around a data buffer in this case.
To not complicate things and not lose TCP zerocopy etc., when such a
thing happens, use the empty header buffer and pull either the full
frame (if it's short) or just the Ethernet header there and build an skb
around it. The GRO layer will pull more from the data buffer later. This
workaround will hopefully be removed one day.

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-07-10 10:47:45 -07:00
Alexander Lobakin
5aaac1aece libeth: support different types of buffers for Rx
Unlike previous generations, idpf requires more buffer types for optimal
performance. This includes: header buffers, short buffers, and
no-overhead buffers (w/o headroom and tailroom, for TCP zerocopy when
the header split is enabled).
Introduce libeth Rx buffer type and calculate page_pool params
accordingly. All the HW-related details like buffer alignment are still
accounted for. For the header buffers, pick 256 bytes as in most places in
the kernel (have you ever seen frames with bigger headers?).

Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-07-10 10:46:32 -07:00
Alexander Lobakin
4309363f19 idpf: remove legacy Page Pool Ethtool stats
Page Pool Ethtool stats are deprecated since the Netlink Page Pool
interface introduction.
idpf receives big changes in Rx buffer management, including the
&page_pool layout, so keeping these deprecated stats only does harm, not
to mention that CONFIG_IDPF selects CONFIG_PAGE_POOL_STATS
unconditionally, while the latter is often turned off for better
performance.
Remove all the references to PP stats from the Ethtool code. The stats
are still available in full via the generic Netlink interface.

Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-07-10 10:44:59 -07:00
Alexander Lobakin
1b1b262085 idpf: reuse libeth's definitions of parsed ptype structures
idpf's in-kernel parsed ptype structure is almost identical to the one
used in the previous Intel drivers, which means it can be converted to
use libeth's definitions and even helpers. The only difference is that
it doesn't use a constant table (as libie does), but rather one obtained
from the device.
Remove the driver counterpart and use libeth's helpers for hashes and
checksums. This slightly optimizes skb fields processing due to faster
checks. Also don't define big static array of ptypes in &idpf_vport --
allocate them dynamically. The pointer to it is anyway cached in
&idpf_rx_queue.

Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-07-10 10:43:57 -07:00
Alexander Lobakin
f771314d6b idpf: compile singleq code only under default-n CONFIG_IDPF_SINGLEQ
Currently, all HW supporting idpf supports the singleq model, but none
of it advertises it by default, as splitq is supported and preferred
for multiple reasons. Still, this almost dead code oftentimes adds
hotpath branches and redundant cacheline accesses.
While it can't currently be removed, add CONFIG_IDPF_SINGLEQ and build
the singleq code only when it's enabled manually. This corresponds to
-10 Kb of object code size and a good bunch of hotpath checks.
idpf_is_queue_model_split() works as a gate and compiles out to `true`
when the config option is disabled.

Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-07-10 10:42:26 -07:00
Alexander Lobakin
14f662b43b idpf: merge singleq and splitq &net_device_ops
It makes no sense to have a second &net_device_ops struct (800 bytes of
rodata) with only one difference in .ndo_start_xmit, which can easily
be just one `if`. This `if` is a drop in the ocean and you won't see
any difference.
Define unified idpf_xmit_start(). The preparation for sending is the
same, just call either idpf_tx_splitq_frame() or idpf_tx_singleq_frame()
depending on the active model to actually map and send the skb.

Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-07-10 10:41:06 -07:00
Alexander Lobakin
5a816aae2d idpf: strictly assert cachelines of queue and queue vector structures
Now that the queue and queue vector structures are separated and laid
out optimally, group the fields as read-mostly, read-write, and cold
cachelines and add size assertions to make sure new features won't push
something out of its place and provoke perf regression.
Despite looking innocent, this gives up to 2% of perf bump on Rx.

Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-07-10 10:35:39 -07:00
Alexander Lobakin
bf9bf7042a idpf: avoid bloating &idpf_q_vector with big %NR_CPUS
With CONFIG_MAXSMP, sizeof(cpumask_t) is 1 Kb. The queue vector
structure has them embedded, which means 1 additional Kb of not
really hotpath data.
We have cpumask_var_t, which is either an embedded cpumask or a pointer
for allocating it dynamically when it's big. Use it instead of plain
cpumasks and put &idpf_q_vector on a good diet.
Also remove redundant pointer to the interrupt name from the structure.
request_irq() saves it and free_irq() returns it on deinit, so that you
can free the memory.
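
A minimal illustration of the cpumask_var_t pattern referred to above
(generic kernel API usage, not the idpf-specific code):

#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/gfp.h>

struct vec_sketch {
	/* was: cpumask_t affinity_mask; -- 1 Kb inline with CONFIG_MAXSMP */
	cpumask_var_t affinity_mask;	/* inline when small, pointer when big */
};

static int vec_sketch_init(struct vec_sketch *v)
{
	/* allocates backing storage only with CONFIG_CPUMASK_OFFSTACK */
	if (!zalloc_cpumask_var(&v->affinity_mask, GFP_KERNEL))
		return -ENOMEM;
	return 0;
}

static void vec_sketch_free(struct vec_sketch *v)
{
	free_cpumask_var(v->affinity_mask);
}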

Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-07-10 10:34:42 -07:00
Alexander Lobakin
e4891e4687 idpf: split &idpf_queue into 4 strictly-typed queue structures
Currently, sizeof(struct idpf_queue) is 32 Kb.
This is due to the 12-bit hashtable declaration at the end of the queue.
This HT is needed only for Tx queues when the flow scheduling mode is
enabled. But &idpf_queue is unified for all of the queue types,
provoking excessive memory usage.
The unified structure in general makes the code less effective via
suboptimal fields placement. You can't avoid that unless you make unions
each 2 fields. Even then, different field alignment etc., doesn't allow
you to optimize things to the limit.
Split &idpf_queue into 4 structures corresponding to the queue types:
RQ (Rx queue), SQ (Tx queue), FQ (buffer queue), and CQ (completion
queue). Place only needed fields there and shortcuts handy for hotpath.
Allocate the abovementioned hashtable dynamically and only when needed,
keeping &idpf_tx_queue relatively short (192 bytes, same as Rx). This HT
is used only for OOO completions, which aren't really hotpath anyway.
Note that this change must be done atomically, otherwise it's really
easy to get lost and miss something.

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-07-10 10:32:27 -07:00
Alexander Lobakin
66c27e3b19 idpf: stop using macros for accessing queue descriptors
In C, we have structures and unions.
Casting `void *` via macros is not only error-prone, but also looks
confusing and awful in general.
In preparation for splitting the queue structs, replace it with a
union and direct array dereferences.

Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Reviewed-by: Mina Almasry <almasrymina@google.com>
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-07-10 10:31:28 -07:00
Alexander Lobakin
62c884256e libeth: add cacheline / struct layout assertion helpers
Add helpers to assert struct field layout, a bit more crazy and
networking-specific than in <linux/cache.h>. They assume you have
3 CL-aligned groups (read-mostly, read-write, cold) in a struct
you want to assert, and nothing besides them.
For 64-bit with 64-byte cachelines, the assertions are as strict
as possible, as the size can then be easily predicted.
For the rest, make sure they don't cross the specified bound.
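
A generic illustration of what such layout assertions boil down to
(plain static_assert() on offsets and sizes; struct and field names are
made up, and the libeth macros wrap this per group):

#include <linux/build_bug.h>
#include <linux/cache.h>
#include <linux/stddef.h>
#include <linux/types.h>

struct q_sketch {
	/* read-mostly */
	void *desc_ring;
	u32 ring_len;
	/* read-write group starts on its own cacheline */
	u32 next_to_use ____cacheline_aligned;
	u32 next_to_clean;
} ____cacheline_aligned;

/* strict form: the read-write group starts exactly one cacheline in ... */
static_assert(offsetof(struct q_sketch, next_to_use) == SMP_CACHE_BYTES);
/* ... and the whole struct occupies exactly two cachelines */
static_assert(sizeof(struct q_sketch) == 2 * SMP_CACHE_BYTES);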

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-07-10 10:29:53 -07:00
Alexander Lobakin
39daa09d34 page_pool: use __cacheline_group_{begin, end}_aligned()
Instead of doing __cacheline_group_begin() __aligned(), use the new
__cacheline_group_{begin,end}_aligned(), so that it will take care
of the group alignment itself.
Also replace open-coded `4 * sizeof(long)` in two places with
a definition.

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-07-10 10:28:23 -07:00
Alexander Lobakin
2cb13dec8c cache: add __cacheline_group_{begin, end}_aligned() (+ couple more)
__cacheline_group_begin(), unfortunately, doesn't align the group in
any way. If that is wanted, then you need to do something like

__cacheline_group_begin(grp) __aligned(ALIGN)

which isn't really convenient or compact.
Add the _aligned() counterparts to align the groups automatically to
either the specified alignment (optional) or ``SMP_CACHE_BYTES``.
Note that the actual struct layout will then be (on x64 with 64-byte CL):

struct x {
	u32 y;				// offset 0, size 4, padding 56
	__cacheline_group_begin__grp;	// offset 64, size 0
	u32 z;				// offset 64, size 4, padding 4
	__cacheline_group_end__grp;	// offset 72, size 0
	__cacheline_group_pad__grp;	// offset 72, size 0, padding 56
	u32 w;				// offset 128
};

The end marker is aligned to long, so that you can assert the struct
size more strictly, but the offset of the next field in the structure
will be aligned to the group alignment, so that the next field won't
fall into the group it's not intended to.

Add __LARGEST_ALIGN definition and LARGEST_ALIGN() macro.
__LARGEST_ALIGN is the value to which the compilers align fields when
__aligned_largest is specified. Sometimes, it might be needed to get
this value outside of variable definitions. LARGEST_ALIGN() is a macro
which just aligns a value to __LARGEST_ALIGN.
Also add SMP_CACHE_ALIGN(), similar to L1_CACHE_ALIGN(), but using
``SMP_CACHE_BYTES`` instead of ``L1_CACHE_BYTES``, as the former
also accounts for L2, which is needed in some cases.
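
Plausible shapes for the added helpers (sketch; the merged definitions
may differ in detail):

/* what __aligned_largest aligns to, usable outside variable definitions */
#define __LARGEST_ALIGN		sizeof(struct { long l; } __aligned_largest)
#define LARGEST_ALIGN(len)	ALIGN(len, __LARGEST_ALIGN)

/* like L1_CACHE_ALIGN(), but on SMP_CACHE_BYTES (accounts for L2 too) */
#define SMP_CACHE_ALIGN(len)	ALIGN(len, SMP_CACHE_BYTES)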

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-07-10 10:19:59 -07:00
Kent Overstreet
fd80d14005 bcachefs: fix scheduling while atomic in break_cycle()
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-07-10 12:59:28 -04:00
Sebastian Andrzej Siewior
c13fda93ac bpf: Remove tst_run from lwt_seg6local_prog_ops.
The syzbot reported that the lwt_seg6 related BPF ops can be invoked
via bpf_test_run() without entering input_action_end_bpf() first.

Martin KaFai Lau said that the self test for BPF_PROG_TYPE_LWT_SEG6LOCAL
probably didn't work since it was introduced in commit 004d4b274e2a
("ipv6: sr: Add seg6local action End.BPF"). The reason is that the
per-CPU variable seg6_bpf_srh_states::srh is never assigned in the self
test case, but each BPF function expects it.

Remove test_run for BPF_PROG_TYPE_LWT_SEG6LOCAL.

Suggested-by: Martin KaFai Lau <martin.lau@linux.dev>
Reported-by: syzbot+608a2acde8c5a101d07d@syzkaller.appspotmail.com
Fixes: d1542d4ae4 ("seg6: Use nested-BH locking for seg6_bpf_srh_states.")
Fixes: 004d4b274e ("ipv6: sr: Add seg6local action End.BPF")
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/r/20240710141631.FbmHcQaX@linutronix.de
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2024-07-10 09:58:52 -07:00
Kent Overstreet
6f692b1672 bcachefs: Fix RCU splat
Reported-by: syzbot+e74fea078710bbca6f4b@syzkaller.appspotmail.com
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-07-10 12:46:22 -04:00
Linus Torvalds
a19ea42149 platform-drivers-x86 for v6.10-6
Highlights:
  -  Fix missing dmi_system_id array termination in toshiba_acpi introduced in 2022
 
 The following is an automated git shortlog grouped by driver:
 
 toshiba_acpi:
  -  Fix array out-of-bounds access
 -----BEGIN PGP SIGNATURE-----
 
 iQFIBAABCAAyFiEEuvA7XScYQRpenhd+kuxHeUQDJ9wFAmaOl6IUHGhkZWdvZWRl
 QHJlZGhhdC5jb20ACgkQkuxHeUQDJ9xSXwgAuGunI5/MqhsyUNDwrC8ZH1MiMUpC
 U81kNnhUn/S4Wu9nu3TunrP99hEAXY//2ImlN6QRiiSXSo95SmRRKSeipV6MeKY5
 bD7HWhUoGA0DApI94nURq85fj56yBJr649R5dEx0TV0DFvFkoZxAUpnHe5m/xtva
 RZbdl8fSpxLqV3fxeXV4b+P3UUDw2DRbI40vCSAJsQ6aJvWwKtENLdvweBJaGyjV
 5ZNJSYd1YRUK7sVoN/cJI4vKmj/qXSU531Y8SeRSzGElMDCwQ9V7kxLq7Df4H68q
 cE8czC7XlVW/t9Vlx0d29v3EzFXvflHjBcHMfpgTwJtPCz+z18rlUXXMtA==
 =f+EK
 -----END PGP SIGNATURE-----

Merge tag 'platform-drivers-x86-v6.10-6' of git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86

Pull x86 platform driver fix from Hans de Goede:
 "One-liner fix for a dmi_system_id array in the toshiba_acpi driver not
  being terminated properly.

  Something which somehow has escaped detection since being introduced
  in 2022 until now"

* tag 'platform-drivers-x86-v6.10-6' of git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86:
  platform/x86: toshiba_acpi: Fix array out-of-bounds access
2024-07-10 09:08:22 -07:00
Linus Torvalds
97488b92e5 ACPI fix for 6.10-rc8
Fix the sorting of _CST output data in the ACPI processor idle
 driver (Kuan-Wei Chiu).
 -----BEGIN PGP SIGNATURE-----
 
 iQJGBAABCAAwFiEE4fcc61cGeeHD/fCwgsRv/nhiVHEFAmaOZu4SHHJqd0Byand5
 c29ja2kubmV0AAoJEILEb/54YlRx3qoQAKHev+jG4IroRaUXgMYdLp4vk9WuBgTo
 eYJgcPgKxr7jAdmfaDbZDHjCLrDbLZvxzVrUSjEfla2UDhSYIOn3DQVImWKWF3fn
 i3xtDwYmxi7+3lmmvzqaVE/Sa6hwycI3eAk1+LfBYlIC8T44FhW8uGiZg0W+1zMW
 MJ61Ea6j16xtvmdtmORRPn7QscaPZc1C4PJjNQ8/yaaI/v4CqwP+7bxQrq/kLdkj
 LH+pQhETcmIcV0u/KJmd/QHcAiU4zFI/hprXVvsiIck/7w/cmwMi17FgTbmxjwRF
 9GlHqM3Sg3TVOiKCeWJVh6MDcNnt4e/6Eb9CupI3KWzEiVk9DUaFBLeMOUcEDwxB
 n0cSJ1WzrDKyJRqWgoRBmbqK+3ywnLpYuNyZf5jQ/o+CfdZOkWkN98kl5jkswOJp
 obXLZT1CYmFmDN4XcI5GGl5v44LJLVVas2Rauc35Z7w/sk5Z1O/bOdLqMphZNkG/
 4B6Vmno4aneAyx2nWIxEmuWF0x5RTwtTwrEsg9r/9wyKwR646XcfXqwuKvmCaIKp
 NVEoZfgzKQ3JpDkHEB/XFmeaSeSv/QIm+8O2or2RXRr7MB7Q5FESRuhvkBsfQxUH
 9dqm3hU+kSd10DrsUojs4T7e67w9/WNCDZDB14EPJ1CTrm0BVdHRb6NK4GOKlnqr
 XQ34U1Vs+iNx
 =/dN3
 -----END PGP SIGNATURE-----

Merge tag 'acpi-6.10-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull ACPI fix from Rafael Wysocki:
 "Fix the sorting of _CST output data in the ACPI processor idle driver
  (Kuan-Wei Chiu)"

* tag 'acpi-6.10-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
  ACPI: processor_idle: Fix invalid comparison with insertion sort for latency
2024-07-10 09:05:22 -07:00
Linus Torvalds
130abfe9a1 Power management fixes for 6.10-rc8
Fix two issues related to boost frequencies handling, one in the cpufreq
 core and one in the ACPI cpufreq driver (Mario Limonciello).
 -----BEGIN PGP SIGNATURE-----
 
 iQJGBAABCAAwFiEE4fcc61cGeeHD/fCwgsRv/nhiVHEFAmaOZn8SHHJqd0Byand5
 c29ja2kubmV0AAoJEILEb/54YlRxvHgP/ROsVSyywkXr9KteIgjID0U9Pte8fBAG
 YAGFHTnV+y/+NY47HHW9UCjJb84z8/QvYfzpzH5iRmO9UQBl4Jkn4VIhLOTmWUPw
 i8HhUQFTVhaRMrT6kW+V5AM0jTwUpmwcUMPbFTAPtTSQ2z1nZHjCo3eBZ5K0SZ/r
 HBh2wykMutQrI+S9vWvCBaWQDq8RbmdFngOelsPP56YeeF63RviHsTeVeQ0zsFaK
 OS1RfmlV+Ri22pVAdXvzpI09cbJ6wfHVFOuxIRMF+dbuD6Riloq1jOVHqr5iAXMK
 tRMHTpWCtzC3rYmXU4m6oXjlUOKQ7IotCpROowA5Od05ooXTYChWsH85Msg66qju
 1hC9/6ltZmEptMMfElNJCvdg6U1Y5EEnAJvuRQYluCoz8ZCH+d9I8OKyurA/QRVR
 uLP+pJk2SFxhq79ULLvCSyEWVwqDQ7dvfXvfGU5mfvQJAZc7YGHJFoN0Tor4C8JW
 b7tZwghPly8RsSIfPmWQDUUWQXud+/LTy1AVDhMthDLW6qBKhxPSXzKCt7/J4umx
 n5iHx3YoEunvcISX3gs2IdQdqaay4usCeJF6kbqchMIGXDEC3270SjOIlgDrsAmh
 jFeW7rZt38/GjgEWe0VzE4B3Xvg92f7Fg5/ItXhehkppJAhsEr6A9V1DvJZHyKAm
 D2G0sY6Cm18q
 =rCmw
 -----END PGP SIGNATURE-----

Merge tag 'pm-6.10-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management fixes from Rafael Wysocki:
 "Fix two issues related to boost frequencies handling, one in the
  cpufreq core and one in the ACPI cpufreq driver (Mario Limonciello)"

* tag 'pm-6.10-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
  cpufreq: ACPI: Mark boost policy as enabled when setting boost
  cpufreq: Allow drivers to advertise boost enabled
2024-07-10 09:03:21 -07:00
Linus Torvalds
d045c46c52 Thermal control fixes for 6.10-rc8
- Prevent the Power Allocator thermal governor from dereferencing a NULL
    pointer if it is bound to a tripless thermal zone (Nícolas Prado).
 
  - Prevent thermal zones enabled too early from staying effectively
    dormant forever because their temperature cannot be determined
    initially (Rafael Wysocki).
 
  - Fix list sorting during thermal zone temperature updates to ensure
    the proper ordering of trip crossing notifications (Rafael Wysocki).
 -----BEGIN PGP SIGNATURE-----
 
 iQJGBAABCAAwFiEE4fcc61cGeeHD/fCwgsRv/nhiVHEFAmaOZdQSHHJqd0Byand5
 c29ja2kubmV0AAoJEILEb/54YlRxV0gQALKvi36z4lCF2NpZtncnS0TCwiqknb3h
 I5W7O28EeED/qKxtUhT0rKaFJA5py9Civ2J1xfccnsO2KLlLtZxzL2yNrHHnewGZ
 SCdVDommTh0zIw01d7h4dzFPE73cYWoX5kwsto9ty0/xi6IBt019LLCIJgB6OmqA
 pu4RoESkhxoVFNrV8dtB7Fj+IT9rIGHtC0c4DZXqIgz6MJkiNAXzwyhL2N7icxlx
 zPhDSBWv2CLTqVxFAFxSc0Hq5FUieU/vMjkrpT4liR4KuTnbqmxOw2pFdgsCf/AJ
 CKhef9aqXeoQYIMTbCreOdgAYMtNekjnUuta8OMwCxop9HCVhh1O1asMURiIX5VT
 8SRal1nDgmTXG5NR0V6TVJ2VYQ6amfqSux0B2lyxcMxr4VsY4kekpsnXXPO68rHB
 ZVCSIza/fH13dyYOrd0GC7Qz2bGRKYstiXXDZc6s69ij7ulDNpG61M49M3W1V7wk
 v2p0SZwjFax3H5DPyd7b9pvBEeAsKGCco2wm/BLauYtnciSsIQBw3Q330DGzsXBm
 EN7vGq8q/w6D6Y3S0syiRyGcaDpDK3FmZerXdASaBRNkWvnXn3fhsOngBY62+3iX
 LqVbvXar2Of//Q9NkvF3S1ko4tJF78vplqz+ScjHUnIE6kfpoby+CVupIcbVXceB
 bcsyCAFYcYlL
 =o+Gb
 -----END PGP SIGNATURE-----

Merge tag 'thermal-6.10-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull thermal control fixes from Rafael Wysocki:
 "These fix a possible NULL pointer dereference in a thermal governor,
  fix up the handling of thermal zones enabled before their temperature
  can be determined and fix list sorting during thermal zone temperature
  updates.

  Specifics:

   - Prevent the Power Allocator thermal governor from dereferencing a
     NULL pointer if it is bound to a tripless thermal zone (Nícolas
     Prado)

   - Prevent thermal zones enabled too early from staying effectively
     dormant forever because their temperature cannot be determined
     initially (Rafael Wysocki)

   - Fix list sorting during thermal zone temperature updates to ensure
     the proper ordering of trip crossing notifications (Rafael
     Wysocki)"

* tag 'thermal-6.10-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
  thermal: core: Fix list sorting in __thermal_zone_device_update()
  thermal: core: Call monitor_thermal_zone() if zone temperature is invalid
  thermal: gov_power_allocator: Return early in manage if trip_max is NULL
2024-07-10 09:00:55 -07:00
Linus Torvalds
367cbaad88 Devicetree fix for 6.10, part 2:
- One fix for PASemi Nemo board interrupts
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEktVUI4SxYhzZyEuo+vtdtY28YcMFAmaNuf4ACgkQ+vtdtY28
 YcPFKQ/9HN09dn9ShKggWFJO0nJvJNcI0aObqBynHpsFSVPHiEWPeu8Td144SHvr
 ofc7UmOn80Q14O9wMkcvlZrkPAZJeBKHTwoeZ8L0bmGTwJQ/24ksqsNZGQi8zZDn
 WKhpgh8wNC4VYync51hBm2UpnPbbJJ7mZ/h6KY+TOqs7tl82bJDDmilQzNl8+Z9w
 3i8Qge1yeDmOPVKyx3RAKMu6QcqmAg3B93BVuqUpg3kyO5i4TAZfX8lb7+WI4GOL
 MmfpzRezTrY0SAlFekuJYFND6TovY5rKZHYgxoYwfHZkv9VpnjsoZ4MYpTu0NQ0l
 ZqkpD1ffGRnsYTCbGSlWufVjQT6kmLlJ3PnkuBKcPKQ1tIsECl9eq7C50845Rfma
 Qiyf8T+T3ix/rfN7OjdJsxOIbjJXb9F8nkfhprwpf1AHnsOQyhndPrIhl3NgcGXX
 uJtf+i0FbHPDqZCmMWGx6Z0xqm1jetT3QST5M4fn5rsrdA23yv1lWbuGDu50XYTP
 bsMnNP8zbZrU/oiBfehCSN4OMd13A4IakmZ9oBGIwqgGIjCpqFY4LPWHmFDDHbTv
 UDfmbl4POWClREPDuVCEXaP+imMrcyrGvjev47xEeJTDVt+rTJfTTJ1cOtjI3yfc
 W/xuTkjpvLymP8tzxLEwxsJOtPH6wbn3YQ+rzD5dwE18buucqXM=
 =2zwc
 -----END PGP SIGNATURE-----

Merge tag 'devicetree-fixes-for-6.10-2' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux

Pull devicetree fix from Rob Herring:

 - One fix for PASemi Nemo board interrupts

* tag 'devicetree-fixes-for-6.10-2' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux:
  of/irq: Disable "interrupt-map" parsing for PASEMI Nemo
2024-07-10 08:58:50 -07:00
Yi Liu
5a88a3f67e vfio/pci: Init the count variable in collecting hot-reset devices
The count variable is used without initialization, which results in
mistakes in the device counting and crashes the userspace if the
get-hot-reset-info path is triggered.
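
A minimal illustration of the bug class and the fix (generic C, not the
vfio code):

#include <stddef.h>

/* count how many entries of 'ids' are nonzero */
size_t count_nonzero(const int *ids, size_t n)
{
	size_t i, count = 0;	/* the fix amounts to this '= 0' */

	for (i = 0; i < n; i++)
		if (ids[i])
			count++;	/* a garbage start value gives a garbage result */

	return count;
}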

Fixes: f6944d4a0b ("vfio/pci: Collect hot-reset devices to local buffer")
Link: https://bugzilla.kernel.org/show_bug.cgi?id=219010
Reported-by: Žilvinas Žaltiena <zaltys@natrix.lt>
Cc: Beld Zhang <beldzhang@gmail.com>
Signed-off-by: Yi Liu <yi.l.liu@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20240710004150.319105-1-yi.l.liu@intel.com
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2024-07-10 08:47:46 -06:00
Armin Wolf
b6e02c6b03 platform/x86: toshiba_acpi: Fix array out-of-bounds access
In order to use toshiba_dmi_quirks[] together with the standard DMI
matching functions, it must be terminated by an empty entry.

Since this entry is missing, an array out-of-bounds access occurs
every time the quirk list is processed.

Fix this by adding the terminating empty entry.
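
The shape of the fix being described (generic dmi_system_id pattern;
the entry contents are placeholders, not the driver's actual quirk
table):

#include <linux/dmi.h>

static const struct dmi_system_id quirks_sketch[] = {
	{
		.matches = {
			DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
			DMI_MATCH(DMI_PRODUCT_NAME, "EXAMPLE MODEL"),
		},
		.driver_data = (void *)SOME_QUIRK_FLAG,	/* placeholder */
	},
	{ }	/* terminating empty entry the DMI walkers stop at */
};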

Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202407091536.8b116b3d-lkp@intel.com
Fixes: 3cb1f40dfd ("drivers/platform: toshiba_acpi: Call HCI_PANEL_POWER_ON on resume on some models")
Cc: stable@vger.kernel.org
Signed-off-by: Armin Wolf <W_Armin@gmx.de>
Link: https://lore.kernel.org/r/20240709143851.10097-1-W_Armin@gmx.de
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
2024-07-10 16:12:12 +02:00
Kent Overstreet
7d7f71cd87 bcachefs: Add missing bch2_trans_begin()
this fixes a 'transaction should be locked' error in backpointers fsck

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-07-10 09:53:39 -04:00
Kent Overstreet
0f6f8f7693 bcachefs: Fix missing error check in journal_entry_btree_keys_validate()
Closes: https://syzkaller.appspot.com/bug?extid=8996d8f176cf946ef641
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-07-10 09:53:39 -04:00
Kent Overstreet
f49d2c9835 bcachefs: Warn on attempting a move with no replicas
Instead of popping an assert in bch2_write(), WARN and print out some
debugging info.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-07-10 09:53:39 -04:00
Kent Overstreet
ad8b68cd39 bcachefs: bch2_data_update_to_text()
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-07-10 09:53:39 -04:00
Kent Overstreet
0f1f7324da bcachefs: Log mount failure error code
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-07-10 09:53:39 -04:00
Kent Overstreet
8ed58789fc bcachefs: Fix undefined behaviour in eytzinger1_first()
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
2024-07-10 09:53:39 -04:00