Commit Graph

1265792 Commits

Author SHA1 Message Date
Jens Axboe
414d0f45c3 io_uring/alloc_cache: switch to array based caching
Currently lists are being used to manage this, but best practice is
usually to have these in an array instead, as that is cheaper to manage.

Outside of that detail, games are also played with KASAN as the list
is inside the cached entry itself.

Finally, all users of this need a struct io_cache_entry embedded in
their struct, which is union'ized with something else in there that
isn't used across the free -> realloc cycle.

Get rid of all of that, and simply have it be an array. This will not
change the memory used, as we're just trading an 8-byte member entry
for the per-elem array size.

This reduces the overhead of the recycled allocations, and it reduces
the amount of code needed to support recycling to about half of
what it currently is.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-04-15 08:10:25 -06:00
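
The array-based scheme described above can be pictured with a minimal user-space
sketch; the names (struct alloc_cache, cache_get/cache_put) and layout are
illustrative, not the actual io_uring/alloc_cache.h code:

    #include <stdlib.h>

    /* A fixed-size array of recycled pointers; no list node or
     * io_cache_entry lives inside the cached object itself. */
    struct alloc_cache {
        void   **entries;     /* array of cached pointers      */
        unsigned nr_cached;   /* how many are currently stored */
        unsigned max_cached;  /* capacity of the array         */
        size_t   elem_size;   /* size of each cached object    */
    };

    static int cache_init(struct alloc_cache *c, unsigned max, size_t elem_size)
    {
        c->entries = calloc(max, sizeof(void *));
        if (!c->entries)
            return -1;
        c->nr_cached = 0;
        c->max_cached = max;
        c->elem_size = elem_size;
        return 0;
    }

    /* Pop a recycled object, or fall back to a fresh allocation. */
    static void *cache_get(struct alloc_cache *c)
    {
        if (c->nr_cached)
            return c->entries[--c->nr_cached];
        return malloc(c->elem_size);
    }

    /* Recycle the object if there is room, otherwise free it. */
    static void cache_put(struct alloc_cache *c, void *obj)
    {
        if (c->nr_cached < c->max_cached)
            c->entries[c->nr_cached++] = obj;
        else
            free(obj);
    }

Because the cache only stores pointers in its own array, the cached object is
never touched while it sits in the cache, which is what removes both the
embedded io_cache_entry union and the KASAN games mentioned above.
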
Jens Axboe
e10677a8f6 io_uring: drop ->prep_async()
It's now unused, drop the code related to it. This includes the
io_issue_defs->manual_alloc field.

While in there, and since ->async_size is now being used a bit more
frequently and in the issue path, move it to io_issue_defs[].

Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-04-15 08:10:25 -06:00
Jens Axboe
5eff57fa9f io_uring/uring_cmd: defer SQE copying until it's needed
The previous commit turned on async data for uring_cmd, and did the
basic conversion of setting everything up on the prep side. However, for
a lot of use cases, -EIOCBQUEUED will get returned on issue, as the
operation got successfully queued. For that case, a persistent SQE isn't
needed, as it's just used for issue.

Unless execution goes async immediately, defer copying the double SQE
until it's necessary.

This greatly reduces the overhead of such commands, as evidenced by
a perf diff from before and after this change:

    10.60%     -8.58%  [kernel.vmlinux]  [k] io_uring_cmd_prep

where the prep side drops from 10.60% to ~2%, which is more expected.
Performance also rises from ~113M IOPS to ~122M IOPS, bringing us back
to where it was before the async command prep.

Tested-by: Anuj Gupta <anuj20.g@samsung.com>
Reviewed-by: Anuj Gupta <anuj20.g@samsung.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-04-15 08:10:25 -06:00
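
The deferral can be sketched roughly as follows; the names are hypothetical,
and the real uring_cmd code deals with the larger 128-byte SQE rather than the
plain 64-byte one assumed here:

    #include <string.h>

    #define SQE_SIZE 64                      /* simplified; uring_cmd uses SQE128 */

    struct sqe            { unsigned char bytes[SQE_SIZE]; };
    struct cmd_async_data { struct sqe copied_sqe; };

    struct cmd {
        const struct sqe      *sqe;          /* points into the shared SQ ring */
        struct cmd_async_data *async;        /* allocated on the prep side     */
    };

    /* prep: just remember where the SQE lives, don't copy it yet */
    static void cmd_prep(struct cmd *c, const struct sqe *sqe,
                         struct cmd_async_data *async)
    {
        c->sqe = sqe;
        c->async = async;
    }

    /* Only when execution actually goes async is the SQE copied, since the
     * ring-provided SQE may be reused once issue returns. */
    static void cmd_go_async(struct cmd *c)
    {
        memcpy(&c->async->copied_sqe, c->sqe, sizeof(*c->sqe));
        c->sqe = &c->async->copied_sqe;
    }

In the common -EIOCBQUEUED case cmd_go_async() never runs, which is where the
drop in prep-side overhead shown in the perf diff comes from.
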
Jens Axboe
d10f19dff5 io_uring/uring_cmd: switch to always allocating async data
Basic conversion ensuring async_data is allocated off the prep path. Adds
a basic alloc cache as well, as passthrough IO can be quite high in rate.

Tested-by: Anuj Gupta <anuj20.g@samsung.com>
Reviewed-by: Anuj Gupta <anuj20.g@samsung.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-04-15 08:10:25 -06:00
Jens Axboe
e2ea5a7069 io_uring/net: move connect to always using async data
While doing that, get rid of io_async_connect and just use the generic
io_async_msghdr. Both of them have a struct sockaddr_storage in there,
and while io_async_msghdr is bigger, if the same type can be used then
the netmsg_cache can get reused for connect as well.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-04-15 08:10:25 -06:00
Jens Axboe
d6f911a6b2 io_uring/rw: add iovec recycling
Let the io_async_rw hold on to the iovec and reuse it, rather than always
allocate and free them.

Also enables KASAN for the iovec entries, so that reuse can be detected
even while they are in the cache.

While doing so, shrink io_async_rw by getting rid of the bigger embedded
fast iovec. Since iovecs are being recycled now, shrink it from 8 to 1.
This reduces the io_async_rw size from 264 to 160 bytes, a 40% reduction.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-04-15 08:10:25 -06:00
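
A simplified, self-contained sketch of the recycling idea (illustrative names,
not the kernel's io_async_rw):

    #include <stdlib.h>
    #include <sys/uio.h>

    struct async_rw {
        struct iovec  fast_iov;     /* single inline iovec, was 8 entries      */
        struct iovec *free_iovec;   /* recycled heap iovec from a past request */
        int           free_nr;      /* capacity of free_iovec                  */
    };

    /* Get an iovec array for 'nr' segments, reusing the cached allocation
     * when it is big enough instead of allocating a fresh one each time. */
    static struct iovec *rw_iovec(struct async_rw *rw, int nr)
    {
        if (nr <= 1)
            return &rw->fast_iov;
        if (rw->free_iovec && rw->free_nr >= nr)
            return rw->free_iovec;
        free(rw->free_iovec);
        rw->free_iovec = malloc((size_t)nr * sizeof(struct iovec));
        rw->free_nr = rw->free_iovec ? nr : 0;
        return rw->free_iovec;
    }

On completion the heap iovec is simply kept rather than freed; the kernel
additionally KASAN-poisons it at that point so any stray access while it sits
in the cache is caught.
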
Jens Axboe
cca6571381 io_uring/rw: cleanup retry path
We no longer need to gate a potential retry on whether or not the
context matches our original task, as all read/write operations have
been fully prepared upfront. This means there's never any re-import
needed, and hence we can always retry requests.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-04-15 08:10:25 -06:00
Jens Axboe
0d10bd77a1 io_uring: get rid of struct io_rw_state
A separate state struct is not needed anymore, just fold it in with
io_async_rw.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-04-15 08:10:25 -06:00
Jens Axboe
a9165b83c1 io_uring/rw: always setup io_async_rw for read/write requests
read/write requests try to put everything on the stack, and then alloc
and copy if a retry is needed. This necessitates a bunch of nasty code
that deals with intermediate state.

Get rid of this, and have the prep side setup everything that is needed
upfront, which greatly simplifies the opcode handlers.

This includes adding an alloc cache for io_async_rw, to make it cheap
to handle.

In terms of cost, this should be basically free and transparent. For
the worst case of {READ,WRITE}_FIXED which didn't need it before,
performance is unaffected in the normal peak workload that is being
used to test that. Still runs at 122M IOPS.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-04-15 08:10:25 -06:00
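
The control-flow change can be sketched like this (hypothetical names; the
real code imports the iovec/iterator into the cached io_async_rw during prep):

    #include <stdlib.h>

    struct io_state { char iter[64]; };     /* whatever must survive a retry */

    struct request {
        struct io_state *async;             /* valid from prep onwards */
    };

    /* Old pattern (roughly): issue built state on the stack and only copied
     * it to the heap when -EAGAIN forced async execution or a retry. */

    /* New pattern: prep allocates (ideally from the alloc cache) and sets
     * everything up once, before issue ever runs. */
    static int prep(struct request *req)
    {
        req->async = calloc(1, sizeof(*req->async));
        return req->async ? 0 : -1;
    }

    /* issue: no on-stack state and no intermediate copy step; a retry can
     * simply call issue() again with the same async state. */
    static int issue(struct request *req)
    {
        (void)req->async->iter;
        return 0;
    }

This is also what makes the retry-path cleanup in cca6571381 above possible:
nothing ever needs re-importing.
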
Jens Axboe
d80f940701 io_uring/net: drop 'kmsg' parameter from io_req_msg_cleanup()
Now that iovec recycling is being done, the iovec is no longer being
freed in there. Hence the kmsg parameter is now useless.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-04-15 08:10:25 -06:00
Jens Axboe
7519134178 io_uring/net: add iovec recycling
Right now the io_async_msghdr is recycled to avoid the overhead of
allocating+freeing it for every request. But the iovec is not included,
hence that will be allocated and freed for each transfer regardless.
This commit enables recycling of the iovec between io_async_msghdr
recycles. This avoids alloc+free for each one if an iovec is used, and
on top of that, it extends the cache hot nature of msg to the iovec as
well.

Also enables KASAN for the iovec entries, so that reuse can be detected
even while they are in the cache.

The io_async_msghdr also shrinks from 376 -> 288 bytes, an 88 byte
saving (or ~23% smaller), as the fast_iovec entry is dropped from 8
entries to a single entry. There's no point keeping a big fast iovec
entry, if iovecs aren't being allocated and freed continually.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-04-15 08:10:25 -06:00
Jens Axboe
9f8539fe29 io_uring/net: remove (now) dead code in io_netmsg_recycle()
All net commands have async data at this point, there's no reason to
check if this is the case or not.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-04-15 08:10:25 -06:00
Jens Axboe
6498c5c97c io_uring: kill io_msg_alloc_async_prep()
We now ONLY call io_msg_alloc_async() from inside prep handling, which
is always locked. No need for this helper anymore, or the check in
io_msg_alloc_async() on whether the ring is locked or not.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-04-15 08:10:25 -06:00
Jens Axboe
50220d6ac8 io_uring/net: get rid of ->prep_async() for send side
Move the io_async_msghdr out of the issue path and into prep handling,
since it's now done unconditionally and hence does not need to be part
of the issue path. This means any usage of io_sendrecv_prep_async() and
io_sendmsg_prep_async() is gone, and hence the forced async setup path
is now unified with the normal prep setup.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-04-15 08:10:25 -06:00
Jens Axboe
c6f32c7d9e io_uring/net: get rid of ->prep_async() for receive side
Move the io_async_msghdr out of the issue path and into prep handling,
since it's now done unconditionally and hence does not need to be part
of the issue path. This reduces the footprint of the multishot fast
path of multiple invocations of ->issue() per prep, and also means that
using ->prep_async() can be dropped for recvmsg as this is now done via
setup on the prep side.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-04-15 08:10:25 -06:00
Jens Axboe
3ba8345aec io_uring/net: always set kmsg->msg.msg_control_user before issue
We currently set this separately for async/sync entry, but let's just
move it to a generic pre-issue spot and eliminate the difference
between the two.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-04-15 08:10:25 -06:00
Jens Axboe
790b68b32a io_uring/net: always setup an io_async_msghdr
Rather than use an on-stack one and then need to allocate and copy if
async execution is required, always grab one upfront. This should be
very cheap, and potentially even have cache hotness benefits for
back-to-back send/recv requests.

For any recv type of request, this is probably a good choice in general,
as it's expected that no data is available initially. For send this is
not necessarily the case, as space in the socket buffer is expected to
be available. However, getting a cached io_async_msghdr is very cheap,
and as it should be cache hot, probably the difference here is negligible,
if any.

A nice side benefit is that io_setup_async_msg can get killed
completely, which has some nasty iovec manipulation code.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-04-15 08:10:25 -06:00
Jens Axboe
f5b00ab222 io_uring/net: unify cleanup handling
Now that recv/recvmsg both do the same cleanup, put it in the retry and
finish handlers.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-04-15 08:10:25 -06:00
Jens Axboe
4a3223f7bf io_uring/net: switch io_recv() to using io_async_msghdr
No functional changes in this patch, just in preparation for carrying
more state than what is available now, if necessary.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-04-15 08:10:25 -06:00
Jens Axboe
54cdcca05a io_uring/net: switch io_send() and io_send_zc() to using io_async_msghdr
No functional changes in this patch, just in preparation for carrying
more state than what is being done now, if necessary. While unifying
some of this code, add a generic send setup prep handler that they can
both use.

This gets rid of some manual msghdr and sockaddr on the stack, and makes
it look a bit more like the sendmsg/recvmsg variants. Going forward, more
can get unified on top.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-04-15 08:10:25 -06:00
Jens Axboe
0ae9b9a14d io_uring/alloc_cache: shrink default max entries from 512 to 128
In practice, we just need to recycle a few elements for (by far) most
use cases. Shrink the total size down from 512 to 128, which should be
more than plenty.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-04-15 08:10:25 -06:00
Jens Axboe
29f858a7c6 io_uring: remove timeout/poll specific cancelations
For historical reasons these were special cased, as they were the only
ones that needed cancelation. But now we handle cancelations generally,
and hence there's no need to check for these in
io_ring_ctx_wait_and_kill() when io_uring_try_cancel_requests() handles
both these and the rest as well.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-04-15 08:10:25 -06:00
Jens Axboe
2541762342 io_uring: flush delayed fallback task_work in cancelation
Just like we run the inline task_work, ensure we also factor in and
run the fallback task_work.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-04-15 08:10:24 -06:00
Pavel Begunkov
c133b3b06b io_uring: clean up io_lockdep_assert_cq_locked
Move CONFIG_PROVE_LOCKING checks inside of io_lockdep_assert_cq_locked()
and kill the else branch.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Tested-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/bbf33c429c9f6d7207a8fe66d1a5866ec2c99850.1710799188.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-04-15 08:10:24 -06:00
Pavel Begunkov
0667db14e1 io_uring: refactor io_req_complete_post()
Make io_req_complete_post() push all IORING_SETUP_IOPOLL requests
to task_work; it's much cleaner and should normally happen. We couldn't
do it before because there was a possibility of looping in

complete_post() -> tw -> complete_post() -> ...

Also, unexport the function and inline __io_req_complete_post().

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Tested-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/ea19c032ace3e0dd96ac4d991a063b0188037014.1710799188.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-04-15 08:10:24 -06:00
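
The recursion problem and its fix can be pictured with a tiny deferral sketch;
this is a generic illustration, not io_uring's actual task_work machinery:

    #include <stddef.h>

    struct req {
        struct req *next;
        void (*complete)(struct req *);
    };

    static struct req *tw_head;             /* pending "task_work" list */

    /* Instead of completing in place (which could recurse
     * complete_post() -> tw -> complete_post() -> ...), queue it. */
    static void tw_add(struct req *r)
    {
        r->next = tw_head;
        tw_head = r;
    }

    /* Drained iteratively from a safe context, so the chain above
     * becomes a flat loop rather than recursion. */
    static void tw_run(void)
    {
        while (tw_head) {
            struct req *r = tw_head;
            tw_head = r->next;
            r->complete(r);
        }
    }
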
Pavel Begunkov
23fbdde620 io_uring: remove current check from complete_post
task_work execution is now always locked, and we shouldn't get into
io_req_complete_post() from it. That means that complete_post() is
always called out of the original task context and we don't even need to
check current.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Tested-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/24ec27f27db0d8f58c974d8118dca1d345314ddc.1710799188.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-04-15 08:10:24 -06:00
Pavel Begunkov
902ce82c2a io_uring: get rid of intermediate aux cqe caches
io_post_aux_cqe(), which is used for multishot requests, delays
completions by putting CQEs into a temporary array for the purpose of
completion lock/flush batching.

DEFER_TASKRUN doesn't need any locking, so for it we can put completions
directly into the CQ and defer post completion handling with a flag.
That leaves !DEFER_TASKRUN, which is not that interesting / hot for
multishot requests, so have conditional locking with deferred flush
for them.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Tested-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/b1d05a81fd27aaa2a07f9860af13059e7ad7a890.1710799188.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-04-15 08:10:24 -06:00
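
A rough, self-contained sketch of the resulting flow; the field names are
stand-ins (single_issuer_tw plays the role of DEFER_TASKRUN), not the real
io_ring_ctx layout:

    #include <stdbool.h>
    #include <pthread.h>

    struct cqe { unsigned long long user_data; int res; };

    struct ring {
        bool            single_issuer_tw;   /* stands in for DEFER_TASKRUN   */
        bool            cq_flush_pending;   /* deferred post-completion work */
        pthread_mutex_t completion_lock;
        struct cqe      cq[256];
        unsigned        cq_tail;
    };

    /* Post an auxiliary (multishot) CQE straight into the CQ, with no
     * temporary array in between. */
    static void post_aux_cqe(struct ring *r, unsigned long long ud, int res)
    {
        if (!r->single_issuer_tw)
            pthread_mutex_lock(&r->completion_lock);

        r->cq[r->cq_tail++ & 255] = (struct cqe){ .user_data = ud, .res = res };
        r->cq_flush_pending = true;          /* commit/wakeups happen later */

        if (!r->single_issuer_tw)
            pthread_mutex_unlock(&r->completion_lock);
    }
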
Pavel Begunkov
e5c12945be io_uring: refactor io_fill_cqe_req_aux
The restriction on multishot execution context disallowing io-wq is
driven by the rules of io_fill_cqe_req_aux(): it should only be called
in the master task context, either from the syscall path or in task_work.
Since task_work now always takes the ctx lock implying
IO_URING_F_COMPLETE_DEFER, we can just assume that the function is
always called with its defer argument set to true.

Kill the argument. Also rename the function for more consistency, as
"fill" in CQE related functions has usually meant raw interfaces that
only copy data into the CQ, without any of the locking, user waking and
other accounting that "post" functions take care of.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Tested-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/93423d106c33116c7d06bf277f651aa68b427328.1710799188.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-04-15 08:10:24 -06:00
Pavel Begunkov
8e5b3b89ec io_uring: remove struct io_tw_state::locked
ctx is always locked for task_work now, so get rid of struct
io_tw_state::locked. Note I'm stopping one step before removing
io_tw_state altogether, which is now empty, because it still serves the
purpose of indicating which function is a tw callback and forcing users
not to invoke them carelessly out of a wrong context. The removal can
always be done later.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Tested-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/e95e1ea116d0bfa54b656076e6a977bc221392a4.1710799188.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-04-15 08:10:24 -06:00
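
What remains is essentially a type-level marker; sketched here with
illustrative names (the real callback typedef differs):

    /* After dropping ->locked the struct has no members, but keeping the
     * type gives task_work callbacks a distinctive signature so they can't
     * be invoked casually from the wrong context. */
    struct io_tw_state {
    };

    struct io_kiocb;                         /* opaque request type here */

    typedef void (*tw_callback_t)(struct io_kiocb *req, struct io_tw_state *ts);
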
Pavel Begunkov
92219afb98 io_uring: force tw ctx locking
We can run normal task_work without locking the ctx, however we try to
lock anyway and most handlers prefer or require it locked. It might have
been interesting for a multi-submitter ring with high contention
completing async read/write requests via task_work, however that will
still need to go through io_req_complete_post() and potentially take the
lock for rsrc node putting or some other case.

In other words, it's hard to care about it, so always force the locking.
The case described would also want the lock anyway because of the
various io_uring caches.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Tested-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/6ae858f2ef562e6ed9f13c60978c0d48926954ba.1710799188.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-04-15 08:10:24 -06:00
Pavel Begunkov
6e6b8c6212 io_uring/rw: avoid punting to io-wq directly
kiocb_done() shouldn't need to specifically redirect requests to io-wq.
Remove the hop to task_work just to then queue io-wq work; instead,
return -EAGAIN and let the core io_uring code handle the offloading.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Tested-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/413564e550fe23744a970e1783dfa566291b0e6f.1710799188.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-04-15 08:10:24 -06:00
Jens Axboe
1afdb76038 nvme/io_uring: use helper for polled completions
NVMe is making up issue_flags, which is a no-no in general, and to make
matters worse, they are completely the wrong ones. For a pure polled
request, which it does check for, we're already inside the
ctx->uring_lock when the completions are run off io_do_iopoll(). Hence
the correct flag would be '0' rather than IO_URING_F_UNLOCKED.

Reviewed-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-04-15 08:10:24 -06:00
Pavel Begunkov
36a005b9c6 io_uring/cmd: document some uring_cmd related helpers
Add comments warning users that they're only allowed to pass issue_flags
that were given from io_uring.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Tested-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/82ff8a45f2c3eb5f3a04a33f0692e5e4a1320455.1710799188.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-04-15 08:10:24 -06:00
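
The added warnings presumably read along these lines (paraphrased, not the
literal kernel comments):

    /*
     * Callers must only pass issue_flags that io_uring core handed to them,
     * e.g. the flags given to ->uring_cmd() or to a task_work callback.
     * Making flags up, or reusing flags from a different invocation, breaks
     * the locking and completion rules these helpers rely on.
     */
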
Pavel Begunkov
e1eef2e56c io_uring/cmd: fix tw <-> issue_flags conversion
!IO_URING_F_UNLOCKED does not translate to availability of the deferred
completion infra, IO_URING_F_COMPLETE_DEFER does; that is what we should
pass and look for in order to use io_req_complete_defer() and other
variants.

Luckily, it's not a real problem as two wrongs actually made it right,
at least as far as io_uring_cmd_work() goes.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Tested-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/aef76d34fe9410df8ecc42a14544fd76cd9d8b9e.1710799188.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-04-15 08:10:24 -06:00
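
Schematically, the check changes as below; the helper name is made up, while
the two flags are the real io_uring issue_flags bits:

    /* Not the literal uring_cmd diff, just the shape of the fix. */
    static inline bool can_defer_complete(unsigned int issue_flags)
    {
        /* Wrong: absence of IO_URING_F_UNLOCKED says nothing about the
         * deferred-completion infrastructure being set up:
         *
         *     return !(issue_flags & IO_URING_F_UNLOCKED);
         *
         * Right: only IO_URING_F_COMPLETE_DEFER advertises it. */
        return issue_flags & IO_URING_F_COMPLETE_DEFER;
    }
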
Pavel Begunkov
6edd953b6e io_uring/cmd: kill one issue_flags to tw conversion
io_uring cmd converts struct io_tw_state to issue_flags and later back
to io_tw_state, it's awfully ill-fated, not to mention that intermediate
issue_flags state is not correct.

Get rid of the last conversion, drag through tw everything that came
with IO_URING_F_UNLOCKED, and replace the roundabout completion with a
direct call to io_req_complete_defer(), at least for the time being.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Tested-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/c53fa3df749752bd058cf6f824a90704822d6bcc.1710799188.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-04-15 08:10:24 -06:00
Pavel Begunkov
da12d9ab58 io_uring/cmd: move io_uring_try_cancel_uring_cmd()
io_uring_try_cancel_uring_cmd() is a part of the cmd handling, so let's
move it closer to all the other cmd bits in uring_cmd.c.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Tested-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/43a3937af4933655f0fd9362c381802f804f43de.1710799188.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-04-15 08:10:24 -06:00
Linus Torvalds
0bbac3facb Linux 6.9-rc4 2024-04-14 13:38:39 -07:00
Linus Torvalds
72374d71c3 Get rid of lockdep false positives around sysfs/overlayfs

Merge tag 'pull-sysfs-annotation-fix' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs

Pull sysfs fix from Al Viro:
 "Get rid of lockdep false positives around sysfs/overlayfs

  syzbot has uncovered a class of lockdep false positives for setups
  with sysfs being one of the backing layers in overlayfs. The root
  cause is that of->mutex allocated when opening a sysfs file read-only
  (which overlayfs might do) is confused with of->mutex of a file opened
  writable (held in write to sysfs file, which overlayfs won't do).

  Assigning them separate lockdep classes fixes that bunch and it's
  obviously safe"

* tag 'pull-sysfs-annotation-fix' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  kernfs: annotate different lockdep class for of->mutex of writable files
2024-04-14 11:41:51 -07:00
Linus Torvalds
27fd80851d Misc x86 fixes:
- Follow up fixes for the BHI mitigations code.
 
  - Fix !SPECULATION_MITIGATIONS bug not turning off
    mitigations as expected.
 
  - Work around an APIC emulation bug when the kernel is built with
    Clang and run as a SEV guest.
 
  - Follow up x86 topology fixes.
 
 Note that there's minor cleanups included in the BHI fixes,
 which we'd normally delay to the next merge window, but the
 BHI mitigations code is new and will be backported widely,
 so we thought it would be better to have a unified codebase
 at this stage. (Let me know if that assumption is wrong and
 I'll rebase it.)
 
 Signed-off-by: Ingo Molnar <mingo@kernel.org>

Merge tag 'x86-urgent-2024-04-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull misc x86 fixes from Ingo Molnar:

 - Follow up fixes for the BHI mitigations code

 - Fix !SPECULATION_MITIGATIONS bug not turning off mitigations as
   expected

 - Work around an APIC emulation bug when the kernel is built with Clang
   and run as a SEV guest

 - Follow up x86 topology fixes

* tag 'x86-urgent-2024-04-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/cpu/amd: Move TOPOEXT enablement into the topology parser
  x86/cpu/amd: Make the NODEID_MSR union actually work
  x86/cpu/amd: Make the CPUID 0x80000008 parser correct
  x86/bugs: Replace CONFIG_SPECTRE_BHI_{ON,OFF} with CONFIG_MITIGATION_SPECTRE_BHI
  x86/bugs: Remove CONFIG_BHI_MITIGATION_AUTO and spectre_bhi=auto
  x86/bugs: Clarify that syscall hardening isn't a BHI mitigation
  x86/bugs: Fix BHI handling of RRSBA
  x86/bugs: Rename various 'ia32_cap' variables to 'x86_arch_cap_msr'
  x86/bugs: Cache the value of MSR_IA32_ARCH_CAPABILITIES
  x86/bugs: Fix BHI documentation
  x86/cpu: Actually turn off mitigations by default for SPECULATION_MITIGATIONS=n
  x86/topology: Don't update cpu_possible_map in topo_set_cpuids()
  x86/bugs: Fix return type of spectre_bhi_state()
  x86/apic: Force native_apic_mem_read() to use the MOV instruction
2024-04-14 10:48:51 -07:00
Linus Torvalds
c748fc3b1f Misc timer fixes:
- Address a (valid) W=1 build warning
 
  - Fix timer self-tests
 
  - Annotate a KCSAN warning wrt. accesses to the
    tick_do_timer_cpu global variable.
 
  - Address a !CONFIG_BUG build warning
 
 Heads up for the !CONFIG_BUG warning patch, which we
 addressed with:
 
    5284984a4f bug: Fix no-return-statement warning with !CONFIG_BUG
 
 Not everyone agreed though, see:
 
   https://lore.kernel.org/all/20240410153212.127477-1-adrian.hunter@intel.com
 
 Signed-off-by: Ingo Molnar <mingo@kernel.org>

Merge tag 'timers-urgent-2024-04-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull timer fixes from Ingo Molnar:

 - Address a (valid) W=1 build warning

 - Fix timer self-tests

 - Annotate a KCSAN warning wrt. accesses to the tick_do_timer_cpu
   global variable

 - Address a !CONFIG_BUG build warning

* tag 'timers-urgent-2024-04-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  selftests: kselftest: Fix build failure with NOLIBC
  selftests: timers: Fix abs() warning in posix_timers test
  selftests: kselftest: Mark functions that unconditionally call exit() as __noreturn
  selftests: timers: Fix posix_timers ksft_print_msg() warning
  selftests: timers: Fix valid-adjtimex signed left-shift undefined behavior
  bug: Fix no-return-statement warning with !CONFIG_BUG
  timekeeping: Use READ/WRITE_ONCE() for tick_do_timer_cpu
  selftests/timers/posix_timers: Reimplement check_timer_distribution()
  irqflags: Explicitly ignore lockdep_hrtimer_exit() argument
2024-04-14 10:32:22 -07:00
Linus Torvalds
a1505c47e7 Fix the x86 PMU multi-counter code returning invalid
data in certain circumstances.
 
 Signed-off-by: Ingo Molnar <mingo@kernel.org>

Merge tag 'perf-urgent-2024-04-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull perf event fix from Ingo Molnar:
 "Fix the x86 PMU multi-counter code returning invalid data in certain
  circumstances"

* tag 'perf-urgent-2024-04-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  perf/x86: Fix out of range data
2024-04-14 10:26:27 -07:00
Linus Torvalds
fa37b3be18 Fix a PREEMPT_RT build bug.
Signed-off-by: Ingo Molnar <mingo@kernel.org>

Merge tag 'locking-urgent-2024-04-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull locking fix from Ingo Molnar:
 "Fix a PREEMPT_RT build bug"

* tag 'locking-urgent-2024-04-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  locking: Make rwsem_assert_held_write_nolockdep() build with PREEMPT_RT=y
2024-04-14 10:13:56 -07:00
Linus Torvalds
c28275e743 Fix a bug in the GIC irqchip driver.
Signed-off-by: Ingo Molnar <mingo@kernel.org>

Merge tag 'irq-urgent-2024-04-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull irq fix from Ingo Molnar:
 "Fix a bug in the GIC irqchip driver"

* tag 'irq-urgent-2024-04-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  irqchip/gic-v3-its: Fix VSYNC referencing an unmapped VPE on GIC v4.1
2024-04-14 10:12:34 -07:00
Linus Torvalds
399f4dae68 virtio: bugfixes
Some small, obvious (in hindsight) bugfixes:
 
 - new ioctl in vhost-vdpa has a wrong # - not too late to fix
 
 - vhost has apparently been lacking an smp_rmb() -
   due to code duplication :( The duplication will be fixed in
   the next merge cycle, this is a minimal fix.
 
 - an error message in vhost talks about guest moving used index -
   which of course never happens, guest only ever moves the
   available index.
 
 - i2c-virtio didn't set the driver owner so it did not get
   refcounted correctly.
 
 Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

Merge tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost

Pull virtio bugfixes from Michael Tsirkin:
 "Some small, obvious (in hindsight) bugfixes:

   - new ioctl in vhost-vdpa has a wrong # - not too late to fix

   - vhost has apparently been lacking an smp_rmb() - due to code
     duplication :( The duplication will be fixed in the next merge
     cycle, this is a minimal fix

   - an error message in vhost talks about guest moving used index -
     which of course never happens, guest only ever moves the available
     index

   - i2c-virtio didn't set the driver owner so it did not get refcounted
     correctly"

* tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost:
  vhost: correct misleading printing information
  vhost-vdpa: change ioctl # for VDPA_GET_VRING_SIZE
  virtio: store owner from modules with register_virtio_driver()
  vhost: Add smp_rmb() in vhost_enable_notify()
  vhost: Add smp_rmb() in vhost_vq_avail_empty()
2024-04-14 10:05:59 -07:00
Linus Torvalds
ddd7ad5cf1 dma-mapping fixes for Linux 6.9
- fix up swiotlb buffer padding even more (Petr Tesarik)
  - fix for partial dma_sync on swiotlb (Michael Kelley)
  - swiotlb debugfs fix (Dexuan Cui)

Merge tag 'dma-maping-6.9-2024-04-14' of git://git.infradead.org/users/hch/dma-mapping

Pull dma-mapping fixes from Christoph Hellwig:

 - fix up swiotlb buffer padding even more (Petr Tesarik)

 - fix for partial dma_sync on swiotlb (Michael Kelley)

 - swiotlb debugfs fix (Dexuan Cui)

* tag 'dma-maping-6.9-2024-04-14' of git://git.infradead.org/users/hch/dma-mapping:
  swiotlb: do not set total_used to 0 in swiotlb_create_debugfs_files()
  swiotlb: fix swiotlb_bounce() to do partial sync's correctly
  swiotlb: extend buffer pre-padding to alloc_align_mask if necessary
2024-04-14 10:02:40 -07:00
Amir Goldstein
16b52bbee4 kernfs: annotate different lockdep class for of->mutex of writable files
The writable file /sys/power/resume may call vfs lookup helpers for
arbitrary paths, and readonly files can be read by overlayfs from vfs
helpers when sysfs is a lower layer of overlayfs.

To avoid a lockdep warning of circular dependency between overlayfs
inode lock and kernfs of->mutex, use a different lockdep class for
writable and readonly kernfs files.

Reported-by: syzbot+9a5b0ced8b1bfb238b56@syzkaller.appspotmail.com
Fixes: 0fedefd4c4 ("kernfs: sysfs: support custom llseek method for sysfs entries")
Suggested-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Amir Goldstein <amir73il@gmail.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2024-04-14 06:55:46 -04:00
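
The shape of the fix is roughly the following (simplified; the helper and key
names here are made up, while lockdep_set_class() and struct lock_class_key
are the standard lockdep facilities):

    static struct lock_class_key kernfs_writable_of_mutex_key;

    static void of_mutex_init(struct kernfs_open_file *of, bool writable)
    {
        mutex_init(&of->mutex);
        if (writable)
            lockdep_set_class(&of->mutex, &kernfs_writable_of_mutex_key);
    }

With writable and read-only openers in different lockdep classes, the
overlayfs-over-sysfs chain no longer appears to close a lock cycle.
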
Linus Torvalds
7efd0a7403 ata fixes for 6.9-rc4
- Add the mask_port_map parameter to the ahci driver. This is a
    follow-up to the recent snafu with the ASMedia controller and its
    virtual ports hiding port-multiplier devices. As ASMedia confirmed
    that there is no way to determine if these slow-to-probe virtual
    ports are actually representing the ports of a port-multiplier
    device, this new parameter allows masking ports to significantly
    speed up probing during system boot, resulting in shorter boot times.
 
  - A fix for an incorrect handling of a port unlock in
    ata_scsi_dev_rescan().
 
  - Allow command duration limits to be detected for ACS-4 devices as
    there are such devices out in the field.

Merge tag 'ata-6.9-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/libata/linux

Pull ata fixes from Damien Le Moal:

 - Add the mask_port_map parameter to the ahci driver. This is a
   follow-up to the recent snafu with the ASMedia controller and its
   virtual ports hiding port-multiplier devices. As ASMedia confirmed
   that there is no way to determine if these slow-to-probe virtual
   ports are actually representing the ports of a port-multiplier
   device, this new parameter allows masking ports to significantly
   speed up probing during system boot, resulting in shorter boot times.

 - A fix for an incorrect handling of a port unlock in
   ata_scsi_dev_rescan().

 - Allow command duration limits to be detected for ACS-4 devices as
   there are such devices out in the field.

* tag 'ata-6.9-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/libata/linux:
  ata: libata-core: Allow command duration limits detection for ACS-4 drives
  ata: libata-scsi: Fix ata_scsi_dev_rescan() error path
  ata: ahci: Add mask_port_map module parameter
2024-04-13 10:27:58 -07:00
Linus Torvalds
76b0e9c429 zonefs fixes for 6.9-rc4
- Suppress a coccicheck warning using str_plural().

Merge tag 'zonefs-6.9-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/dlemoal/zonefs

Pull zonefs fix from Damien Le Moal:

 - Suppress a coccicheck warning using str_plural()

* tag 'zonefs-6.9-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/dlemoal/zonefs:
  zonefs: Use str_plural() to fix Coccinelle warning
2024-04-13 10:25:32 -07:00
Linus Torvalds
fa4022cb73 Four cifs.ko changesets, most also for stable

Merge tag 'v6.9-rc3-SMB3-client-fixes' of git://git.samba.org/sfrench/cifs-2.6

Pull smb client fixes from Steve French:

 - fix for oops in cifs_get_fattr of deleted files

 - fix for the remote open counter going negative in some directory
   lease cases

 - fix for mkfifo to instantiate dentry to avoid possible crash

 - important fix to allow handling key rotation for mount and remount
   (ie cases that are becoming more common when the password that was
   used for the mount will expire soon but will be replaced by a new
   password)

* tag 'v6.9-rc3-SMB3-client-fixes' of git://git.samba.org/sfrench/cifs-2.6:
  smb3: fix broken reconnect when password changing on the server by allowing password rotation
  smb: client: instantiate when creating SFU files
  smb3: fix Open files on server counter going negative
  smb: client: fix NULL ptr deref in cifs_mark_open_handles_for_deleted_file()
2024-04-13 10:10:18 -07:00
Igor Pylypiv
c0297e7dd5 ata: libata-core: Allow command duration limits detection for ACS-4 drives
Even though the command duration limits (CDL) feature was first added
in ACS-5 (major version 12), there are some ACS-4 (major version 11)
drives that implement CDL as well.

IDENTIFY_DEVICE, SUPPORTED_CAPABILITIES, and CURRENT_SETTINGS log pages
are mandatory in the ACS-4 standard so it should be safe to read these
log pages on older drives implementing the ACS-4 standard.

Fixes: 62e4a60e0c ("scsi: ata: libata: Detect support for command duration limits")
Cc: stable@vger.kernel.org
Signed-off-by: Igor Pylypiv <ipylypiv@google.com>
Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
2024-04-13 10:42:28 +09:00
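
A guess at the shape of the change (the helper name is hypothetical;
ata_id_major_version() is the standard libata accessor for the IDENTIFY major
version word):

    /* ACS-5 is ATA major version 12; ACS-4 is version 11 and already
     * mandates the needed log pages, so the gate can be lowered. */
    static bool dev_may_support_cdl(const u16 *id)
    {
        return ata_id_major_version(id) >= 11;   /* previously >= 12 */
    }
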