Add io_uring IO interface
The submission queue (SQ) and completion queue (CQ) rings are shared
between the application and the kernel. This eliminates the need to
copy data back and forth to submit and complete IO.
IO submissions use the io_uring_sqe data structure, and completions
are generated in the form of io_uring_cqe data structures. The SQ
ring is an index into the io_uring_sqe array, which makes it possible
to submit a batch of IOs without them being contiguous in the ring.
The CQ ring is always contiguous, as completion events are inherently
unordered, and hence any io_uring_cqe entry can point back to an
arbitrary submission.
Two new system calls are added for this:
io_uring_setup(entries, params)
Sets up an io_uring instance for doing async IO. On success,
returns a file descriptor that the application can mmap to
gain access to the SQ ring, CQ ring, and io_uring_sqes.
io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
Initiates IO against the rings mapped to this fd, or waits for
them to complete, or both. The behavior is controlled by the
parameters passed in. If 'to_submit' is non-zero, then we'll
try to submit new IO. If IORING_ENTER_GETEVENTS is set, the
kernel will wait for 'min_complete' events, if they aren't
already available. It's valid to set IORING_ENTER_GETEVENTS
and 'min_complete' == 0 at the same time; this allows the
kernel to return already completed events without waiting
for them. This is useful only for polling, as for IRQ
driven IO the application can just check the CQ ring
without entering the kernel.
With this setup, it's possible to do async IO with a single system
call. Future developments will enable polled IO with this interface,
and polled submission as well. The latter will enable an application
to do IO without doing ANY system calls at all.
For IRQ driven IO, an application only needs to enter the kernel for
completions if it wants to wait for them to occur.
Each io_uring is backed by a workqueue, to support buffered async IO
as well. We will only punt to an async context if the command would
need to wait for IO on the device side. Any data that can be accessed
directly in the page cache is done inline. This avoids the slowness
issue of usual threadpools, since cached data is accessed as quickly
as a sync interface.
Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-07 17:46:33 +00:00
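To make the flow above concrete, here is a minimal user-space sketch (not part of this patch) that sets up a ring with io_uring_setup(), mmap()s the SQ ring, SQE array and CQ ring, submits a single IORING_OP_NOP, and reaps its completion with one io_uring_enter() call. It assumes the uapi <linux/io_uring.h> header and the __NR_io_uring_* syscall numbers are available; all error handling is omitted.

#include <linux/io_uring.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
        struct io_uring_params p;
        memset(&p, 0, sizeof(p));

        /* create the ring; the kernel fills in the ring offsets in 'p' */
        int fd = syscall(__NR_io_uring_setup, 4, &p);

        /* map the SQ ring (head/tail/mask/array), the CQ ring and the SQE array */
        size_t sq_sz = p.sq_off.array + p.sq_entries * sizeof(unsigned);
        size_t cq_sz = p.cq_off.cqes + p.cq_entries * sizeof(struct io_uring_cqe);
        unsigned char *sq = mmap(NULL, sq_sz, PROT_READ | PROT_WRITE,
                                 MAP_SHARED | MAP_POPULATE, fd, IORING_OFF_SQ_RING);
        unsigned char *cq = mmap(NULL, cq_sz, PROT_READ | PROT_WRITE,
                                 MAP_SHARED | MAP_POPULATE, fd, IORING_OFF_CQ_RING);
        struct io_uring_sqe *sqes = mmap(NULL,
                                 p.sq_entries * sizeof(struct io_uring_sqe),
                                 PROT_READ | PROT_WRITE,
                                 MAP_SHARED | MAP_POPULATE, fd, IORING_OFF_SQES);

        unsigned *sq_tail = (unsigned *)(sq + p.sq_off.tail);
        unsigned *sq_mask = (unsigned *)(sq + p.sq_off.ring_mask);
        unsigned *sq_array = (unsigned *)(sq + p.sq_off.array);

        /* fill one NOP SQE and publish it with a release store of the SQ tail */
        unsigned tail = *sq_tail;
        unsigned idx = tail & *sq_mask;
        memset(&sqes[idx], 0, sizeof(sqes[idx]));
        sqes[idx].opcode = IORING_OP_NOP;
        sq_array[idx] = idx;
        __atomic_store_n(sq_tail, tail + 1, __ATOMIC_RELEASE);

        /* submit the SQE and wait for one completion in a single system call */
        syscall(__NR_io_uring_enter, fd, 1, 1, IORING_ENTER_GETEVENTS, NULL, 0);

        /* reap the CQE and advance the CQ head */
        unsigned *cq_head = (unsigned *)(cq + p.cq_off.head);
        unsigned *cq_mask = (unsigned *)(cq + p.cq_off.ring_mask);
        struct io_uring_cqe *cqes = (struct io_uring_cqe *)(cq + p.cq_off.cqes);
        unsigned head = *cq_head;
        int res = cqes[head & *cq_mask].res;    /* 0 for a successful NOP */
        __atomic_store_n(cq_head, head + 1, __ATOMIC_RELEASE);
        return res;
}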
// SPDX-License-Identifier: GPL-2.0
/*
 * Shared application/kernel submission and completion ring pairs, for
 * supporting fast/efficient IO.
 *
 * A note on the read/write ordering memory barriers that are matched between
 * the application and kernel side.
 *
 * After the application reads the CQ ring tail, it must use an
 * appropriate smp_rmb() to pair with the smp_wmb() the kernel uses
 * before writing the tail (using smp_load_acquire to read the tail will
 * do). It also needs a smp_mb() before updating CQ head (ordering the
 * entry load(s) with the head store), pairing with an implicit barrier
 * through a control-dependency in io_get_cqe (smp_store_release to
 * store head will do). Failure to do so could lead to reading invalid
 * CQ entries.
 *
 * Likewise, the application must use an appropriate smp_wmb() before
 * writing the SQ tail (ordering SQ entry stores with the tail store),
 * which pairs with smp_load_acquire in io_get_sqring (smp_store_release
 * to store the tail will do). And it needs a barrier ordering the SQ
 * head load before writing new SQ entries (smp_load_acquire to read
 * head will do).
 *
 * When using the SQ poll thread (IORING_SETUP_SQPOLL), the application
 * needs to check the SQ flags for IORING_SQ_NEED_WAKEUP *after*
 * updating the SQ tail; a full memory barrier smp_mb() is needed
 * between.
 *
 * Also see the examples in the liburing library:
 *
 *      git://git.kernel.dk/liburing
 *
 * io_uring also uses READ/WRITE_ONCE() for _any_ store or load that happens
 * from data shared between the kernel and application. This is done both
 * for ordering purposes, but also to ensure that once a value is loaded from
 * data that the application could potentially modify, it remains stable.
 *
 * Copyright (C) 2018-2019 Jens Axboe
 * Copyright (c) 2018-2019 Christoph Hellwig
*/
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/errno.h>
#include <linux/syscalls.h>
#include <net/compat.h>
#include <linux/refcount.h>
#include <linux/uio.h>
#include <linux/bits.h>

#include <linux/sched/signal.h>
#include <linux/fs.h>
#include <linux/file.h>
#include <linux/fdtable.h>
#include <linux/mm.h>
#include <linux/mman.h>
#include <linux/percpu.h>
#include <linux/slab.h>
io_uring: add support for pre-mapped user IO buffers
If we have fixed user buffers, we can map them into the kernel when we
setup the io_uring. That avoids the need to do get_user_pages() for
each and every IO.
To utilize this feature, the application must call io_uring_register()
after having setup an io_uring instance, passing in
IORING_REGISTER_BUFFERS as the opcode. The argument must be a pointer to
an iovec array, and the nr_args should contain how many iovecs the
application wishes to map.
If successful, these buffers are now mapped into the kernel, eligible
for IO. To use these fixed buffers, the application must use the
IORING_OP_READ_FIXED and IORING_OP_WRITE_FIXED opcodes, and then
set sqe->index to the desired buffer index. sqe->addr..sqe->addr+sqe->len
must point to somewhere inside the indexed buffer.
The application may register buffers throughout the lifetime of the
io_uring instance. It can call io_uring_register() with
IORING_UNREGISTER_BUFFERS as the opcode to unregister the current set of
buffers, and then register a new set. The application need not
unregister buffers explicitly before shutting down the io_uring
instance.
It's perfectly valid to setup a larger buffer, and then sometimes only
use parts of it for an IO. As long as the range is within the originally
mapped region, it will work just fine.
For now, buffers must not be file backed. If file backed buffers are
passed in, the registration will fail with -1/EOPNOTSUPP. This
restriction may be relaxed in the future.
RLIMIT_MEMLOCK is used to check how much memory we can pin. A somewhat
arbitrary 1G per buffer size is also imposed.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-09 16:16:05 +00:00
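As an illustration of the registration step described above (not part of this patch), the sketch below registers a single anonymous buffer with IORING_REGISTER_BUFFERS and prepares an IORING_OP_READ_FIXED SQE against it. Note that what the text calls sqe->index is the buf_index field of the uapi SQE; ring setup, submission, and error handling are assumed to happen as in the earlier NOP sketch.

#include <linux/io_uring.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <sys/syscall.h>
#include <sys/uio.h>
#include <unistd.h>

/* register one anonymous (not file backed) buffer; ring_fd is from io_uring_setup() */
static void *register_one_buffer(int ring_fd, size_t len)
{
        struct iovec iov;

        iov.iov_base = malloc(len);
        iov.iov_len = len;
        syscall(__NR_io_uring_register, ring_fd, IORING_REGISTER_BUFFERS,
                &iov, 1);
        return iov.iov_base;
}

/* read from 'file_fd' at offset 0 into registered buffer index 0 */
static void prep_read_fixed(struct io_uring_sqe *sqe, int file_fd,
                            void *buf, unsigned int len)
{
        memset(sqe, 0, sizeof(*sqe));
        sqe->opcode = IORING_OP_READ_FIXED;
        sqe->fd = file_fd;
        sqe->addr = (uint64_t)(uintptr_t)buf;   /* must lie inside the fixed buffer */
        sqe->len = len;
        sqe->off = 0;
        sqe->buf_index = 0;                     /* which registered buffer to use */
}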
#include <linux/bvec.h>
#include <linux/net.h>
#include <net/sock.h>
#include <linux/anon_inodes.h>
#include <linux/sched/mm.h>
#include <linux/uaccess.h>
#include <linux/nospec.h>
#include <linux/fsnotify.h>
#include <linux/fadvise.h>
io_uring: add per-task callback handler
For poll requests, it's not uncommon to link a read (or write) after
the poll to execute immediately after the file is marked as ready.
Since the poll completion is called inside the waitqueue wake up handler,
we have to punt that linked request to async context. This slows down
the processing, and actually means it's faster to not use a link for this
use case.
We also run into problems if the completion_lock is contended, as we're
doing a different lock ordering than the issue side is. Hence we have
to do trylock for completion, and if that fails, go async. Poll removal
needs to go async as well, for the same reason.
eventfd notification needs special casing as well, to avoid stack blowing
recursion or deadlocks.
These are all deficiencies that were inherited from the aio poll
implementation, but I think we can do better. When a poll completes,
simply queue it up in the task poll list. When the task completes the
list, we can run dependent links inline as well. This means we never
have to go async, and we can remove a bunch of code associated with
that, and optimizations to try and make that run faster. The diffstat
speaks for itself.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-02-17 16:52:41 +00:00
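For reference, the user-space pattern the message refers to looks roughly like the liburing sketch below (an illustration, not part of this patch): an IORING_OP_POLL_ADD linked via IOSQE_IO_LINK to a read, so the read is issued as soon as the poll signals readiness. It assumes liburing and an already initialized ring; error handling is omitted.

#include <liburing.h>
#include <poll.h>

/* queue "poll for readable, then read" as one linked pair of SQEs */
static int queue_poll_then_read(struct io_uring *ring, int fd,
                                char *buf, unsigned int len)
{
        struct io_uring_sqe *sqe;

        sqe = io_uring_get_sqe(ring);
        io_uring_prep_poll_add(sqe, fd, POLLIN);
        sqe->flags |= IOSQE_IO_LINK;    /* the read below runs after the poll */

        sqe = io_uring_get_sqe(ring);
        io_uring_prep_read(sqe, fd, buf, len, 0);

        return io_uring_submit(ring);
}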
#include <linux/task_work.h>
#include <linux/io_uring.h>
#include <linux/io_uring/cmd.h>
#include <linux/audit.h>
lsm,io_uring: add LSM hooks to io_uring
A full explanation of io_uring is beyond the scope of this commit
description, but in summary it is an asynchronous I/O mechanism
which allows for I/O requests and the resulting data to be queued
in memory mapped "rings" which are shared between the kernel and
userspace. Optionally, io_uring offers the ability for applications
to spawn kernel threads to dequeue I/O requests from the ring and
submit the requests in the kernel, helping to minimize the syscall
overhead. Rings are accessed in userspace by memory mapping a file
descriptor provided by the io_uring_setup(2), and can be shared
between applications as one might do with any open file descriptor.
Finally, process credentials can be registered with a given ring
and any process with access to that ring can submit I/O requests
using any of the registered credentials.
While the io_uring functionality is widely recognized as offering a
vastly improved, and high performing asynchronous I/O mechanism, its
ability to allow processes to submit I/O requests with credentials
other than its own presents a challenge to LSMs. When a process
creates a new io_uring ring, the ring's credentials are inherited
from the calling process; if this ring is shared with another
process operating with different credentials there is the potential
to bypass the LSM's security policy. Similarly, registering
credentials with a given ring allows any process with access to that
ring to submit I/O requests with those credentials.
In an effort to allow LSMs to apply security policy to io_uring I/O
operations, this patch adds two new LSM hooks. These hooks, in
conjunction with the LSM anonymous inode support previously
submitted, allow an LSM to apply access control policy to the
sharing of io_uring rings as well as any io_uring credential changes
requested by a process.
The new LSM hooks are described below:
* int security_uring_override_creds(cred)
Controls if the current task, executing an io_uring operation,
is allowed to override its credentials with @cred. In cases
where the current task is a user application, the current
credentials will be those of the user application. In cases
where the current task is a kernel thread servicing io_uring
requests the current credentials will be those of the io_uring
ring (inherited from the process that created the ring).
* int security_uring_sqpoll(void)
Controls if the current task is allowed to create an io_uring
polling thread (IORING_SETUP_SQPOLL). Without a SQPOLL thread
in the kernel processes must submit I/O requests via
io_uring_enter(2) which allows us to compare any requested
credential changes against the application making the request.
With a SQPOLL thread, we can no longer compare requested
credential changes against the application making the request,
the comparison is made against the ring's credentials.
Signed-off-by: Paul Moore <paul@paul-moore.com>
2021-02-02 00:56:49 +00:00
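A hypothetical minimal LSM using the two new hooks could look like the sketch below; the hook names and LSM_HOOK_INIT() usage follow the LSM infrastructure, but the policy shown (require CAP_SYS_ADMIN) is purely illustrative and the registration boilerplate around security_add_hooks() is abbreviated, since it varies between kernel versions.

#include <linux/capability.h>
#include <linux/cred.h>
#include <linux/errno.h>
#include <linux/lsm_hooks.h>

/* allow io_uring credential overrides only for privileged tasks */
static int example_uring_override_creds(const struct cred *new)
{
        return capable(CAP_SYS_ADMIN) ? 0 : -EPERM;
}

/* allow creation of an SQPOLL kernel thread only for privileged tasks */
static int example_uring_sqpoll(void)
{
        return capable(CAP_SYS_ADMIN) ? 0 : -EPERM;
}

static struct security_hook_list example_hooks[] __ro_after_init = {
        LSM_HOOK_INIT(uring_override_creds, example_uring_override_creds),
        LSM_HOOK_INIT(uring_sqpoll, example_uring_sqpoll),
};
/* registered from the LSM's init function via security_add_hooks() */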
#include <linux/security.h>
#include <asm/shmparam.h>
io_uring: add set of tracing events
To trace io_uring activity one can get information from workqueue and
io trace events, but it looks like some parts could be hard to identify via
this approach. Making what happens inside io_uring more transparent is
important to be able to reason about many aspects of it, hence introduce
the set of tracing events.
All such events could be roughly divided into two categories:
* those, that are helping to understand correctness (from both kernel
and an application point of view). E.g. a ring creation, file
registration, or waiting for available CQE. Proposed approach is to
get a pointer to an original structure of interest (ring context, or
request), and then find relevant events. io_uring_queue_async_work
also exposes a pointer to work_struct, to be able to track down
corresponding workqueue events.
* those, that provide performance related information. Mostly it's about
events that change the flow of requests, e.g. whether an async work
was queued, or delayed due to some dependencies. Another important
case is how io_uring optimizations (e.g. registered files) are
utilized.
Signed-off-by: Dmitrii Dolgov <9erthalion6@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-10-15 17:02:01 +00:00
#define CREATE_TRACE_POINTS
#include <trace/events/io_uring.h>

#include <uapi/linux/io_uring.h>

#include "io-wq.h"

#include "io_uring.h"
#include "opdef.h"
#include "refs.h"
#include "tctx.h"
#include "register.h"
#include "sqpoll.h"
#include "fdinfo.h"
#include "kbuf.h"
#include "rsrc.h"
#include "cancel.h"
#include "net.h"
#include "notif.h"
#include "waitid.h"
io_uring: add support for futex wake and wait
Add support for FUTEX_WAKE/WAIT primitives.
IORING_OP_FUTEX_WAKE is mix of FUTEX_WAKE and FUTEX_WAKE_BITSET, as
it does support passing in a bitset.
Similarly, IORING_OP_FUTEX_WAIT is a mix of FUTEX_WAIT and
FUTEX_WAIT_BITSET.
For both of them, they are using the futex2 interface.
FUTEX_WAKE is straightforward, as those can always be done directly from
the io_uring submission without needing async handling. For FUTEX_WAIT,
things are a bit more complicated. If the futex isn't ready, then we
rely on a callback via futex_queue->wake() when someone wakes up the
futex. From that callback, we queue up task_work with the original task,
which will post a CQE and wake it, if necessary.
Cancelations are supported, both from the application point-of-view,
but also to be able to cancel pending waits if the ring exits before
all events have occurred. The return value of futex_unqueue() is used
to gate who wins the potential race between cancelation and futex
wakeups. Whoever gets a 'ret == 1' return from that claims ownership
of the io_uring futex request.
This is just the barebones wait/wake support. PI or REQUEUE support is
not added at this point, unclear if we might look into that later.
Likewise, explicit timeouts are not supported either. It is expected
that users that need timeouts would do so via the usual io_uring
mechanism to do that using linked timeouts.
The SQE format is as follows:
`addr` Address of futex
`fd` futex2(2) FUTEX2_* flags
`futex_flags` io_uring specific command flags. None valid now.
`addr2` Value of futex
`addr3` Mask to wake/wait
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-06-08 17:57:40 +00:00
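Following the SQE format listed above, a raw user-space preparation helper for IORING_OP_FUTEX_WAIT could look like the hedged sketch below (illustrative, not part of this patch); it maps the commit's addr/fd/addr2/addr3/futex_flags description onto the uapi io_uring_sqe fields and assumes uapi headers that already carry the futex opcodes. Ring setup and submission are as in the earlier NOP sketch.

#include <linux/futex.h>
#include <linux/io_uring.h>
#include <stdint.h>
#include <string.h>

static void prep_futex_wait(struct io_uring_sqe *sqe, uint32_t *futex_word,
                            uint32_t expected)
{
        memset(sqe, 0, sizeof(*sqe));
        sqe->opcode = IORING_OP_FUTEX_WAIT;
        sqe->fd = FUTEX2_SIZE_U32;                      /* futex2(2) FUTEX2_* flags */
        sqe->addr = (uint64_t)(uintptr_t)futex_word;    /* address of futex */
        sqe->addr2 = expected;                          /* value of futex */
        sqe->addr3 = FUTEX_BITSET_MATCH_ANY;            /* mask to wait on */
        sqe->futex_flags = 0;                           /* io_uring specific flags */
}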
#include "futex.h"
#include "napi.h"
#include "uring_cmd.h"
#include "msg_ring.h"
#include "memmap.h"

#include "timeout.h"
#include "poll.h"
#include "rw.h"
#include "alloc_cache.h"
#include "eventfd.h"

#define IORING_MAX_ENTRIES 32768
#define IORING_MAX_CQ_ENTRIES (2 * IORING_MAX_ENTRIES)

#define SQE_COMMON_FLAGS (IOSQE_FIXED_FILE | IOSQE_IO_LINK | \
                          IOSQE_IO_HARDLINK | IOSQE_ASYNC)

#define SQE_VALID_FLAGS (SQE_COMMON_FLAGS | IOSQE_BUFFER_SELECT | \
                         IOSQE_IO_DRAIN | IOSQE_CQE_SKIP_SUCCESS)

#define IO_REQ_CLEAN_FLAGS (REQ_F_BUFFER_SELECTED | REQ_F_NEED_CLEANUP | \
                            REQ_F_POLLED | REQ_F_INFLIGHT | REQ_F_CREDS | \
                            REQ_F_ASYNC_DATA)

#define IO_REQ_CLEAN_SLOW_FLAGS (REQ_F_REFCOUNT | REQ_F_LINK | REQ_F_HARDLINK |\
                                 IO_REQ_CLEAN_FLAGS)

#define IO_TCTX_REFS_CACHE_NR (1U << 10)

#define IO_COMPL_BATCH 32
#define IO_REQ_ALLOC_BATCH 8

struct io_defer_entry {
        struct list_head list;
        struct io_kiocb *req;
        u32 seq;
};

/* requests with any of those set should undergo io_disarm_next() */
#define IO_DISARM_MASK (REQ_F_ARM_LTIMEOUT | REQ_F_LINK_TIMEOUT | REQ_F_FAIL)
#define IO_REQ_LINK_FLAGS (REQ_F_LINK | REQ_F_HARDLINK)

/*
 * No waiters. It's larger than any valid value of the tw counter
 * so that tests against ->cq_wait_nr would fail and skip wake_up().
 */
#define IO_CQ_WAKE_INIT (-1U)
/* Forced wake up if there is a waiter regardless of ->cq_wait_nr */
#define IO_CQ_WAKE_FORCE (IO_CQ_WAKE_INIT >> 1)

static bool io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
                                         struct task_struct *task,
                                         bool cancel_all);

static void io_queue_sqe(struct io_kiocb *req);

struct kmem_cache *req_cachep;
static struct workqueue_struct *iou_wq __ro_after_init;

static int __read_mostly sysctl_io_uring_disabled;
static int __read_mostly sysctl_io_uring_group = -1;

#ifdef CONFIG_SYSCTL
static struct ctl_table kernel_io_uring_disabled_table[] = {
        {
                .procname = "io_uring_disabled",
                .data = &sysctl_io_uring_disabled,
                .maxlen = sizeof(sysctl_io_uring_disabled),
                .mode = 0644,
                .proc_handler = proc_dointvec_minmax,
                .extra1 = SYSCTL_ZERO,
                .extra2 = SYSCTL_TWO,
        },
        {
                .procname = "io_uring_group",
                .data = &sysctl_io_uring_group,
                .maxlen = sizeof(gid_t),
                .mode = 0644,
                .proc_handler = proc_dointvec,
        },
};
#endif

static inline unsigned int __io_cqring_events(struct io_ring_ctx *ctx)
{
        return ctx->cached_cq_tail - READ_ONCE(ctx->rings->cq.head);
}

io_uring: calculate CQEs from the user visible value
io_cqring_wait (and it's wake function io_has_work) used cached_cq_tail in
order to calculate the number of CQEs. cached_cq_tail is set strictly
before the user visible rings->cq.tail
However as far as userspace is concerned, if io_uring_enter(2) is called
with a minimum number of events, they will verify by checking
rings->cq.tail.
It is therefore possible for io_uring_enter(2) to return early with fewer
events visible to the user.
Instead make the wait functions read from the user visible value, so there
will be no discrepancy.
This is triggered eventually by the following reproducer:
        struct io_uring_sqe *sqe;
        struct io_uring_cqe *cqe;
        unsigned int cqe_ready;
        struct io_uring ring;
        int ret, i;

        ret = io_uring_queue_init(N, &ring, 0);
        assert(!ret);

        while (true) {
                for (i = 0; i < N; i++) {
                        sqe = io_uring_get_sqe(&ring);
                        io_uring_prep_nop(sqe);
                        sqe->flags |= IOSQE_ASYNC;
                }
                ret = io_uring_submit(&ring);
                assert(ret == N);

                do {
                        ret = io_uring_wait_cqes(&ring, &cqe, N, NULL, NULL);
                } while (ret == -EINTR);
                cqe_ready = io_uring_cq_ready(&ring);

                assert(!ret);
                assert(cqe_ready == N);
                io_uring_cq_advance(&ring, N);
        }
Fixes: ad3eb2c89fb2 ("io_uring: split overflow state into SQ and CQ side")
Signed-off-by: Dylan Yudaken <dylany@meta.com>
Link: https://lore.kernel.org/r/20221108153016.1854297-1-dylany@meta.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-11-08 15:30:16 +00:00
static inline unsigned int __io_cqring_events_user(struct io_ring_ctx *ctx)
{
        return READ_ONCE(ctx->rings->cq.tail) - READ_ONCE(ctx->rings->cq.head);
}

static bool io_match_linked(struct io_kiocb *head)
{
        struct io_kiocb *req;

        io_for_each_link(req, head) {
                if (req->flags & REQ_F_INFLIGHT)
                        return true;
        }
        return false;
}

/*
 * As io_match_task() but protected against racing with linked timeouts.
 * User must not hold timeout_lock.
 */
bool io_match_task_safe(struct io_kiocb *head, struct task_struct *task,
                        bool cancel_all)
{
        bool matched;

        if (task && head->task != task)
                return false;
        if (cancel_all)
                return true;

        if (head->flags & REQ_F_LINK_TIMEOUT) {
                struct io_ring_ctx *ctx = head->ctx;

                /* protect against races with linked timeouts */
                spin_lock_irq(&ctx->timeout_lock);
                matched = io_match_linked(head);
                spin_unlock_irq(&ctx->timeout_lock);
        } else {
                matched = io_match_linked(head);
        }
        return matched;
}

static inline void req_fail_link_node(struct io_kiocb *req, int res)
{
        req_set_fail(req);
        io_req_set_res(req, res, 0);
}

static inline void io_req_add_to_cache(struct io_kiocb *req, struct io_ring_ctx *ctx)
{
        wq_stack_add_head(&req->comp_list, &ctx->submit_state.free_list);
}

static __cold void io_ring_ctx_ref_free(struct percpu_ref *ref)
{
        struct io_ring_ctx *ctx = container_of(ref, struct io_ring_ctx, refs);

        complete(&ctx->ref_comp);
}

static __cold void io_fallback_req_func(struct work_struct *work)
{
        struct io_ring_ctx *ctx = container_of(work, struct io_ring_ctx,
                                               fallback_work.work);
        struct llist_node *node = llist_del_all(&ctx->fallback_llist);
        struct io_kiocb *req, *tmp;
        struct io_tw_state ts = {};

        percpu_ref_get(&ctx->refs);
        mutex_lock(&ctx->uring_lock);
        llist_for_each_entry_safe(req, tmp, node, io_task_work.node)
                req->io_task_work.func(req, &ts);
        io_submit_flush_completions(ctx);
        mutex_unlock(&ctx->uring_lock);
        percpu_ref_put(&ctx->refs);
}

static int io_alloc_hash_table(struct io_hash_table *table, unsigned bits)
{
        unsigned hash_buckets = 1U << bits;
        size_t hash_size = hash_buckets * sizeof(table->hbs[0]);

        table->hbs = kmalloc(hash_size, GFP_KERNEL);
        if (!table->hbs)
                return -ENOMEM;

        table->hash_bits = bits;
        init_hash_table(table, hash_buckets);
        return 0;
}

static __cold struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
{
        struct io_ring_ctx *ctx;
        int hash_bits;
        bool ret;

        ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
        if (!ctx)
                return NULL;

        xa_init(&ctx->io_bl_xa);

        /*
         * Use 5 bits less than the max cq entries, that should give us around
         * 32 entries per hash list if totally full and uniformly spread, but
         * don't keep too many buckets, to avoid overconsuming memory.
         */
        hash_bits = ilog2(p->cq_entries) - 5;
        hash_bits = clamp(hash_bits, 1, 8);
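        /*
         * Worked example (illustrative numbers): p->cq_entries == 4096 gives
         * ilog2(4096) - 5 == 7, so 128 buckets and roughly 4096 / 128 == 32
         * entries per hash list when the CQ is completely full; rings with 64
         * or fewer CQ entries clamp to hash_bits == 1 (two buckets).
         */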
        if (io_alloc_hash_table(&ctx->cancel_table, hash_bits))
                goto err;
        if (io_alloc_hash_table(&ctx->cancel_table_locked, hash_bits))
                goto err;
        if (percpu_ref_init(&ctx->refs, io_ring_ctx_ref_free,
                            0, GFP_KERNEL))
                goto err;

        ctx->flags = p->flags;
        atomic_set(&ctx->cq_wait_nr, IO_CQ_WAKE_INIT);
        init_waitqueue_head(&ctx->sqo_sq_wait);
        INIT_LIST_HEAD(&ctx->sqd_list);
        INIT_LIST_HEAD(&ctx->cq_overflow_list);
        INIT_LIST_HEAD(&ctx->io_buffers_cache);
        ret = io_alloc_cache_init(&ctx->rsrc_node_cache, IO_NODE_ALLOC_CACHE_MAX,
                            sizeof(struct io_rsrc_node));
        ret |= io_alloc_cache_init(&ctx->apoll_cache, IO_POLL_ALLOC_CACHE_MAX,
                            sizeof(struct async_poll));
        ret |= io_alloc_cache_init(&ctx->netmsg_cache, IO_ALLOC_CACHE_MAX,
                            sizeof(struct io_async_msghdr));
        ret |= io_alloc_cache_init(&ctx->rw_cache, IO_ALLOC_CACHE_MAX,
                            sizeof(struct io_async_rw));
        ret |= io_alloc_cache_init(&ctx->uring_cache, IO_ALLOC_CACHE_MAX,
                            sizeof(struct uring_cache));
        spin_lock_init(&ctx->msg_lock);
        ret |= io_alloc_cache_init(&ctx->msg_cache, IO_ALLOC_CACHE_MAX,
                            sizeof(struct io_kiocb));
        ret |= io_futex_cache_init(ctx);
        if (ret)
                goto err;
        init_completion(&ctx->ref_comp);
        xa_init_flags(&ctx->personalities, XA_FLAGS_ALLOC1);
        mutex_init(&ctx->uring_lock);
        init_waitqueue_head(&ctx->cq_wait);
        init_waitqueue_head(&ctx->poll_wq);
        init_waitqueue_head(&ctx->rsrc_quiesce_wq);
        spin_lock_init(&ctx->completion_lock);
        spin_lock_init(&ctx->timeout_lock);
        INIT_WQ_LIST(&ctx->iopoll_list);
        INIT_LIST_HEAD(&ctx->io_buffers_comp);
        INIT_LIST_HEAD(&ctx->defer_list);
        INIT_LIST_HEAD(&ctx->timeout_list);
        INIT_LIST_HEAD(&ctx->ltimeout_list);
        INIT_LIST_HEAD(&ctx->rsrc_ref_list);
        init_llist_head(&ctx->work_llist);
        INIT_LIST_HEAD(&ctx->tctx_list);
        ctx->submit_state.free_list.next = NULL;
        INIT_HLIST_HEAD(&ctx->waitid_list);

        /*
         * Futex wait/wake support (from "io_uring: add support for futex
         * wake and wait"):
         *
         * IORING_OP_FUTEX_WAKE is a mix of FUTEX_WAKE and FUTEX_WAKE_BITSET,
         * as it supports passing in a bitset. Similarly, IORING_OP_FUTEX_WAIT
         * is a mix of FUTEX_WAIT and FUTEX_WAIT_BITSET. Both use the futex2
         * interface.
         *
         * FUTEX_WAKE is straightforward, as it can always be done directly
         * from the io_uring submission without needing async handling. For
         * FUTEX_WAIT, things are a bit more complicated: if the futex isn't
         * ready, we rely on a callback via futex_queue->wake() when someone
         * wakes up the futex. From that callback, we queue up task_work with
         * the original task, which will post a CQE and wake it, if necessary.
         *
         * Cancelations are supported, both from the application point of
         * view and to be able to cancel pending waits if the ring exits
         * before all events have occurred. The return value of
         * futex_unqueue() gates who wins the potential race between
         * cancelation and futex wakeups: whoever gets a 'ret == 1' return
         * from it claims ownership of the io_uring futex request.
         *
         * This is just the barebones wait/wake support. PI or REQUEUE
         * support is not added at this point; it is unclear if we might look
         * into that later. Explicit timeouts are not supported either; users
         * that need timeouts are expected to use the usual io_uring
         * mechanism for that, linked timeouts.
         *
         * The SQE format is as follows:
         *      `addr`          Address of futex
         *      `fd`            futex2(2) FUTEX2_* flags
         *      `futex_flags`   io_uring specific command flags. None valid now.
         *      `addr2`         Value of futex
         *      `addr3`         Mask to wake/wait
         *
         * A userspace sketch of this SQE layout follows this function.
         */
#ifdef CONFIG_FUTEX
        INIT_HLIST_HEAD(&ctx->futex_list);
#endif
        INIT_DELAYED_WORK(&ctx->fallback_work, io_fallback_req_func);
        INIT_WQ_LIST(&ctx->submit_state.compl_reqs);
        INIT_HLIST_HEAD(&ctx->cancelable_uring_cmd);
        io_napi_init(ctx);

        return ctx;
err:
        io_alloc_cache_free(&ctx->rsrc_node_cache, kfree);
        io_alloc_cache_free(&ctx->apoll_cache, kfree);
        io_alloc_cache_free(&ctx->netmsg_cache, io_netmsg_cache_free);
        io_alloc_cache_free(&ctx->rw_cache, io_rw_cache_free);
        io_alloc_cache_free(&ctx->uring_cache, kfree);
        io_alloc_cache_free(&ctx->msg_cache, io_msg_cache_free);
        io_futex_cache_free(ctx);
        kfree(ctx->cancel_table.hbs);
        kfree(ctx->cancel_table_locked.hbs);
        xa_destroy(&ctx->io_bl_xa);
        kfree(ctx);
        return NULL;
}
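A minimal userspace sketch of the futex SQE layout described in the comment above, filling the raw SQE fields directly; it assumes only the uapi <linux/io_uring.h> and <linux/futex.h> headers, and the helper name prep_futex_wait() is hypothetical rather than part of any API.

#include <linux/futex.h>
#include <linux/io_uring.h>
#include <stdint.h>
#include <string.h>

/* Illustrative only: fill an IORING_OP_FUTEX_WAIT SQE per the mapping above. */
static void prep_futex_wait(struct io_uring_sqe *sqe, uint32_t *futex,
                            uint64_t val, uint64_t mask)
{
        memset(sqe, 0, sizeof(*sqe));
        sqe->opcode = IORING_OP_FUTEX_WAIT;
        sqe->addr = (uint64_t)(uintptr_t)futex; /* `addr`: address of futex */
        sqe->fd = FUTEX2_SIZE_U32;              /* `fd`: futex2(2) FUTEX2_* flags */
        sqe->futex_flags = 0;                   /* io_uring command flags, none valid now */
        sqe->addr2 = val;                       /* `addr2`: expected value of the futex */
        sqe->addr3 = mask;                      /* `addr3`: mask to wait on */
}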

static void io_account_cq_overflow(struct io_ring_ctx *ctx)
{
        struct io_rings *r = ctx->rings;

        WRITE_ONCE(r->cq_overflow, READ_ONCE(r->cq_overflow) + 1);
        ctx->cq_extra--;
}

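/*
 * Drain handling: a request marked REQ_F_IO_DRAIN is kept deferred until the
 * CQ tail has caught up with the submission sequence recorded for it
 * (cq_extra accounts for CQEs that don't correspond to a submitted SQE).
 */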
static bool req_need_defer(struct io_kiocb *req, u32 seq)
{
        if (unlikely(req->flags & REQ_F_IO_DRAIN)) {
                struct io_ring_ctx *ctx = req->ctx;

                return seq + READ_ONCE(ctx->cq_extra) != ctx->cached_cq_tail;
        }

        return false;
}

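/*
 * Release per-request resources before the request is freed or recycled:
 * selected buffers, opcode-specific cleanup, armed async poll entries,
 * inflight tracking, credentials and async data.
 */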
static void io_clean_op(struct io_kiocb *req)
{
        if (req->flags & REQ_F_BUFFER_SELECTED) {
                spin_lock(&req->ctx->completion_lock);
                io_kbuf_drop(req);
                spin_unlock(&req->ctx->completion_lock);
        }

        if (req->flags & REQ_F_NEED_CLEANUP) {
                const struct io_cold_def *def = &io_cold_defs[req->opcode];

                if (def->cleanup)
                        def->cleanup(req);
        }
        if ((req->flags & REQ_F_POLLED) && req->apoll) {
                kfree(req->apoll->double_poll);
                kfree(req->apoll);
                req->apoll = NULL;
        }
        if (req->flags & REQ_F_INFLIGHT) {
                struct io_uring_task *tctx = req->task->io_uring;

                atomic_dec(&tctx->inflight_tracked);
        }
        if (req->flags & REQ_F_CREDS)
                put_cred(req->creds);
        if (req->flags & REQ_F_ASYNC_DATA) {
                kfree(req->async_data);
                req->async_data = NULL;
        }
        req->flags &= ~IO_REQ_CLEAN_FLAGS;
}

static inline void io_req_track_inflight(struct io_kiocb *req)
{
        if (!(req->flags & REQ_F_INFLIGHT)) {
                req->flags |= REQ_F_INFLIGHT;
                atomic_inc(&req->task->io_uring->inflight_tracked);
        }
}

static struct io_kiocb *__io_prep_linked_timeout(struct io_kiocb *req)
{
        if (WARN_ON_ONCE(!req->link))
                return NULL;

        req->flags &= ~REQ_F_ARM_LTIMEOUT;
        req->flags |= REQ_F_LINK_TIMEOUT;

        /* linked timeouts should have two refs once prep'ed */
        io_req_set_refcount(req);
        __io_req_set_refcount(req->link, 2);
        return req->link;
}

static inline struct io_kiocb *io_prep_linked_timeout(struct io_kiocb *req)
{
        if (likely(!(req->flags & REQ_F_ARM_LTIMEOUT)))
                return NULL;
        return __io_prep_linked_timeout(req);
}

static noinline void __io_arm_ltimeout(struct io_kiocb *req)
{
        io_queue_linked_timeout(__io_prep_linked_timeout(req));
}

static inline void io_arm_ltimeout(struct io_kiocb *req)
{
        if (unlikely(req->flags & REQ_F_ARM_LTIMEOUT))
                __io_arm_ltimeout(req);
}

static void io_prep_async_work(struct io_kiocb *req)
{
        const struct io_issue_def *def = &io_issue_defs[req->opcode];
        struct io_ring_ctx *ctx = req->ctx;

        if (!(req->flags & REQ_F_CREDS)) {
                req->flags |= REQ_F_CREDS;
                req->creds = get_current_cred();
        }

        req->work.list.next = NULL;
        atomic_set(&req->work.flags, 0);
        if (req->flags & REQ_F_FORCE_ASYNC)
                atomic_or(IO_WQ_WORK_CONCURRENT, &req->work.flags);

        if (req->file && !(req->flags & REQ_F_FIXED_FILE))
                req->flags |= io_file_get_flags(req->file);

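        /*
         * Hashed io-wq work for the same inode is executed serially by the
         * workers, which keeps punted buffered IO to a regular file from
         * being issued concurrently and, for writes, out of order.
         */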
        if (req->file && (req->flags & REQ_F_ISREG)) {
                bool should_hash = def->hash_reg_file;

                /* don't serialize this request if the fs doesn't need it */
                if (should_hash && (req->file->f_flags & O_DIRECT) &&
                    (req->file->f_op->fop_flags & FOP_DIO_PARALLEL_WRITE))
                        should_hash = false;
                if (should_hash || (ctx->flags & IORING_SETUP_IOPOLL))
                        io_wq_hash_work(&req->work, file_inode(req->file));
        } else if (!req->file || !S_ISBLK(file_inode(req->file)->i_mode)) {
                if (def->unbound_nonreg_file)
                        atomic_or(IO_WQ_WORK_UNBOUND, &req->work.flags);
        }
}

static void io_prep_async_link(struct io_kiocb *req)
{
        struct io_kiocb *cur;

        if (req->flags & REQ_F_LINK_TIMEOUT) {
                struct io_ring_ctx *ctx = req->ctx;

                spin_lock_irq(&ctx->timeout_lock);
                io_for_each_link(cur, req)
                        io_prep_async_work(cur);
                spin_unlock_irq(&ctx->timeout_lock);
        } else {
                io_for_each_link(cur, req)
                        io_prep_async_work(cur);
        }
}

static void io_queue_iowq(struct io_kiocb *req)
{
        struct io_kiocb *link = io_prep_linked_timeout(req);
        struct io_uring_task *tctx = req->task->io_uring;

        BUG_ON(!tctx);
        BUG_ON(!tctx->io_wq);

        /* init ->work of the whole link before punting */
        io_prep_async_link(req);

        /*
         * Not expected to happen, but if we do have a bug where this _can_
         * happen, catch it here and ensure the request is marked as
         * canceled. That will make io-wq go through the usual work cancel
         * procedure rather than attempt to run this request (or create a new
         * worker for it).
         */
        if (WARN_ON_ONCE(!same_thread_group(req->task, current)))
                atomic_or(IO_WQ_WORK_CANCEL, &req->work.flags);

        trace_io_uring_queue_async_work(req, io_wq_is_hashed(&req->work));
        io_wq_enqueue(tctx->io_wq, &req->work);
        if (link)
                io_queue_linked_timeout(link);
}

static __cold void io_queue_deferred(struct io_ring_ctx *ctx)
{
        while (!list_empty(&ctx->defer_list)) {
                struct io_defer_entry *de = list_first_entry(&ctx->defer_list,
                                                struct io_defer_entry, list);

                if (req_need_defer(de->req, de->seq))
                        break;
                list_del_init(&de->list);
                io_req_task_queue(de->req);
                kfree(de);
        }
}

void __io_commit_cqring_flush(struct io_ring_ctx *ctx)
{
        if (ctx->poll_activated)
                io_poll_wq_wake(ctx);
        if (ctx->off_timeout_used)
                io_flush_timeouts(ctx);
        if (ctx->drain_active) {
                spin_lock(&ctx->completion_lock);
                io_queue_deferred(ctx);
                spin_unlock(&ctx->completion_lock);
        }
        if (ctx->has_evfd)
                io_eventfd_flush_signal(ctx);
}

static inline void __io_cq_lock(struct io_ring_ctx *ctx)
{
        if (!ctx->lockless_cq)
                spin_lock(&ctx->completion_lock);
}

static inline void io_cq_lock(struct io_ring_ctx *ctx)
        __acquires(ctx->completion_lock)
{
        spin_lock(&ctx->completion_lock);
}

static inline void __io_cq_unlock_post(struct io_ring_ctx *ctx)
{
        io_commit_cqring(ctx);
        if (!ctx->task_complete) {
                if (!ctx->lockless_cq)
                        spin_unlock(&ctx->completion_lock);
                /* IOPOLL rings only need to wake up if it's also SQPOLL */
                if (!ctx->syscall_iopoll)
                        io_cqring_wake(ctx);
        }
        io_commit_cqring_flush(ctx);
}

static void io_cq_unlock_post(struct io_ring_ctx *ctx)
        __releases(ctx->completion_lock)
{
        io_commit_cqring(ctx);
        spin_unlock(&ctx->completion_lock);
        io_cqring_wake(ctx);
        io_commit_cqring_flush(ctx);
}

/*
 * CQ ring overflow handling (from "io_uring: add support for backlogged
 * CQ ring"):
 *
 * Instead of dropping completion events when the CQ ring is full, which is
 * fine for requests with bounded completion times but can make io_uring hard
 * or impossible to use for networked IO or POLL where completion times are
 * unbounded, completions that don't fit are stored in a backlog and flushed
 * into the ring later, automatically, by the kernel.
 *
 * To prevent the backlog from growing indefinitely, submission applies back
 * pressure while the backlog is non-empty: any attempt to submit new IO then
 * gets -EBUSY back from the kernel, signalling the application that it has
 * backlogged CQ events and must reap them before being allowed to submit
 * more IO. If -EBUSY is returned, whatever backlogged events fit have
 * already been flushed into the CQ ring first, so the application can reap
 * them without entering the kernel and waiting. A userspace sketch of the
 * submit-side handling follows this function.
 */
static void __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool dying)
{
        size_t cqe_size = sizeof(struct io_uring_cqe);

        lockdep_assert_held(&ctx->uring_lock);

        /* don't abort if we're dying, entries must get freed */
        if (!dying && __io_cqring_events(ctx) == ctx->cq_entries)
                return;

        if (ctx->flags & IORING_SETUP_CQE32)
                cqe_size <<= 1;

        io_cq_lock(ctx);
        while (!list_empty(&ctx->cq_overflow_list)) {
                struct io_uring_cqe *cqe;
                struct io_overflow_cqe *ocqe;

                ocqe = list_first_entry(&ctx->cq_overflow_list,
                                        struct io_overflow_cqe, list);

                if (!dying) {
                        if (!io_get_cqe_overflow(ctx, &cqe, true))
                                break;
                        memcpy(cqe, &ocqe->cqe, cqe_size);
                }
                list_del(&ocqe->list);
                kfree(ocqe);
        }

        if (list_empty(&ctx->cq_overflow_list)) {
                clear_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq);
                atomic_andnot(IORING_SQ_CQ_OVERFLOW, &ctx->rings->sq_flags);
        }
        io_cq_unlock_post(ctx);
}
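A minimal userspace sketch of the back-pressure protocol described in the comment above, assuming liburing and eliding error handling; the helper name submit_with_backpressure() is hypothetical.

#include <errno.h>
#include <liburing.h>

/*
 * Illustrative only: submit pending SQEs and, if the kernel reports -EBUSY
 * because the CQ overflow backlog is non-empty, reap whatever completions
 * have already been flushed into the CQ ring, then retry the submission.
 */
static int submit_with_backpressure(struct io_uring *ring)
{
        int ret;

        while ((ret = io_uring_submit(ring)) == -EBUSY) {
                struct io_uring_cqe *cqe;

                while (io_uring_peek_cqe(ring, &cqe) == 0) {
                        /* ... consume cqe->user_data / cqe->res here ... */
                        io_uring_cqe_seen(ring, cqe);
                }
        }
        return ret;
}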

static void io_cqring_overflow_kill(struct io_ring_ctx *ctx)
{
        if (ctx->rings)
                __io_cqring_overflow_flush(ctx, true);
}

static void io_cqring_do_overflow_flush(struct io_ring_ctx *ctx)
{
        mutex_lock(&ctx->uring_lock);
        __io_cqring_overflow_flush(ctx, false);
        mutex_unlock(&ctx->uring_lock);
}

/* can be called by any task */
static void io_put_task_remote(struct task_struct *task)
{
        struct io_uring_task *tctx = task->io_uring;

        percpu_counter_sub(&tctx->inflight, 1);
        if (unlikely(atomic_read(&tctx->in_cancel)))
                wake_up(&tctx->wait);
        put_task_struct(task);
}

/* used by a task to put its own references */
static void io_put_task_local(struct task_struct *task)
{
        task->io_uring->cached_refs++;
}

/* must be called somewhat shortly after putting a request */
static inline void io_put_task(struct task_struct *task)
{
        if (likely(task == current))
                io_put_task_local(task);
        else
                io_put_task_remote(task);
}
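/*
 * Called when the fast path has driven tctx->cached_refs negative: top the
 * cache back up to IO_TCTX_REFS_CACHE_NR and take the matching number of
 * task and inflight references in one batch instead of one per request.
 */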
void io_task_refs_refill(struct io_uring_task *tctx)
{
        unsigned int refill = -tctx->cached_refs + IO_TCTX_REFS_CACHE_NR;

        percpu_counter_add(&tctx->inflight, refill);
        refcount_add(refill, &current->usage);
        tctx->cached_refs += refill;
}

static __cold void io_uring_drop_tctx_refs(struct task_struct *task)
{
        struct io_uring_task *tctx = task->io_uring;
        unsigned int refs = tctx->cached_refs;

        if (refs) {
                tctx->cached_refs = 0;
                percpu_counter_sub(&tctx->inflight, refs);
                put_task_struct_many(task, refs);
        }
}

static bool io_cqring_event_overflow(struct io_ring_ctx *ctx, u64 user_data,
                                     s32 res, u32 cflags, u64 extra1, u64 extra2)
{
        struct io_overflow_cqe *ocqe;
        size_t ocq_size = sizeof(struct io_overflow_cqe);
        bool is_cqe32 = (ctx->flags & IORING_SETUP_CQE32);

        lockdep_assert_held(&ctx->completion_lock);

        if (is_cqe32)
                ocq_size += sizeof(struct io_uring_cqe);

        ocqe = kmalloc(ocq_size, GFP_ATOMIC | __GFP_ACCOUNT);
        trace_io_uring_cqe_overflow(ctx, user_data, res, cflags, ocqe);
        if (!ocqe) {
                /*
                 * If we're in ring overflow flush mode, or in task cancel mode,
                 * or cannot allocate an overflow entry, then we need to drop it
                 * on the floor.
                 */
                io_account_cq_overflow(ctx);
                set_bit(IO_CHECK_CQ_DROPPED_BIT, &ctx->check_cq);
                return false;
        }
        if (list_empty(&ctx->cq_overflow_list)) {
                set_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq);
                atomic_or(IORING_SQ_CQ_OVERFLOW, &ctx->rings->sq_flags);
        }
        ocqe->cqe.user_data = user_data;
        ocqe->cqe.res = res;
        ocqe->cqe.flags = cflags;
        if (is_cqe32) {
                ocqe->cqe.big_cqe[0] = extra1;
                ocqe->cqe.big_cqe[1] = extra2;
        }
        list_add_tail(&ocqe->list, &ctx->cq_overflow_list);
        return true;
}

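/* Post a request's CQE via the CQ overflow path and clear its stashed CQE32 payload. */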
static void io_req_cqe_overflow(struct io_kiocb *req)
{
        io_cqring_event_overflow(req->ctx, req->cqe.user_data,
                                 req->cqe.res, req->cqe.flags,
                                 req->big_cqe.extra1, req->big_cqe.extra2);
        memset(&req->big_cqe, 0, sizeof(req->big_cqe));
}

/*
 * writes to the cq entry need to come after reading head; the
 * control dependency is enough as we're using WRITE_ONCE to
 * fill the cq entry
 */
bool io_cqe_cache_refill(struct io_ring_ctx *ctx, bool overflow)
{
        struct io_rings *rings = ctx->rings;
        unsigned int off = ctx->cached_cq_tail & (ctx->cq_entries - 1);
        unsigned int free, queued, len;

        /*
         * Posting into the CQ when there are pending overflowed CQEs may break
         * ordering guarantees, which will affect links, F_MORE users and more.
         * Force overflow the completion.
         */
        if (!overflow && (ctx->check_cq & BIT(IO_CHECK_CQ_OVERFLOW_BIT)))
                return false;

        /* userspace may cheat modifying the tail, be safe and do min */
        queued = min(__io_cqring_events(ctx), ctx->cq_entries);
        free = ctx->cq_entries - queued;
        /* we need a contiguous range, limit based on the current array offset */
        len = min(free, ctx->cq_entries - off);
        if (!len)
                return false;

        if (ctx->flags & IORING_SETUP_CQE32) {
                off <<= 1;
                len <<= 1;
        }

        ctx->cqe_cached = &rings->cqes[off];
        ctx->cqe_sentinel = ctx->cqe_cached + len;
        return true;
}

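/*
 * Try to post an auxiliary CQE straight into the CQ ring. Returns false if no
 * CQE slot could be obtained, so the caller can fall back to the overflow path.
 */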
static bool io_fill_cqe_aux(struct io_ring_ctx *ctx, u64 user_data, s32 res,
                            u32 cflags)
{
        struct io_uring_cqe *cqe;

        ctx->cq_extra++;

        /*
         * If we can't get a cq entry, userspace overflowed the
         * submission (by quite a lot). Increment the overflow count in
         * the ring.
         */
        if (likely(io_get_cqe(ctx, &cqe))) {
                trace_io_uring_complete(ctx, NULL, user_data, res, cflags, 0, 0);

                WRITE_ONCE(cqe->user_data, user_data);
                WRITE_ONCE(cqe->res, res);
                WRITE_ONCE(cqe->flags, cflags);

                if (ctx->flags & IORING_SETUP_CQE32) {
                        WRITE_ONCE(cqe->big_cqe[0], 0);
                        WRITE_ONCE(cqe->big_cqe[1], 0);
                }
                return true;
        }
        return false;
}

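/*
 * Post an auxiliary CQE: fill it directly if a CQE is available, otherwise
 * queue it on the overflow list. io_post_aux_cqe() is the locked wrapper.
 */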
static bool __io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res,
                              u32 cflags)
{
        bool filled;

        filled = io_fill_cqe_aux(ctx, user_data, res, cflags);
        if (!filled)
                filled = io_cqring_event_overflow(ctx, user_data, res, cflags, 0, 0);

        return filled;
}

bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags)
{
        bool filled;

        io_cq_lock(ctx);
        filled = __io_post_aux_cqe(ctx, user_data, res, cflags);
        io_cq_unlock_post(ctx);
        return filled;
}

/*
 * Must be called from inline task_work so we know a flush will happen later,
 * and obviously with ctx->uring_lock held (tw always has that).
 */
void io_add_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags)
{
        if (!io_fill_cqe_aux(ctx, user_data, res, cflags)) {
                spin_lock(&ctx->completion_lock);
                io_cqring_event_overflow(ctx, user_data, res, cflags, 0, 0);
                spin_unlock(&ctx->completion_lock);
        }
        ctx->submit_state.cq_flush = true;
}

/*
 * A helper for multishot requests posting additional CQEs.
 * Should only be used from a task_work including IO_URING_F_MULTISHOT.
 */
bool io_req_post_cqe(struct io_kiocb *req, s32 res, u32 cflags)
{
        struct io_ring_ctx *ctx = req->ctx;
        bool posted;

        lockdep_assert(!io_wq_current_is_worker());
        lockdep_assert_held(&ctx->uring_lock);

        __io_cq_lock(ctx);
        posted = io_fill_cqe_aux(ctx, req->cqe.user_data, res, cflags);
        ctx->submit_state.cq_flush = true;
        __io_cq_unlock_post(ctx);
        return posted;
}

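/*
 * Post the completion for a request issued from io-wq. Special CQ sync cases
 * (DEFER_TASKRUN, IOPOLL) are bounced to task_work; everything else posts the
 * CQE directly under the CQ lock.
 */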
static void io_req_complete_post(struct io_kiocb *req, unsigned issue_flags)
{
        struct io_ring_ctx *ctx = req->ctx;

        /*
         * All execution paths but io-wq use the deferred completions by
         * passing IO_URING_F_COMPLETE_DEFER and thus should not end up here.
         */
        if (WARN_ON_ONCE(!(issue_flags & IO_URING_F_IOWQ)))
                return;

        /*
         * Handle special CQ sync cases via task_work. DEFER_TASKRUN requires
         * the submitter task context, IOPOLL protects with uring_lock.
         */
        if (ctx->task_complete || (ctx->flags & IORING_SETUP_IOPOLL)) {
                req->io_task_work.func = io_req_task_complete;
                io_req_task_work_add(req);
                return;
        }

        io_cq_lock(ctx);
        if (!(req->flags & REQ_F_CQE_SKIP)) {
                if (!io_fill_cqe_req(ctx, req))
                        io_req_cqe_overflow(req);
        }
        io_cq_unlock_post(ctx);

        /*
         * We don't free the request here because we know it's called from
         * io-wq only, which holds a reference, so it cannot be the last put.
         */
        req_ref_put(req);
}

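/* Fail a request and complete it via the deferred completion path; the ring must be locked. */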
void io_req_defer_failed(struct io_kiocb *req, s32 res)
        __must_hold(&ctx->uring_lock)
{
        const struct io_cold_def *def = &io_cold_defs[req->opcode];

        lockdep_assert_held(&req->ctx->uring_lock);

        req_set_fail(req);
        io_req_set_res(req, res, io_put_kbuf(req, IO_URING_F_UNLOCKED));
        if (def->fail)
                def->fail(req);
        io_req_complete_defer(req);
}

/*
 * Don't initialise the fields below on every allocation, but do that in
 * advance and keep them valid across allocations.
 */
static void io_preinit_req(struct io_kiocb *req, struct io_ring_ctx *ctx)
{
        req->ctx = ctx;
        req->link = NULL;
        req->async_data = NULL;
        /* not necessary, but safer to zero */
        memset(&req->cqe, 0, sizeof(req->cqe));
        memset(&req->big_cqe, 0, sizeof(req->big_cqe));
}

/*
 * A request might get retired back into the request caches even before opcode
 * handlers and io_issue_sqe() are done with it, e.g. inline completion path.
 * Because of that, io_alloc_req() should be called only under ->uring_lock
 * and with extra caution to not get a request that is still worked on.
 */
__cold bool __io_alloc_req_refill(struct io_ring_ctx *ctx)
        __must_hold(&ctx->uring_lock)
{
        gfp_t gfp = GFP_KERNEL | __GFP_NOWARN;
        void *reqs[IO_REQ_ALLOC_BATCH];
        int ret;

        ret = kmem_cache_alloc_bulk(req_cachep, gfp, ARRAY_SIZE(reqs), reqs);

        /*
         * Bulk alloc is all-or-nothing. If we fail to get a batch,
         * retry single alloc to be on the safe side.
         */
        if (unlikely(ret <= 0)) {
                reqs[0] = kmem_cache_alloc(req_cachep, gfp);
                if (!reqs[0])
                        return false;
                ret = 1;
        }

        percpu_ref_get_many(&ctx->refs, ret);
        while (ret--) {
                struct io_kiocb *req = reqs[ret];

                io_preinit_req(req, ctx);
                io_req_add_to_cache(req, ctx);
        }
        return true;
}

__cold void io_free_req(struct io_kiocb *req)
{
        /* refs were already put, restore them for io_req_task_complete() */
        req->flags &= ~REQ_F_REFCOUNT;
        /* we only want to free it, don't post CQEs */
        req->flags |= REQ_F_CQE_SKIP;
        req->io_task_work.func = io_req_task_complete;
        io_req_task_work_add(req);
}

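/* Slow path of io_req_find_next(): disarm the request's link chain under the completion lock. */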
static void __io_req_find_next_prep(struct io_kiocb *req)
{
        struct io_ring_ctx *ctx = req->ctx;

        spin_lock(&ctx->completion_lock);
        io_disarm_next(req);
        spin_unlock(&ctx->completion_lock);
}

static inline struct io_kiocb *io_req_find_next(struct io_kiocb *req)
{
        struct io_kiocb *nxt;

        /*
         * If LINK is set, we have dependent requests in this chain. If we
         * didn't fail this request, queue the first one up, moving any other
         * dependencies to the next request. In case of failure, fail the rest
         * of the chain.
         */
        if (unlikely(req->flags & IO_DISARM_MASK))
                __io_req_find_next_prep(req);
        nxt = req->link;
        req->link = NULL;
        return nxt;
}

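/*
 * Flush pending completions for @ctx, then drop the uring_lock and the ctx
 * reference taken by the task_work loop.
 */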
static void ctx_flush_and_put(struct io_ring_ctx *ctx, struct io_tw_state *ts)
{
        if (!ctx)
                return;
        if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)
                atomic_andnot(IORING_SQ_TASKRUN, &ctx->rings->sq_flags);

        io_submit_flush_completions(ctx);
        mutex_unlock(&ctx->uring_lock);
        percpu_ref_put(&ctx->refs);
}

/*
 * Run queued task_work, returning the number of entries processed in *count.
 * If more entries than max_entries are available, stop processing once this
 * is reached and return the rest of the list.
 */
struct llist_node *io_handle_tw_list(struct llist_node *node,
                                     unsigned int *count,
                                     unsigned int max_entries)
{
        struct io_ring_ctx *ctx = NULL;
        struct io_tw_state ts = { };

        do {
                struct llist_node *next = node->next;
                struct io_kiocb *req = container_of(node, struct io_kiocb,
                                                    io_task_work.node);

                if (req->ctx != ctx) {
                        ctx_flush_and_put(ctx, &ts);
                        ctx = req->ctx;
                        mutex_lock(&ctx->uring_lock);
                        percpu_ref_get(&ctx->refs);
                }
                INDIRECT_CALL_2(req->io_task_work.func,
                                io_poll_task_func, io_req_rw_complete,
                                req, &ts);
                node = next;
                (*count)++;
                if (unlikely(need_resched())) {
                        ctx_flush_and_put(ctx, &ts);
                        ctx = NULL;
                        cond_resched();
                }
        } while (node && *count < max_entries);

        ctx_flush_and_put(ctx, &ts);
        return node;
}

/**
 * io_llist_xchg - swap all entries in a lock-less list
 * @head: the head of lock-less list to delete all entries
 * @new: new entry as the head of the list
 *
 * If list is empty, return NULL, otherwise, return the pointer to the first entry.
 * The order of entries returned is from the newest to the oldest added one.
 */
static inline struct llist_node *io_llist_xchg(struct llist_head *head,
                                               struct llist_node *new)
{
        return xchg(&head->first, new);
}

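/*
 * The task is exiting or task_work_add() failed: hand pending task_work off to
 * each ring's fallback workqueue. With @sync, also wait for the fallback work
 * of every ctx seen here to finish before returning.
 */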
static __cold void io_fallback_tw(struct io_uring_task *tctx, bool sync)
{
        struct llist_node *node = llist_del_all(&tctx->task_list);
        struct io_ring_ctx *last_ctx = NULL;
        struct io_kiocb *req;

        while (node) {
                req = container_of(node, struct io_kiocb, io_task_work.node);
                node = node->next;
                if (sync && last_ctx != req->ctx) {
                        if (last_ctx) {
                                flush_delayed_work(&last_ctx->fallback_work);
                                percpu_ref_put(&last_ctx->refs);
                        }
                        last_ctx = req->ctx;
                        percpu_ref_get(&last_ctx->refs);
                }
                if (llist_add(&req->io_task_work.node,
                              &req->ctx->fallback_llist))
                        schedule_delayed_work(&req->ctx->fallback_work, 1);
        }

        if (last_ctx) {
                flush_delayed_work(&last_ctx->fallback_work);
                percpu_ref_put(&last_ctx->refs);
        }
}

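/*
 * Run up to @max_entries pending task_work items for @tctx; the number handled
 * is returned in *count and any unprocessed remainder of the list is returned.
 */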
struct llist_node *tctx_task_work_run(struct io_uring_task *tctx,
                                      unsigned int max_entries,
                                      unsigned int *count)
{
        struct llist_node *node;

        if (unlikely(current->flags & PF_EXITING)) {
                io_fallback_tw(tctx, true);
                return NULL;
        }

        node = llist_del_all(&tctx->task_list);
        if (node) {
                node = llist_reverse_order(node);
                node = io_handle_tw_list(node, count, max_entries);
        }

        /* relaxed read is enough as only the task itself sets ->in_cancel */
        if (unlikely(atomic_read(&tctx->in_cancel)))
                io_uring_drop_tctx_refs(current);

        trace_io_uring_task_work_run(tctx, *count);
        return node;
}

void tctx_task_work(struct callback_head *cb)
{
        struct io_uring_task *tctx;
        struct llist_node *ret;
        unsigned int count = 0;

        tctx = container_of(cb, struct io_uring_task, task_work);
        ret = tctx_task_work_run(tctx, UINT_MAX, &count);
        /* can't happen */
        WARN_ON_ONCE(ret);
}

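/*
 * Queue task_work on the DEFER_TASKRUN path: push the request onto the ring's
 * lock-less work list and decide whether the submitter task needs a wakeup.
 */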
static inline void io_req_local_work_add(struct io_kiocb *req,
                                         struct io_ring_ctx *ctx,
                                         unsigned flags)
{
        unsigned nr_wait, nr_tw, nr_tw_prev;
        struct llist_node *head;

        /* See comment above IO_CQ_WAKE_INIT */
        BUILD_BUG_ON(IO_CQ_WAKE_FORCE <= IORING_MAX_CQ_ENTRIES);

        /*
         * We don't know how many requests are in the link and whether
         * they can even be queued lazily, fall back to non-lazy.
         */
        if (req->flags & (REQ_F_LINK | REQ_F_HARDLINK))
                flags &= ~IOU_F_TWQ_LAZY_WAKE;

        guard(rcu)();

        head = READ_ONCE(ctx->work_llist.first);
        do {
                nr_tw_prev = 0;
                if (head) {
                        struct io_kiocb *first_req = container_of(head,
                                                        struct io_kiocb,
                                                        io_task_work.node);
                        /*
                         * Might be executed at any moment, rely on
                         * SLAB_TYPESAFE_BY_RCU to keep it alive.
                         */
                        nr_tw_prev = READ_ONCE(first_req->nr_tw);
                }

                /*
                 * Theoretically, it can overflow, but that's fine as one of
                 * previous adds should've tried to wake the task.
                 */
                nr_tw = nr_tw_prev + 1;
                if (!(flags & IOU_F_TWQ_LAZY_WAKE))
                        nr_tw = IO_CQ_WAKE_FORCE;

                req->nr_tw = nr_tw;
                req->io_task_work.node.next = head;
        } while (!try_cmpxchg(&ctx->work_llist.first, &head,
                              &req->io_task_work.node));

        /*
         * cmpxchg implies a full barrier, which pairs with the barrier
         * in set_current_state() on the io_cqring_wait() side. It's used
         * to ensure that either we see updated ->cq_wait_nr, or waiters
         * going to sleep will observe the work added to the list, which
         * is similar to the wait/wake task state sync.
         */

        if (!head) {
                if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)
                        atomic_or(IORING_SQ_TASKRUN, &ctx->rings->sq_flags);
                if (ctx->has_evfd)
                        io_eventfd_signal(ctx);
        }

        nr_wait = atomic_read(&ctx->cq_wait_nr);
        /* not enough or no one is waiting */
        if (nr_tw < nr_wait)
                return;
        /* the previous add has already woken it up */
        if (nr_tw_prev >= nr_wait)
                return;
        wake_up_state(ctx->submitter_task, TASK_INTERRUPTIBLE);
}

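/*
 * Queue task_work via the generic task_work_add() path. For SQPOLL rings the
 * poller thread is prodded instead; on failure, fall back to the ctx workqueue.
 */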
static void io_req_normal_work_add(struct io_kiocb *req)
{
        struct io_uring_task *tctx = req->task->io_uring;
        struct io_ring_ctx *ctx = req->ctx;

        /* task_work already pending, we're done */
        if (!llist_add(&req->io_task_work.node, &tctx->task_list))
                return;

        if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)
                atomic_or(IORING_SQ_TASKRUN, &ctx->rings->sq_flags);

        /* SQPOLL doesn't need the task_work added, it'll run it itself */
        if (ctx->flags & IORING_SETUP_SQPOLL) {
                struct io_sq_data *sqd = ctx->sq_data;

                if (sqd->thread)
                        __set_notify_signal(sqd->thread);
                return;
        }

        if (likely(!task_work_add(req->task, &tctx->task_work, ctx->notify_method)))
                return;

        io_fallback_tw(tctx, false);
}

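/* Route task_work to the local DEFER_TASKRUN list or the normal task_work path. */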
void __io_req_task_work_add(struct io_kiocb *req, unsigned flags)
{
        if (req->ctx->flags & IORING_SETUP_DEFER_TASKRUN)
                io_req_local_work_add(req, req->ctx, flags);
        else
                io_req_normal_work_add(req);
}

void io_req_task_work_add_remote(struct io_kiocb *req, struct io_ring_ctx *ctx,
                                 unsigned flags)
{
        if (WARN_ON_ONCE(!(ctx->flags & IORING_SETUP_DEFER_TASKRUN)))
                return;
        io_req_local_work_add(req, ctx, flags);
}

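/* Move any pending DEFER_TASKRUN work over to the normal task_work path. */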
static void __cold io_move_task_work_from_local(struct io_ring_ctx *ctx)
{
        struct llist_node *node;

        node = llist_del_all(&ctx->work_llist);
        while (node) {
                struct io_kiocb *req = container_of(node, struct io_kiocb,
                                                    io_task_work.node);

                node = node->next;
                io_req_normal_work_add(req);
        }
}

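/*
 * Decide whether another pass over the local work list is needed: keep going
 * while work is queued and fewer than @min_events items have been run.
 */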
static bool io_run_local_work_continue(struct io_ring_ctx *ctx, int events,
                                       int min_events)
{
        if (llist_empty(&ctx->work_llist))
                return false;
        if (events < min_events)
                return true;
        if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)
                atomic_or(IORING_SQ_TASKRUN, &ctx->rings->sq_flags);
        return false;
}

static int __io_run_local_work(struct io_ring_ctx *ctx, struct io_tw_state *ts,
                               int min_events)
{
        struct llist_node *node;
        unsigned int loops = 0;
        int ret = 0;

        if (WARN_ON_ONCE(ctx->submitter_task != current))
                return -EEXIST;
        if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)
                atomic_andnot(IORING_SQ_TASKRUN, &ctx->rings->sq_flags);
again:
        /*
         * llists are in reverse order, flip it back the right way before
         * running the pending items.
         */
        node = llist_reverse_order(io_llist_xchg(&ctx->work_llist, NULL));
        while (node) {
                struct llist_node *next = node->next;
                struct io_kiocb *req = container_of(node, struct io_kiocb,
                                                    io_task_work.node);
                INDIRECT_CALL_2(req->io_task_work.func,
                                io_poll_task_func, io_req_rw_complete,
                                req, ts);
                ret++;
                node = next;
        }
        loops++;

        if (io_run_local_work_continue(ctx, ret, min_events))
                goto again;
        io_submit_flush_completions(ctx);
        if (io_run_local_work_continue(ctx, ret, min_events))
                goto again;

        trace_io_uring_local_work_run(ctx, ret, loops);
        return ret;
}

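/*
 * Run DEFER_TASKRUN work: the _locked variant expects ctx->uring_lock to be
 * held by the caller, while io_run_local_work() takes and drops it itself.
 */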
static inline int io_run_local_work_locked(struct io_ring_ctx *ctx,
                                           int min_events)
{
        struct io_tw_state ts = {};

        if (llist_empty(&ctx->work_llist))
                return 0;
        return __io_run_local_work(ctx, &ts, min_events);
}

static int io_run_local_work(struct io_ring_ctx *ctx, int min_events)
{
        struct io_tw_state ts = {};
        int ret;

        mutex_lock(&ctx->uring_lock);
        ret = __io_run_local_work(ctx, &ts, min_events);
        mutex_unlock(&ctx->uring_lock);
        return ret;
}

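/* task_work callback that fails the request with its stashed result. */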
static void io_req_task_cancel(struct io_kiocb *req, struct io_tw_state *ts)
{
        io_tw_lock(req->ctx, ts);
        io_req_defer_failed(req, req->cqe.res);
}

void io_req_task_submit(struct io_kiocb *req, struct io_tw_state *ts)
{
        io_tw_lock(req->ctx, ts);
        /* req->task == current here, checking PF_EXITING is safe */
        if (unlikely(req->task->flags & PF_EXITING))
                io_req_defer_failed(req, -EFAULT);
        else if (req->flags & REQ_F_FORCE_ASYNC)
                io_queue_iowq(req);
        else
                io_queue_sqe(req);
|
|
|
}
|
|
|
|
|
2022-05-25 14:57:27 +00:00
|
|
|
void io_req_task_queue_fail(struct io_kiocb *req, int ret)
|
2020-06-25 21:39:59 +00:00
|
|
|
{
|
2022-05-24 21:21:00 +00:00
|
|
|
io_req_set_res(req, ret, 0);
|
2021-06-30 20:54:04 +00:00
|
|
|
req->io_task_work.func = io_req_task_cancel;
|
2022-05-21 15:17:05 +00:00
|
|
|
io_req_task_work_add(req);
|
2020-06-25 21:39:59 +00:00
|
|
|
}
|
|
|
|
|
2022-06-13 13:27:03 +00:00
|
|
|
void io_req_task_queue(struct io_kiocb *req)
|
2021-02-18 22:32:52 +00:00
|
|
|
{
|
2021-06-30 20:54:04 +00:00
|
|
|
req->io_task_work.func = io_req_task_submit;
|
2022-05-21 15:17:05 +00:00
|
|
|
io_req_task_work_add(req);
|
2021-02-18 22:32:52 +00:00
|
|
|
}
|
|
|
|
|
2022-05-25 14:57:27 +00:00
|
|
|
void io_queue_next(struct io_kiocb *req)
|
2019-11-09 03:00:08 +00:00
|
|
|
{
|
2020-06-29 10:13:00 +00:00
|
|
|
struct io_kiocb *nxt = io_req_find_next(req);
|
2019-11-21 20:21:01 +00:00
|
|
|
|
|
|
|
if (nxt)
|
2020-06-27 11:04:55 +00:00
|
|
|
io_req_task_queue(nxt);
|
2019-11-09 03:00:08 +00:00
|
|
|
}
|
|
|
|
|
2023-08-24 22:53:29 +00:00
|
|
|
static void io_free_batch_list(struct io_ring_ctx *ctx,
|
|
|
|
struct io_wq_work_node *node)
|
2021-09-24 20:59:50 +00:00
|
|
|
__must_hold(&ctx->uring_lock)
|
2020-07-18 08:32:52 +00:00
|
|
|
{
|
2021-09-24 20:59:50 +00:00
|
|
|
do {
|
|
|
|
struct io_kiocb *req = container_of(node, struct io_kiocb,
|
|
|
|
comp_list);
|
2020-06-28 09:52:33 +00:00
|
|
|
|
2022-03-21 22:02:22 +00:00
|
|
|
if (unlikely(req->flags & IO_REQ_CLEAN_SLOW_FLAGS)) {
|
|
|
|
if (req->flags & REQ_F_REFCOUNT) {
|
|
|
|
node = req->comp_list.next;
|
|
|
|
if (!req_ref_put_and_test(req))
|
|
|
|
continue;
|
|
|
|
}
|
2022-03-21 22:02:23 +00:00
|
|
|
if ((req->flags & REQ_F_POLLED) && req->apoll) {
|
|
|
|
struct async_poll *apoll = req->apoll;
|
|
|
|
|
|
|
|
if (apoll->double_poll)
|
|
|
|
kfree(apoll->double_poll);
|
2024-03-20 21:19:44 +00:00
|
|
|
if (!io_alloc_cache_put(&ctx->apoll_cache, apoll))
|
2022-07-07 20:20:54 +00:00
|
|
|
kfree(apoll);
|
2022-03-21 22:02:23 +00:00
|
|
|
req->flags &= ~REQ_F_POLLED;
|
|
|
|
}
|
2022-04-15 21:08:29 +00:00
|
|
|
if (req->flags & IO_REQ_LINK_FLAGS)
|
2022-03-21 22:02:24 +00:00
|
|
|
io_queue_next(req);
|
2022-03-21 22:02:22 +00:00
|
|
|
if (unlikely(req->flags & IO_REQ_CLEAN_FLAGS))
|
|
|
|
io_clean_op(req);
|
2021-10-04 19:02:55 +00:00
|
|
|
}
|
2023-07-07 17:14:40 +00:00
|
|
|
io_put_file(req);
|
2024-04-05 15:50:05 +00:00
|
|
|
io_put_rsrc_node(ctx, req->rsrc_node);
|
2023-06-23 11:23:25 +00:00
|
|
|
io_put_task(req->task);
|
2024-04-05 15:50:05 +00:00
|
|
|
|
2021-10-04 19:02:55 +00:00
|
|
|
node = req->comp_list.next;
|
2022-04-12 14:09:48 +00:00
|
|
|
io_req_add_to_cache(req, ctx);
|
2021-09-24 20:59:50 +00:00
|
|
|
} while (node);
|
2020-03-03 18:33:13 +00:00
|
|
|
}
|
|
|
|
|
2023-08-24 22:53:29 +00:00
|
|
|
void __io_submit_flush_completions(struct io_ring_ctx *ctx)
|
2021-08-12 18:48:34 +00:00
|
|
|
__must_hold(&ctx->uring_lock)
|
2021-02-10 00:03:14 +00:00
|
|
|
{
|
2021-08-09 19:18:11 +00:00
|
|
|
struct io_submit_state *state = &ctx->submit_state;
|
2023-03-09 16:51:13 +00:00
|
|
|
struct io_wq_work_node *node;
|
2021-02-10 00:03:14 +00:00
|
|
|
|
2022-12-07 15:50:01 +00:00
|
|
|
__io_cq_lock(ctx);
|
2023-03-09 16:51:13 +00:00
|
|
|
__wq_list_for_each(node, &state->compl_reqs) {
|
2022-06-19 11:26:08 +00:00
|
|
|
struct io_kiocb *req = container_of(node, struct io_kiocb,
|
|
|
|
comp_list);
|
2021-11-10 15:49:33 +00:00
|
|
|
|
2022-12-07 15:50:01 +00:00
|
|
|
if (!(req->flags & REQ_F_CQE_SKIP) &&
|
2023-08-11 12:53:43 +00:00
|
|
|
unlikely(!io_fill_cqe_req(ctx, req))) {
|
2023-09-07 12:50:08 +00:00
|
|
|
if (ctx->lockless_cq) {
|
2022-12-07 15:50:01 +00:00
|
|
|
spin_lock(&ctx->completion_lock);
|
|
|
|
io_req_cqe_overflow(req);
|
|
|
|
spin_unlock(&ctx->completion_lock);
|
|
|
|
} else {
|
|
|
|
io_req_cqe_overflow(req);
|
|
|
|
}
|
|
|
|
}
|
2021-02-10 00:03:14 +00:00
|
|
|
}
|
2023-06-23 11:23:31 +00:00
|
|
|
__io_cq_unlock_post(ctx);
|
2022-06-19 11:26:08 +00:00
|
|
|
|
2024-06-14 16:57:03 +00:00
|
|
|
if (!wq_list_empty(&state->compl_reqs)) {
|
2022-11-24 09:35:54 +00:00
|
|
|
io_free_batch_list(ctx, state->compl_reqs.first);
|
|
|
|
INIT_WQ_LIST(&state->compl_reqs);
|
|
|
|
}
|
2024-03-18 22:00:32 +00:00
|
|
|
ctx->submit_state.cq_flush = false;
|
2020-03-03 18:33:13 +00:00
|
|
|
}
|
|
|
|
|
2021-01-04 20:36:36 +00:00
|
|
|
static unsigned io_cqring_events(struct io_ring_ctx *ctx)
|
2019-08-20 17:03:11 +00:00
|
|
|
{
|
|
|
|
/* See comment at the top of this file */
|
|
|
|
smp_rmb();
|
2020-12-17 00:24:37 +00:00
|
|
|
return __io_cqring_events(ctx);
|
2019-08-20 17:03:11 +00:00
|
|
|
}
|
|
|
|
|
2019-01-09 15:59:42 +00:00
|
|
|
/*
|
|
|
|
* We can't just wait for polled events to come to us, we have to actively
|
|
|
|
* find and complete them.
|
|
|
|
*/
|
2021-10-04 19:02:54 +00:00
|
|
|
static __cold void io_iopoll_try_reap_events(struct io_ring_ctx *ctx)
|
2019-01-09 15:59:42 +00:00
|
|
|
{
|
|
|
|
if (!(ctx->flags & IORING_SETUP_IOPOLL))
|
|
|
|
return;
|
|
|
|
|
|
|
|
mutex_lock(&ctx->uring_lock);
|
2021-09-24 20:59:49 +00:00
|
|
|
while (!wq_list_empty(&ctx->iopoll_list)) {
|
2020-07-07 13:36:22 +00:00
|
|
|
/* let it sleep and repeat later if can't complete a request */
|
2021-09-24 20:59:43 +00:00
|
|
|
if (io_do_iopoll(ctx, true) == 0)
|
2020-07-07 13:36:22 +00:00
|
|
|
break;
|
2019-08-22 04:19:11 +00:00
|
|
|
/*
|
|
|
|
* Ensure we allow local-to-the-cpu processing to take place,
|
|
|
|
* in this case we need to ensure that we reap all events.
|
2020-07-06 14:59:31 +00:00
|
|
|
* Also let task_work, etc. progress by releasing the mutex
|
2019-08-22 04:19:11 +00:00
|
|
|
*/
|
2020-07-06 14:59:31 +00:00
|
|
|
if (need_resched()) {
|
|
|
|
mutex_unlock(&ctx->uring_lock);
|
|
|
|
cond_resched();
|
|
|
|
mutex_lock(&ctx->uring_lock);
|
|
|
|
}
|
2019-01-09 15:59:42 +00:00
|
|
|
}
|
|
|
|
mutex_unlock(&ctx->uring_lock);
|
|
|
|
}
|
|
|
|
|
2020-07-07 13:36:21 +00:00
|
|
|
static int io_iopoll_check(struct io_ring_ctx *ctx, long min)
|
2019-01-09 15:59:42 +00:00
|
|
|
{
|
2020-07-07 13:36:21 +00:00
|
|
|
unsigned int nr_events = 0;
|
2022-04-21 09:13:44 +00:00
|
|
|
unsigned long check_cq;
|
2019-08-19 18:15:59 +00:00
|
|
|
|
2024-04-10 01:26:54 +00:00
|
|
|
lockdep_assert_held(&ctx->uring_lock);
|
|
|
|
|
2022-09-08 15:56:52 +00:00
|
|
|
if (!io_allowed_run_tw(ctx))
|
|
|
|
return -EEXIST;
|
|
|
|
|
2022-06-15 16:33:55 +00:00
|
|
|
check_cq = READ_ONCE(ctx->check_cq);
|
|
|
|
if (unlikely(check_cq)) {
|
|
|
|
if (check_cq & BIT(IO_CHECK_CQ_OVERFLOW_BIT))
|
2024-04-10 01:26:55 +00:00
|
|
|
__io_cqring_overflow_flush(ctx, false);
|
2022-06-15 16:33:55 +00:00
|
|
|
/*
|
|
|
|
* Similarly do not spin if we have not informed the user of any
|
|
|
|
* dropped CQE.
|
|
|
|
*/
|
|
|
|
if (check_cq & BIT(IO_CHECK_CQ_DROPPED_BIT))
|
|
|
|
return -EBADR;
|
|
|
|
}
|
2021-04-13 01:58:46 +00:00
|
|
|
/*
|
|
|
|
* Don't enter poll loop if we already have events pending.
|
|
|
|
* If we do, we can potentially be spinning for commands that
|
|
|
|
* already triggered a CQE (eg in error).
|
|
|
|
*/
|
|
|
|
if (io_cqring_events(ctx))
|
2022-03-22 14:07:58 +00:00
|
|
|
return 0;
|
2022-04-21 09:13:44 +00:00
|
|
|
|
2019-01-09 15:59:42 +00:00
|
|
|
do {
|
2023-08-09 16:03:00 +00:00
|
|
|
int ret = 0;
|
|
|
|
|
2019-08-19 18:15:59 +00:00
|
|
|
/*
|
|
|
|
* If a submit got punted to a workqueue, we can have the
|
|
|
|
* application entering polling for a command before it gets
|
|
|
|
* issued. That app will hold the uring_lock for the duration
|
|
|
|
* of the poll right here, so we need to take a breather every
|
|
|
|
* now and then to ensure that the issue has a chance to add
|
|
|
|
* the poll to the issued list. Otherwise we can spin here
|
|
|
|
* forever, while the workqueue is stuck trying to acquire the
|
|
|
|
* very same mutex.
|
|
|
|
*/
|
2022-09-03 15:52:01 +00:00
|
|
|
if (wq_list_empty(&ctx->iopoll_list) ||
|
|
|
|
io_task_work_pending(ctx)) {
|
2021-07-08 12:37:06 +00:00
|
|
|
u32 tail = ctx->cached_cq_tail;
|
|
|
|
|
2024-01-31 17:50:08 +00:00
|
|
|
(void) io_run_local_work_locked(ctx, min);
|
2019-01-09 15:59:42 +00:00
|
|
|
|
2022-09-03 15:52:01 +00:00
|
|
|
if (task_work_pending(current) ||
|
|
|
|
wq_list_empty(&ctx->iopoll_list)) {
|
|
|
|
mutex_unlock(&ctx->uring_lock);
|
2022-09-08 15:56:54 +00:00
|
|
|
io_run_task_work();
|
2022-09-03 15:52:01 +00:00
|
|
|
mutex_lock(&ctx->uring_lock);
|
|
|
|
}
|
2021-07-08 12:37:06 +00:00
|
|
|
/* some requests don't go through iopoll_list */
|
|
|
|
if (tail != ctx->cached_cq_tail ||
|
2021-09-24 20:59:49 +00:00
|
|
|
wq_list_empty(&ctx->iopoll_list))
|
2021-04-13 01:58:45 +00:00
|
|
|
break;
|
2019-08-19 18:15:59 +00:00
|
|
|
}
|
2021-09-24 20:59:43 +00:00
|
|
|
ret = io_do_iopoll(ctx, !min);
|
2023-08-09 16:03:00 +00:00
|
|
|
if (unlikely(ret < 0))
|
|
|
|
return ret;
|
2023-08-09 15:20:21 +00:00
|
|
|
|
|
|
|
if (task_sigpending(current))
|
|
|
|
return -EINTR;
|
2023-08-09 16:03:00 +00:00
|
|
|
if (need_resched())
|
2021-09-24 20:59:43 +00:00
|
|
|
break;
|
2022-03-22 14:07:58 +00:00
|
|
|
|
2021-09-24 20:59:43 +00:00
|
|
|
nr_events += ret;
|
2023-08-09 16:03:00 +00:00
|
|
|
} while (nr_events < min);
|
2022-03-22 14:07:58 +00:00
|
|
|
|
2023-08-09 16:03:00 +00:00
|
|
|
return 0;
|
2019-01-09 15:59:42 +00:00
|
|
|
}
|
2022-06-16 09:21:59 +00:00
|
|
|
|
2023-03-27 15:38:15 +00:00
|
|
|
void io_req_task_complete(struct io_kiocb *req, struct io_tw_state *ts)
|
2021-08-10 21:15:25 +00:00
|
|
|
{
|
2024-03-18 22:00:30 +00:00
|
|
|
io_req_complete_defer(req);
|
2021-08-10 21:15:25 +00:00
|
|
|
}
|
|
|
|
|
2019-01-09 15:59:42 +00:00
|
|
|
/*
|
|
|
|
* After the iocb has been issued, it's safe to be found on the poll list.
|
|
|
|
* Adding the kiocb to the list AFTER submission ensures that we don't
|
2021-04-13 01:58:46 +00:00
|
|
|
* find it from an io_do_iopoll() thread before the issuer is done
|
2019-01-09 15:59:42 +00:00
|
|
|
* accessing the kiocb cookie.
|
|
|
|
*/
|
2021-10-15 16:09:12 +00:00
|
|
|
static void io_iopoll_req_issued(struct io_kiocb *req, unsigned int issue_flags)
|
2019-01-09 15:59:42 +00:00
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
2021-10-18 13:34:31 +00:00
|
|
|
const bool needs_lock = issue_flags & IO_URING_F_UNLOCKED;
|
2021-06-14 01:36:14 +00:00
|
|
|
|
|
|
|
/* workqueue context doesn't hold uring_lock, grab it now */
|
2021-10-18 13:34:31 +00:00
|
|
|
if (unlikely(needs_lock))
|
2021-06-14 01:36:14 +00:00
|
|
|
mutex_lock(&ctx->uring_lock);
|
2019-01-09 15:59:42 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Track whether we have multiple files in our lists. This will impact
|
|
|
|
* how we do polling eventually, not spinning if we're on potentially
|
|
|
|
* different devices.
|
|
|
|
*/
|
2021-09-24 20:59:49 +00:00
|
|
|
if (wq_list_empty(&ctx->iopoll_list)) {
|
2021-06-27 21:37:30 +00:00
|
|
|
ctx->poll_multi_queue = false;
|
|
|
|
} else if (!ctx->poll_multi_queue) {
|
2019-01-09 15:59:42 +00:00
|
|
|
struct io_kiocb *list_req;
|
|
|
|
|
2021-09-24 20:59:49 +00:00
|
|
|
list_req = container_of(ctx->iopoll_list.first, struct io_kiocb,
|
|
|
|
comp_list);
|
2021-10-12 11:12:14 +00:00
|
|
|
if (list_req->file != req->file)
|
2021-06-27 21:37:30 +00:00
|
|
|
ctx->poll_multi_queue = true;
|
2019-01-09 15:59:42 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* For fast devices, IO may have already completed. If it has, add
|
|
|
|
* it to the front so we find it first.
|
|
|
|
*/
|
io_uring: fix io_kiocb.flags modification race in IOPOLL mode
While testing io_uring on arm, we found that sometimes io_sq_thread()
keeps polling io requests even though there are no inflight io requests
in the block layer. After some investigation, we found a possible race
on io_kiocb.flags; see the racing code paths below:
1) in the end of io_write() or io_read()
req->flags &= ~REQ_F_NEED_CLEANUP;
kfree(iovec);
return ret;
2) in io_complete_rw_iopoll()
if (res != -EAGAIN)
req->flags |= REQ_F_IOPOLL_COMPLETED;
In IOPOLL mode, io requests may still be completed by interrupt, so the
code above is not safe: req->flags is modified concurrently, protected
by neither a lock nor atomic operations. Disassembling
io_complete_rw_iopoll() on arm shows:
req->flags |= REQ_F_IOPOLL_COMPLETED;
0xffff000008387b18 <+76>: ldr w0, [x19,#104]
0xffff000008387b1c <+80>: orr w0, w0, #0x1000
0xffff000008387b20 <+84>: str w0, [x19,#104]
The "req->flags |= REQ_F_IOPOLL_COMPLETED;" statement is a load, modify
and store sequence of separate instructions, which obviously is not atomic.
To fix this issue, add a new iopoll_completed field to io_kiocb to
indicate whether the io request is completed.
Signed-off-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-06-11 15:39:36 +00:00
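As a userspace illustration of the hazard, and not the kernel code, the sketch below has a submission-like thread and a completion-like thread touch the same flags word with C11 atomic read-modify-writes; swapping atomic_fetch_or()/atomic_fetch_and() for plain |= and &= reintroduces exactly the load/modify/store window shown in the disassembly above, which is why the patch gives the iopoll completion its own iopoll_completed field instead.
/* Userspace sketch only; build with: cc -o flagrace flagrace.c -pthread */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define F_NEED_CLEANUP  (1u << 0)       /* cleared by the submission side */
#define F_IOPOLL_DONE   (1u << 12)      /* set by the completion side */

static atomic_uint flags = F_NEED_CLEANUP;      /* stands in for req->flags */

static void *completion_side(void *arg)
{
        /* atomic OR: indivisible, cannot be lost to a concurrent update */
        atomic_fetch_or(&flags, F_IOPOLL_DONE);
        return arg;
}

int main(void)
{
        pthread_t t;

        pthread_create(&t, NULL, completion_side, NULL);
        /* submission side clears its bit while the completion may race */
        atomic_fetch_and(&flags, ~F_NEED_CLEANUP);
        pthread_join(t, NULL);

        printf("flags=0x%x, IOPOLL_DONE preserved: %s\n", atomic_load(&flags),
               (atomic_load(&flags) & F_IOPOLL_DONE) ? "yes" : "no");
        return 0;
}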
|
|
|
if (READ_ONCE(req->iopoll_completed))
|
2021-09-24 20:59:49 +00:00
|
|
|
wq_list_add_head(&req->comp_list, &ctx->iopoll_list);
|
2019-01-09 15:59:42 +00:00
|
|
|
else
|
2021-09-24 20:59:49 +00:00
|
|
|
wq_list_add_tail(&req->comp_list, &ctx->iopoll_list);
|
io_uring: fix poll_list race for SETUP_IOPOLL|SETUP_SQPOLL
After making ext4 support the iopoll method
(letting ext4_file_operations's iopoll method be iomap_dio_iopoll()),
we found that fio can easily hang in fio_ioring_getevents() with the fio
job below:
rm -f testfile; sync;
sudo fio -name=fiotest -filename=testfile -iodepth=128 -thread
-rw=write -ioengine=io_uring -hipri=1 -sqthread_poll=1 -direct=1
-bs=4k -size=10G -numjobs=8 -runtime=2000 -group_reporting
with IORING_SETUP_SQPOLL and IORING_SETUP_IOPOLL enabled.
There are two issues that result in this hang. One reason is that
when IORING_SETUP_SQPOLL and IORING_SETUP_IOPOLL are enabled, fio
does not use io_uring_enter to get completed events; it relies on the
kernel io_sq_thread to poll for completed events.
The other reason is a race: when io_submit_sqes() in io_sq_thread()
submits a batch of sqes, the variable 'inflight' records the number of
submitted reqs, and io_sq_thread then polls for reqs which have been
added to the poll_list. But note, if some previous reqs have been punted
to an io worker, those reqs won't be in the poll_list in time.
io_sq_thread() will only poll for a part of the previously submitted
reqs, then find the poll_list empty and reset the variable 'inflight'
to zero. If the app just waits for these deferred reqs and does not wake
up io_sq_thread again, the hang happens.
For an app that entirely relies on io_sq_thread to poll completed
requests, let io_iopoll_req_issued() wake up io_sq_thread properly when
adding a new element to the poll_list, and when io_sq_thread prepares to
sleep, check whether the poll_list is empty again; if it is not empty,
continue to poll.
Signed-off-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-02-25 14:12:08 +00:00
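For context, fio's -hipri=1 and -sqthread_poll=1 map onto IORING_SETUP_IOPOLL and IORING_SETUP_SQPOLL when the ring is created. The liburing sketch below is illustrative only: it assumes a scratch file you may overwrite on a device/filesystem that supports polled O_DIRECT IO, and it registers the file since SQPOLL required fixed files on older kernels. The point to notice is that the application never calls io_uring_enter() for completions and simply watches the CQ ring, which is the mode in which the hang described above was observed.
#define _GNU_SOURCE             /* for O_DIRECT */
#include <fcntl.h>
#include <liburing.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
        struct io_uring_params p = { 0 };
        struct io_uring ring;
        struct io_uring_sqe *sqe;
        struct io_uring_cqe *cqe;
        void *buf;
        int fd;

        if (argc != 2) {
                fprintf(stderr, "usage: %s <scratch file, overwritten>\n", argv[0]);
                return 1;
        }
        fd = open(argv[1], O_WRONLY | O_CREAT | O_DIRECT, 0644);
        if (fd < 0 || posix_memalign(&buf, 4096, 4096))
                return 1;

        p.flags = IORING_SETUP_SQPOLL | IORING_SETUP_IOPOLL;
        p.sq_thread_idle = 2000;        /* ms before the sq thread naps */
        if (io_uring_queue_init_params(8, &ring, &p) < 0)
                return 1;
        if (io_uring_register_files(&ring, &fd, 1) < 0)
                return 1;

        sqe = io_uring_get_sqe(&ring);
        io_uring_prep_write(sqe, 0 /* registered-file index */, buf, 4096, 0);
        sqe->flags |= IOSQE_FIXED_FILE;
        io_uring_submit(&ring);         /* wakes the sq thread if it is idle */

        /* no io_uring_enter() for completions: just watch the CQ ring */
        while (io_uring_peek_cqe(&ring, &cqe) == -EAGAIN)
                ;
        fprintf(stderr, "res=%d\n", cqe->res);
        io_uring_cqe_seen(&ring, cqe);
        io_uring_queue_exit(&ring);
        return 0;
}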
|
|
|
|
2021-10-18 13:34:31 +00:00
|
|
|
if (unlikely(needs_lock)) {
|
2021-06-14 01:36:14 +00:00
|
|
|
/*
|
|
|
|
* If IORING_SETUP_SQPOLL is enabled, sqes are either handled
|
|
|
|
* in sq thread task context or in io worker task context. If
|
|
|
|
* current task context is sq thread, we don't need to check
|
|
|
|
* whether should wake up sq thread.
|
|
|
|
*/
|
|
|
|
if ((ctx->flags & IORING_SETUP_SQPOLL) &&
|
|
|
|
wq_has_sleeper(&ctx->sq_data->wait))
|
|
|
|
wake_up(&ctx->sq_data->wait);
|
|
|
|
|
|
|
|
mutex_unlock(&ctx->uring_lock);
|
|
|
|
}
|
2019-01-09 15:59:42 +00:00
|
|
|
}
|
|
|
|
|
2024-01-29 03:05:47 +00:00
|
|
|
io_req_flags_t io_file_get_flags(struct file *file)
|
2021-10-16 23:07:10 +00:00
|
|
|
{
|
2024-01-29 03:05:47 +00:00
|
|
|
io_req_flags_t res = 0;
|
2020-04-28 19:15:06 +00:00
|
|
|
|
2023-06-20 11:32:29 +00:00
|
|
|
if (S_ISREG(file_inode(file)->i_mode))
|
2023-06-20 11:32:32 +00:00
|
|
|
res |= REQ_F_ISREG;
|
2023-06-20 11:32:28 +00:00
|
|
|
if ((file->f_flags & O_NONBLOCK) || (file->f_mode & FMODE_NOWAIT))
|
2023-06-20 11:32:32 +00:00
|
|
|
res |= REQ_F_SUPPORT_NOWAIT;
|
2021-10-16 23:07:10 +00:00
|
|
|
return res;
|
2019-01-07 17:46:33 +00:00
|
|
|
}
|
|
|
|
|
2022-05-25 11:59:19 +00:00
|
|
|
bool io_alloc_async_data(struct io_kiocb *req)
|
2020-03-27 07:36:52 +00:00
|
|
|
{
|
2024-03-19 02:48:38 +00:00
|
|
|
const struct io_issue_def *def = &io_issue_defs[req->opcode];
|
|
|
|
|
|
|
|
WARN_ON_ONCE(!def->async_size);
|
|
|
|
req->async_data = kmalloc(def->async_size, GFP_KERNEL);
|
2021-10-04 19:02:56 +00:00
|
|
|
if (req->async_data) {
|
|
|
|
req->flags |= REQ_F_ASYNC_DATA;
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
return true;
|
2020-03-27 07:36:52 +00:00
|
|
|
}
|
|
|
|
|
2020-07-13 20:37:15 +00:00
|
|
|
static u32 io_get_sequence(struct io_kiocb *req)
|
|
|
|
{
|
2021-06-17 17:14:05 +00:00
|
|
|
u32 seq = req->ctx->cached_sq_head;
|
2022-03-25 11:52:16 +00:00
|
|
|
struct io_kiocb *cur;
|
2020-07-13 20:37:15 +00:00
|
|
|
|
2021-06-17 17:14:05 +00:00
|
|
|
/* need original cached_sq_head, but it was increased for each req */
|
2022-03-25 11:52:16 +00:00
|
|
|
io_for_each_link(cur, req)
|
2021-06-17 17:14:05 +00:00
|
|
|
seq--;
|
|
|
|
return seq;
|
2020-07-13 20:37:15 +00:00
|
|
|
}
|
|
|
|
|
2021-10-04 19:02:54 +00:00
|
|
|
static __cold void io_drain_req(struct io_kiocb *req)
|
2022-11-23 11:33:37 +00:00
|
|
|
__must_hold(&ctx->uring_lock)
|
2019-04-07 03:51:27 +00:00
|
|
|
{
|
2019-11-08 15:09:12 +00:00
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
2020-07-13 20:37:14 +00:00
|
|
|
struct io_defer_entry *de;
|
2019-12-02 18:03:47 +00:00
|
|
|
int ret;
|
2021-10-01 17:07:01 +00:00
|
|
|
u32 seq = io_get_sequence(req);
|
2021-06-15 15:47:57 +00:00
|
|
|
|
2019-11-13 10:06:25 +00:00
|
|
|
/* Still need defer if there is a pending req in the defer list. */
|
2021-11-25 09:21:02 +00:00
|
|
|
spin_lock(&ctx->completion_lock);
|
2021-09-24 21:00:04 +00:00
|
|
|
if (!req_need_defer(req, seq) && list_empty_careful(&ctx->defer_list)) {
|
2021-11-25 09:21:02 +00:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2021-10-01 17:07:01 +00:00
|
|
|
queue:
|
2021-06-15 15:47:56 +00:00
|
|
|
ctx->drain_active = false;
|
2021-10-01 17:07:01 +00:00
|
|
|
io_req_task_queue(req);
|
|
|
|
return;
|
2021-06-15 15:47:56 +00:00
|
|
|
}
|
2021-11-25 09:21:02 +00:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2020-07-13 20:37:15 +00:00
|
|
|
|
2020-06-29 16:18:43 +00:00
|
|
|
io_prep_async_link(req);
|
2020-07-13 20:37:14 +00:00
|
|
|
de = kmalloc(sizeof(*de), GFP_KERNEL);
|
2021-06-14 22:37:30 +00:00
|
|
|
if (!de) {
|
2021-07-11 21:41:13 +00:00
|
|
|
ret = -ENOMEM;
|
2023-01-27 10:59:11 +00:00
|
|
|
io_req_defer_failed(req, ret);
|
|
|
|
return;
|
2021-06-14 22:37:30 +00:00
|
|
|
}
|
2019-12-04 18:08:05 +00:00
|
|
|
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_lock(&ctx->completion_lock);
|
2020-07-13 20:37:15 +00:00
|
|
|
if (!req_need_defer(req, seq) && list_empty(&ctx->defer_list)) {
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2020-07-13 20:37:14 +00:00
|
|
|
kfree(de);
|
2021-10-01 17:07:01 +00:00
|
|
|
goto queue;
|
2019-04-07 03:51:27 +00:00
|
|
|
}
|
|
|
|
|
2022-06-16 12:57:20 +00:00
|
|
|
trace_io_uring_defer(req);
|
2020-07-13 20:37:14 +00:00
|
|
|
de->req = req;
|
2020-07-13 20:37:15 +00:00
|
|
|
de->seq = seq;
|
2020-07-13 20:37:14 +00:00
|
|
|
list_add_tail(&de->list, &ctx->defer_list);
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2019-04-07 03:51:27 +00:00
|
|
|
}
|
|
|
|
|
2023-01-20 16:10:30 +00:00
|
|
|
static bool io_assign_file(struct io_kiocb *req, const struct io_issue_def *def,
|
|
|
|
unsigned int issue_flags)
|
io_uring: defer file assignment
If an application uses direct open or accept, it knows in advance what
direct descriptor value it will get as it picks it itself. This allows
combined requests such as:
sqe = io_uring_get_sqe(ring);
io_uring_prep_openat_direct(sqe, ..., file_slot);
sqe->flags |= IOSQE_IO_LINK | IOSQE_CQE_SKIP_SUCCESS;
sqe = io_uring_get_sqe(ring);
io_uring_prep_read(sqe, file_slot, buf, buf_size, 0);
sqe->flags |= IOSQE_FIXED_FILE;
io_uring_submit(ring);
where we prepare both a file open and read, and only get a completion
event for the read when both have completed successfully.
Currently links are fully prepared before the head is issued, but that
fails if the dependent link needs a file assigned that isn't valid until
the head has completed.
Conversely, if the same chain is performed but the fixed file slot is
already valid, then we would be unexpectedly returning data from the
old file slot rather than the newly opened one. Make sure we're
consistent here.
Allow deferral of file setup, which makes this documented case work.
Cc: stable@vger.kernel.org # v5.15+
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-03-29 16:10:08 +00:00
|
|
|
{
|
2023-01-20 16:10:30 +00:00
|
|
|
if (req->file || !def->needs_file)
|
2022-03-29 16:10:08 +00:00
|
|
|
return true;
|
|
|
|
|
|
|
|
if (req->flags & REQ_F_FIXED_FILE)
|
2022-04-12 14:09:43 +00:00
|
|
|
req->file = io_file_get_fixed(req, req->cqe.fd, issue_flags);
|
2022-03-29 16:10:08 +00:00
|
|
|
else
|
2022-04-12 14:09:43 +00:00
|
|
|
req->file = io_file_get_normal(req, req->cqe.fd);
|
2022-03-29 16:10:08 +00:00
|
|
|
|
2022-04-18 19:51:12 +00:00
|
|
|
return !!req->file;
|
2022-03-29 16:10:08 +00:00
|
|
|
}
|
|
|
|
|
2021-02-10 00:03:09 +00:00
|
|
|
static int io_issue_sqe(struct io_kiocb *req, unsigned int issue_flags)
|
2019-01-07 17:46:33 +00:00
|
|
|
{
|
2023-01-12 14:44:10 +00:00
|
|
|
const struct io_issue_def *def = &io_issue_defs[req->opcode];
|
2021-02-27 22:57:30 +00:00
|
|
|
const struct cred *creds = NULL;
|
2019-12-18 02:53:05 +00:00
|
|
|
int ret;
|
2019-01-07 17:46:33 +00:00
|
|
|
|
2023-01-20 16:10:30 +00:00
|
|
|
if (unlikely(!io_assign_file(req, def, issue_flags)))
|
2022-04-15 02:23:40 +00:00
|
|
|
return -EBADF;
|
|
|
|
|
2021-09-24 20:59:41 +00:00
|
|
|
if (unlikely((req->flags & REQ_F_CREDS) && req->creds != current_cred()))
|
2021-06-17 17:14:01 +00:00
|
|
|
creds = override_creds(req->creds);
|
2021-02-27 22:57:30 +00:00
|
|
|
|
2022-05-23 22:53:15 +00:00
|
|
|
if (!def->audit_skip)
|
2021-02-17 00:46:48 +00:00
|
|
|
audit_uring_entry(req->opcode);
|
|
|
|
|
2022-05-23 22:56:21 +00:00
|
|
|
ret = def->issue(req, issue_flags);
|
2019-01-07 17:46:33 +00:00
|
|
|
|
2022-05-23 22:53:15 +00:00
|
|
|
if (!def->audit_skip)
|
2021-02-17 00:46:48 +00:00
|
|
|
audit_uring_exit(!ret, ret);
|
|
|
|
|
2021-02-27 22:57:30 +00:00
|
|
|
if (creds)
|
|
|
|
revert_creds(creds);
|
2022-05-24 21:21:00 +00:00
|
|
|
|
2022-06-16 09:21:58 +00:00
|
|
|
if (ret == IOU_OK) {
|
|
|
|
if (issue_flags & IO_URING_F_COMPLETE_DEFER)
|
2022-06-20 00:26:00 +00:00
|
|
|
io_req_complete_defer(req);
|
2022-06-16 09:21:58 +00:00
|
|
|
else
|
2022-11-23 11:33:41 +00:00
|
|
|
io_req_complete_post(req, issue_flags);
|
2022-05-24 21:21:00 +00:00
|
|
|
|
2023-12-01 00:38:52 +00:00
|
|
|
return 0;
|
|
|
|
}
|
2019-01-09 15:59:42 +00:00
|
|
|
|
2023-12-01 00:38:53 +00:00
|
|
|
if (ret == IOU_ISSUE_SKIP_COMPLETE) {
|
|
|
|
ret = 0;
|
|
|
|
io_arm_ltimeout(req);
|
2019-01-09 15:59:42 +00:00
|
|
|
|
2023-12-01 00:38:53 +00:00
|
|
|
/* If the op doesn't have a file, we're not polling for it */
|
|
|
|
if ((req->ctx->flags & IORING_SETUP_IOPOLL) && def->iopoll_queue)
|
|
|
|
io_iopoll_req_issued(req, issue_flags);
|
|
|
|
}
|
|
|
|
return ret;
|
2019-01-07 17:46:33 +00:00
|
|
|
}
|
|
|
|
|
2023-03-27 15:38:15 +00:00
|
|
|
int io_poll_issue(struct io_kiocb *req, struct io_tw_state *ts)
|
2022-05-26 02:31:09 +00:00
|
|
|
{
|
2023-03-27 15:38:15 +00:00
|
|
|
io_tw_lock(req->ctx, ts);
|
2022-11-24 09:35:59 +00:00
|
|
|
return io_issue_sqe(req, IO_URING_F_NONBLOCK|IO_URING_F_MULTISHOT|
|
|
|
|
IO_URING_F_COMPLETE_DEFER);
|
2022-05-26 02:31:09 +00:00
|
|
|
}
|
|
|
|
|
2022-05-25 17:01:04 +00:00
|
|
|
struct io_wq_work *io_wq_free_work(struct io_wq_work *work)
|
2021-08-09 12:04:05 +00:00
|
|
|
{
|
|
|
|
struct io_kiocb *req = container_of(work, struct io_kiocb, work);
|
2023-06-23 11:23:21 +00:00
|
|
|
struct io_kiocb *nxt = NULL;
|
2021-08-09 12:04:05 +00:00
|
|
|
|
2023-06-23 11:23:21 +00:00
|
|
|
if (req_ref_put_and_test(req)) {
|
|
|
|
if (req->flags & IO_REQ_LINK_FLAGS)
|
|
|
|
nxt = io_req_find_next(req);
|
|
|
|
io_free_req(req);
|
|
|
|
}
|
|
|
|
return nxt ? &nxt->work : NULL;
|
2021-08-09 12:04:05 +00:00
|
|
|
}
|
|
|
|
|
2022-05-25 17:01:04 +00:00
|
|
|
void io_wq_submit_work(struct io_wq_work *work)
|
2019-01-07 17:46:33 +00:00
|
|
|
{
|
|
|
|
struct io_kiocb *req = container_of(work, struct io_kiocb, work);
|
2023-01-12 14:44:10 +00:00
|
|
|
const struct io_issue_def *def = &io_issue_defs[req->opcode];
|
2022-12-07 03:53:30 +00:00
|
|
|
unsigned int issue_flags = IO_URING_F_UNLOCKED | IO_URING_F_IOWQ;
|
2021-10-23 11:13:57 +00:00
|
|
|
bool needs_poll = false;
|
2022-03-29 16:10:08 +00:00
|
|
|
int ret = 0, err = -ECANCELED;
|
2019-01-07 17:46:33 +00:00
|
|
|
|
2022-11-10 12:21:03 +00:00
|
|
|
/* one will be dropped by ->io_wq_free_work() after returning to io-wq */
|
2021-08-15 09:40:18 +00:00
|
|
|
if (!(req->flags & REQ_F_REFCOUNT))
|
|
|
|
__io_req_set_refcount(req, 2);
|
|
|
|
else
|
|
|
|
req_ref_get(req);
|
io_uring: remove submission references
Requests are by default given two references, submission and
completion. Completion references are straightforward: they represent
request ownership and are put when a request is completed.
Submission references are a bit trickier. They're needed when
io_issue_sqe() has followed deep into the submission stack (e.g. into
fs, block, drivers, etc.): the request may have been given away for
concurrent execution or may already have completed, and the code
unwinding back to io_issue_sqe() may still be accessing pieces of our
request, e.g. the file or iov.
Now, we prevent such async/in-depth completions by pushing requests
through task_work. Punting to io-wq is also done through task_work,
apart from a couple of cases with a pretty well known context. So,
there are two cases:
1) io_issue_sqe() from the task context and protected by ->uring_lock.
Either requests return back to io_uring or are handed to task_work,
which won't be executed because we're currently controlling that task.
So we can be sure that requests stay alive the whole time and we don't
need submission references to pin them.
2) io_issue_sqe() from io-wq, which doesn't hold the mutex. The role of
the submission reference is played by the io-wq reference, which is put
by io_wq_submit_work(). Hence, it should be fine.
Considering that, we can carefully kill the submission reference.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/6b68f1c763229a590f2a27148aee77767a8d7750.1628705069.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-08-11 18:28:29 +00:00
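As a plain-C illustration of the get/put_and_test scheme this reasoning relies on (the names below are made up for the sketch, not the kernel helpers): whoever drops the last reference frees the object, so the submission reference can be dropped only because another reference, here the io-wq one put by io_wq_submit_work(), is guaranteed to outlive the code that still touches the request.
/* Generic refcount sketch; build with: cc -o refsketch refsketch.c */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct obj {
        atomic_int refs;
        int payload;
};

/* take an extra reference while another context may still touch obj */
static void obj_get(struct obj *o)
{
        atomic_fetch_add(&o->refs, 1);
}

/* returns true when the caller dropped the last reference */
static bool obj_put_and_test(struct obj *o)
{
        return atomic_fetch_sub(&o->refs, 1) == 1;
}

int main(void)
{
        struct obj *o = malloc(sizeof(*o));

        atomic_init(&o->refs, 1);       /* the "completion" reference */
        obj_get(o);                     /* e.g. the io-wq reference */

        if (!obj_put_and_test(o))       /* io-wq side drops its reference */
                printf("still referenced, not freed\n");
        if (obj_put_and_test(o)) {      /* completion drops the last one */
                printf("last reference dropped, freeing\n");
                free(o);
        }
        return 0;
}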
|
|
|
|
2022-04-15 21:08:25 +00:00
|
|
|
io_arm_ltimeout(req);
|
2022-03-29 16:10:08 +00:00
|
|
|
|
2021-08-23 12:30:44 +00:00
|
|
|
/* either cancelled or io-wq is dying, so don't touch tctx->iowq */
|
2024-06-13 19:28:27 +00:00
|
|
|
if (atomic_read(&work->flags) & IO_WQ_WORK_CANCEL) {
|
2022-04-12 14:24:43 +00:00
|
|
|
fail:
|
2022-03-29 16:10:08 +00:00
|
|
|
io_req_task_queue_fail(req, err);
|
2021-10-23 11:13:57 +00:00
|
|
|
return;
|
|
|
|
}
|
2023-01-20 16:10:30 +00:00
|
|
|
if (!io_assign_file(req, def, issue_flags)) {
|
2022-04-12 14:24:43 +00:00
|
|
|
err = -EBADF;
|
2024-06-13 19:28:27 +00:00
|
|
|
atomic_or(IO_WQ_WORK_CANCEL, &work->flags);
|
2022-04-12 14:24:43 +00:00
|
|
|
goto fail;
|
|
|
|
}
|
2019-01-19 05:56:34 +00:00
|
|
|
|
2024-03-08 13:55:57 +00:00
|
|
|
/*
|
|
|
|
* If DEFER_TASKRUN is set, it's only allowed to post CQEs from the
|
|
|
|
* submitter task context. Final request completions are handed to the
|
|
|
|
* right context; however, this is not the case for auxiliary CQEs,
|
|
|
|
* which is the main means of operation for multishot requests.
|
|
|
|
* Don't allow any multishot execution from io-wq. It's more restrictive
|
|
|
|
* than necessary and also cleaner.
|
|
|
|
*/
|
|
|
|
if (req->flags & REQ_F_APOLL_MULTISHOT) {
|
|
|
|
err = -EBADFD;
|
|
|
|
if (!io_file_can_poll(req))
|
|
|
|
goto fail;
|
2024-04-01 17:30:06 +00:00
|
|
|
if (req->file->f_flags & O_NONBLOCK ||
|
|
|
|
req->file->f_mode & FMODE_NOWAIT) {
|
|
|
|
err = -ECANCELED;
|
|
|
|
if (io_arm_poll_handler(req, issue_flags) != IO_APOLL_OK)
|
|
|
|
goto fail;
|
|
|
|
return;
|
|
|
|
} else {
|
|
|
|
req->flags &= ~REQ_F_APOLL_MULTISHOT;
|
|
|
|
}
|
2024-03-08 13:55:57 +00:00
|
|
|
}
|
|
|
|
|
2021-10-23 11:13:57 +00:00
|
|
|
if (req->flags & REQ_F_FORCE_ASYNC) {
|
2021-10-23 11:13:59 +00:00
|
|
|
bool opcode_poll = def->pollin || def->pollout;
|
|
|
|
|
2024-01-29 03:08:24 +00:00
|
|
|
if (opcode_poll && io_file_can_poll(req)) {
|
2021-10-23 11:13:59 +00:00
|
|
|
needs_poll = true;
|
2021-10-23 11:13:57 +00:00
|
|
|
issue_flags |= IO_URING_F_NONBLOCK;
|
2021-10-23 11:13:59 +00:00
|
|
|
}
|
2019-10-24 13:25:42 +00:00
|
|
|
}
|
2019-01-19 05:56:34 +00:00
|
|
|
|
2021-10-23 11:13:57 +00:00
|
|
|
do {
|
|
|
|
ret = io_issue_sqe(req, issue_flags);
|
|
|
|
if (ret != -EAGAIN)
|
|
|
|
break;
|
2023-07-20 19:16:53 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* If REQ_F_NOWAIT is set, then don't wait or retry with
|
|
|
|
* poll. -EAGAIN is final for that case.
|
|
|
|
*/
|
|
|
|
if (req->flags & REQ_F_NOWAIT)
|
|
|
|
break;
|
|
|
|
|
2021-10-23 11:13:57 +00:00
|
|
|
/*
|
|
|
|
* We can get EAGAIN for iopolled IO even though we're
|
|
|
|
* forcing a sync submission from here, since we can't
|
|
|
|
* wait for request slots on the block side.
|
|
|
|
*/
|
|
|
|
if (!needs_poll) {
|
2022-05-13 10:24:56 +00:00
|
|
|
if (!(req->ctx->flags & IORING_SETUP_IOPOLL))
|
|
|
|
break;
|
2023-09-07 12:50:07 +00:00
|
|
|
if (io_wq_worker_stopped())
|
|
|
|
break;
|
2021-10-23 11:13:57 +00:00
|
|
|
cond_resched();
|
|
|
|
continue;
|
io_uring: implement async hybrid mode for pollable requests
The current logic for requests with IOSQE_ASYNC is to first queue them
to an io-worker, then execute them in a synchronous way. For unbound
work like pollable requests (e.g. reads/writes on a socket fd), the
io-worker may get stuck there waiting for events for a long time, and
thus other work items wait in the list for a long time too.
Let's introduce a new way for unbound work (currently pollable
requests): a request is still first queued to an io-worker, but is then
executed with a nonblocking try rather than synchronously. If that
fails, the worker arms poll for the request and can move on to handle
other work.
The detailed flow for this kind of request is:
step1: original context:
       queue it to io-worker
step2: io-worker context:
       nonblock try (the old logic is a synchronous try here)
               |
               |--fail--> arm poll
                            |
                            |--(fail/ready)--> synchronous issue
                            |
                            |--(succeed)--> worker finishes its job, tw
                                            takes over the req
This works much better than the old IOSQE_ASYNC logic in cases where
unbound max_worker is relatively small. In that case, the number of
io-workers easily climbs to max_worker, new workers cannot be created,
and the running workers are stuck handling old work in IOSQE_ASYNC mode.
On my 64-core machine, with unbound max_worker set to 20, running an
echo-server (arguments: register_file, connection number 1000, message
size 12 bytes) turns out:
original IOSQE_ASYNC: 76664.151 tps
after this patch: 166934.985 tps
Suggested-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Hao Xu <haoxu@linux.alibaba.com>
Link: https://lore.kernel.org/r/20211018133445.103438-1-haoxu@linux.alibaba.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-10-18 13:34:45 +00:00
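For completeness, a minimal userspace sketch of the request shape this targets; it assumes an already-initialized ring and a connected socket fd (names illustrative):

	struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);

	io_uring_prep_recv(sqe, sockfd, buf, sizeof(buf), 0);
	/* IOSQE_ASYNC punts the request straight to io-wq; with this change
	 * the worker does a nonblocking try and arms poll instead of blocking */
	sqe->flags |= IOSQE_ASYNC;
	io_uring_submit(&ring);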
|
|
|
}
|
|
|
|
|
2022-03-15 16:54:08 +00:00
|
|
|
if (io_arm_poll_handler(req, issue_flags) == IO_APOLL_OK)
|
2021-10-23 11:13:57 +00:00
|
|
|
return;
|
|
|
|
/* aborted or ready, in either case retry blocking */
|
|
|
|
needs_poll = false;
|
|
|
|
issue_flags &= ~IO_URING_F_NONBLOCK;
|
|
|
|
} while (1);
|
2019-01-19 05:56:34 +00:00
|
|
|
|
2021-02-18 22:32:52 +00:00
|
|
|
/* avoid locking problems by failing it from a clean context */
|
2024-07-24 11:16:21 +00:00
|
|
|
if (ret)
|
2021-02-18 22:32:52 +00:00
|
|
|
io_req_task_queue_fail(req, ret);
|
2019-01-07 17:46:33 +00:00
|
|
|
}
|
|
|
|
|
2022-05-25 03:19:47 +00:00
|
|
|
inline struct file *io_file_get_fixed(struct io_kiocb *req, int fd,
|
|
|
|
unsigned int issue_flags)
|
2019-03-13 18:39:28 +00:00
|
|
|
{
|
2022-04-04 23:18:43 +00:00
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
2023-06-20 11:32:35 +00:00
|
|
|
struct io_fixed_file *slot;
|
2022-04-04 23:18:43 +00:00
|
|
|
struct file *file = NULL;
|
2019-03-13 18:39:28 +00:00
|
|
|
|
2022-04-18 19:51:11 +00:00
|
|
|
io_ring_submit_lock(ctx, issue_flags);
|
2022-04-04 23:18:43 +00:00
|
|
|
|
2021-08-09 12:04:02 +00:00
|
|
|
if (unlikely((unsigned int)fd >= ctx->nr_user_files))
|
2022-04-04 23:18:43 +00:00
|
|
|
goto out;
|
2021-08-09 12:04:02 +00:00
|
|
|
fd = array_index_nospec(fd, ctx->nr_user_files);
|
2023-06-20 11:32:35 +00:00
|
|
|
slot = io_fixed_file_slot(&ctx->file_table, fd);
|
2024-01-11 20:34:33 +00:00
|
|
|
if (!req->rsrc_node)
|
|
|
|
__io_req_set_rsrc_node(req, ctx);
|
2023-06-20 11:32:35 +00:00
|
|
|
req->flags |= io_slot_flags(slot);
|
2024-01-11 20:34:33 +00:00
|
|
|
file = io_slot_file(slot);
|
2022-04-04 23:18:43 +00:00
|
|
|
out:
|
2022-04-18 19:51:11 +00:00
|
|
|
io_ring_submit_unlock(ctx, issue_flags);
|
2021-08-09 12:04:02 +00:00
|
|
|
return file;
|
|
|
|
}
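The userspace counterpart of this lookup is a registered file table combined with IOSQE_FIXED_FILE; a minimal sketch, assuming an initialized ring and two already-open descriptors (names illustrative):

	int fds[2] = { fd0, fd1 };
	struct io_uring_sqe *sqe;

	/* fds[0] becomes slot 0, fds[1] becomes slot 1 */
	io_uring_register_files(&ring, fds, 2);

	sqe = io_uring_get_sqe(&ring);
	/* with IOSQE_FIXED_FILE, the fd argument is the slot index, not a real fd */
	io_uring_prep_read(sqe, 1, buf, sizeof(buf), 0);
	sqe->flags |= IOSQE_FIXED_FILE;
	io_uring_submit(&ring);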
|
2021-03-12 15:27:05 +00:00
|
|
|
|
2022-05-25 03:19:47 +00:00
|
|
|
struct file *io_file_get_normal(struct io_kiocb *req, int fd)
|
2021-08-09 12:04:02 +00:00
|
|
|
{
|
io_uring: remove file batch-get optimisation
For requests with non-fixed files, instead of grabbing just one
reference, we grab references for the number of requests left, so that
following requests using the same file can take one without atomics.
However, it's not all win. If there is one request in the middle not
using a file, or using a fixed file, we'll need to put back the unused
references. Even worse, if an application submits requests dealing with
different files, it will do a put for each new request, doubling the
number of atomics needed. And even when unused, it still takes some
cycles in the submission path.
If a file is used many times, it rather makes sense to pre-register it;
if not, we may fall into the described pitfall. So this optimisation is
a matter of use case. Go with the simplest way code-wise and remove it.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-08-10 13:52:47 +00:00
|
|
|
struct file *file = fget(fd);
|
2021-08-09 12:04:02 +00:00
|
|
|
|
2022-06-16 12:57:20 +00:00
|
|
|
trace_io_uring_file_get(req, fd);
|
2019-03-13 18:39:28 +00:00
|
|
|
|
2021-08-09 12:04:02 +00:00
|
|
|
/* we don't allow fixed io_uring files */
|
2022-05-25 16:28:04 +00:00
|
|
|
if (file && io_is_uring_fops(file))
|
2022-06-02 05:57:02 +00:00
|
|
|
io_req_track_inflight(req);
|
2020-10-10 17:34:08 +00:00
|
|
|
return file;
|
2019-03-13 18:39:28 +00:00
|
|
|
}
|
|
|
|
|
2022-04-15 21:08:28 +00:00
|
|
|
static void io_queue_async(struct io_kiocb *req, int ret)
|
2021-09-24 20:59:59 +00:00
|
|
|
__must_hold(&req->ctx->uring_lock)
|
|
|
|
{
|
2022-04-15 21:08:28 +00:00
|
|
|
struct io_kiocb *linked_timeout;
|
|
|
|
|
|
|
|
if (ret != -EAGAIN || (req->flags & REQ_F_NOWAIT)) {
|
2022-11-24 09:35:53 +00:00
|
|
|
io_req_defer_failed(req, ret);
|
2022-04-15 21:08:28 +00:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
linked_timeout = io_prep_linked_timeout(req);
|
2021-09-24 20:59:59 +00:00
|
|
|
|
2022-03-15 16:54:08 +00:00
|
|
|
switch (io_arm_poll_handler(req, 0)) {
|
2021-09-24 20:59:59 +00:00
|
|
|
case IO_APOLL_READY:
|
2022-09-06 16:11:17 +00:00
|
|
|
io_kbuf_recycle(req, 0);
|
2021-09-24 20:59:59 +00:00
|
|
|
io_req_task_queue(req);
|
|
|
|
break;
|
|
|
|
case IO_APOLL_ABORTED:
|
2022-06-17 12:24:26 +00:00
|
|
|
io_kbuf_recycle(req, 0);
|
2024-03-18 22:00:28 +00:00
|
|
|
io_queue_iowq(req);
|
2021-09-24 20:59:59 +00:00
|
|
|
break;
|
2022-03-09 18:27:52 +00:00
|
|
|
case IO_APOLL_OK:
|
|
|
|
break;
|
2021-09-24 20:59:59 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
if (linked_timeout)
|
|
|
|
io_queue_linked_timeout(linked_timeout);
|
|
|
|
}
|
|
|
|
|
2022-04-15 21:08:26 +00:00
|
|
|
static inline void io_queue_sqe(struct io_kiocb *req)
|
2021-08-09 12:04:10 +00:00
|
|
|
__must_hold(&req->ctx->uring_lock)
|
2019-01-07 17:46:33 +00:00
|
|
|
{
|
2019-03-12 16:18:47 +00:00
|
|
|
int ret;
|
2019-01-07 17:46:33 +00:00
|
|
|
|
2021-02-10 00:03:22 +00:00
|
|
|
ret = io_issue_sqe(req, IO_URING_F_NONBLOCK|IO_URING_F_COMPLETE_DEFER);
|
2020-02-23 06:22:19 +00:00
|
|
|
|
2019-10-17 15:20:46 +00:00
|
|
|
/*
|
|
|
|
* We async punt it if the file wasn't marked NOWAIT, or if the file
|
|
|
|
* doesn't support non-blocking read/write attempts
|
|
|
|
*/
|
2023-12-01 00:38:53 +00:00
|
|
|
if (unlikely(ret))
|
2022-04-15 21:08:28 +00:00
|
|
|
io_queue_async(req, ret);
|
2019-01-07 17:46:33 +00:00
|
|
|
}
|
|
|
|
|
2021-09-24 20:59:58 +00:00
|
|
|
static void io_queue_sqe_fallback(struct io_kiocb *req)
|
2021-08-09 12:04:10 +00:00
|
|
|
__must_hold(&req->ctx->uring_lock)
|
2019-09-09 12:50:40 +00:00
|
|
|
{
|
2022-04-15 21:08:32 +00:00
|
|
|
if (unlikely(req->flags & REQ_F_FAIL)) {
|
|
|
|
/*
|
|
|
|
* We don't submit; fail them all. For that, replace hardlinks
|
|
|
|
* with normal links. Extra REQ_F_LINK is tolerated.
|
|
|
|
*/
|
|
|
|
req->flags &= ~REQ_F_HARDLINK;
|
|
|
|
req->flags |= REQ_F_LINK;
|
2022-11-24 09:35:53 +00:00
|
|
|
io_req_defer_failed(req, req->cqe.res);
|
2021-06-14 22:37:30 +00:00
|
|
|
} else {
|
2023-01-27 10:59:11 +00:00
|
|
|
if (unlikely(req->ctx->drain_active))
|
|
|
|
io_drain_req(req);
|
2021-06-14 22:37:30 +00:00
|
|
|
else
|
2024-03-18 22:00:28 +00:00
|
|
|
io_queue_iowq(req);
|
2019-12-17 15:04:44 +00:00
|
|
|
}
|
2019-09-09 12:50:40 +00:00
|
|
|
}
|
|
|
|
|
2021-02-18 18:29:40 +00:00
|
|
|
/*
|
|
|
|
* Check SQE restrictions (opcode and flags).
|
|
|
|
*
|
|
|
|
* Returns 'true' if SQE is allowed, 'false' otherwise.
|
|
|
|
*/
|
|
|
|
static inline bool io_check_restriction(struct io_ring_ctx *ctx,
|
|
|
|
struct io_kiocb *req,
|
|
|
|
unsigned int sqe_flags)
|
2019-09-09 12:50:40 +00:00
|
|
|
{
|
2021-02-18 18:29:40 +00:00
|
|
|
if (!test_bit(req->opcode, ctx->restrictions.sqe_op))
|
|
|
|
return false;
|
|
|
|
|
|
|
|
if ((sqe_flags & ctx->restrictions.sqe_flags_required) !=
|
|
|
|
ctx->restrictions.sqe_flags_required)
|
|
|
|
return false;
|
|
|
|
|
|
|
|
if (sqe_flags & ~(ctx->restrictions.sqe_flags_allowed |
|
|
|
|
ctx->restrictions.sqe_flags_required))
|
|
|
|
return false;
|
|
|
|
|
|
|
|
return true;
|
2019-09-09 12:50:40 +00:00
|
|
|
}
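Restrictions are registered from userspace while the ring is still disabled; a hedged sketch using liburing (the opcode and flag choices are illustrative):

	struct io_uring ring;
	struct io_uring_restriction res[2] = {};

	io_uring_queue_init(8, &ring, IORING_SETUP_R_DISABLED);

	/* only allow IORING_OP_READ, and only the FIXED_FILE sqe flag */
	res[0].opcode = IORING_RESTRICTION_SQE_OP;
	res[0].sqe_op = IORING_OP_READ;
	res[1].opcode = IORING_RESTRICTION_SQE_FLAGS_ALLOWED;
	res[1].sqe_flags = IOSQE_FIXED_FILE;

	io_uring_register_restrictions(&ring, res, 2);
	io_uring_enable_rings(&ring);
	/* from here on, SQEs violating the restrictions fail with -EACCES */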
|
|
|
|
|
2021-10-01 17:07:00 +00:00
|
|
|
static void io_init_req_drain(struct io_kiocb *req)
|
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
|
|
|
struct io_kiocb *head = ctx->submit_state.link.head;
|
|
|
|
|
|
|
|
ctx->drain_active = true;
|
|
|
|
if (head) {
|
|
|
|
/*
|
|
|
|
* If we need to drain a request in the middle of a link, drain
|
|
|
|
* the head request and the next request/link after the current
|
|
|
|
* link. Considering sequential execution of links,
|
2021-11-25 09:21:03 +00:00
|
|
|
* REQ_F_IO_DRAIN will be maintained for every request of our
|
2021-10-01 17:07:00 +00:00
|
|
|
* link.
|
|
|
|
*/
|
2021-11-25 09:21:03 +00:00
|
|
|
head->flags |= REQ_F_IO_DRAIN | REQ_F_FORCE_ASYNC;
|
2021-10-01 17:07:00 +00:00
|
|
|
ctx->drain_next = true;
|
|
|
|
}
|
|
|
|
}
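Seen from the application, IOSQE_IO_DRAIN is a barrier: the drained request is not started until every previously submitted request has completed. A small sketch, assuming an initialized ring and an open fd:

	struct io_uring_sqe *sqe;

	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_write(sqe, fd, buf, len, 0);

	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_fsync(sqe, fd, 0);
	/* hold the fsync back until all prior requests have completed */
	sqe->flags |= IOSQE_IO_DRAIN;

	io_uring_submit(&ring);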
|
|
|
|
|
2024-03-16 15:51:40 +00:00
|
|
|
static __cold int io_init_fail_req(struct io_kiocb *req, int err)
|
|
|
|
{
|
|
|
|
/* ensure per-opcode data is cleared if we fail before prep */
|
|
|
|
memset(&req->cmd.data, 0, sizeof(req->cmd.data));
|
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
2021-02-18 18:29:40 +00:00
|
|
|
static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
|
|
|
|
const struct io_uring_sqe *sqe)
|
2021-08-09 12:04:10 +00:00
|
|
|
__must_hold(&ctx->uring_lock)
|
2021-02-18 18:29:40 +00:00
|
|
|
{
|
2023-01-12 14:44:10 +00:00
|
|
|
const struct io_issue_def *def;
|
2021-02-18 18:29:40 +00:00
|
|
|
unsigned int sqe_flags;
|
2021-10-01 17:07:02 +00:00
|
|
|
int personality;
|
2021-10-06 15:06:49 +00:00
|
|
|
u8 opcode;
|
2021-02-18 18:29:40 +00:00
|
|
|
|
2021-08-09 12:04:08 +00:00
|
|
|
/* req is partially pre-initialised, see io_preinit_req() */
|
2021-10-06 15:06:49 +00:00
|
|
|
req->opcode = opcode = READ_ONCE(sqe->opcode);
|
2021-02-18 18:29:40 +00:00
|
|
|
/* same numerical values as the corresponding REQ_F_*, safe to copy */
|
2024-01-29 03:05:47 +00:00
|
|
|
sqe_flags = READ_ONCE(sqe->flags);
|
|
|
|
req->flags = (io_req_flags_t) sqe_flags;
|
2022-04-12 14:09:43 +00:00
|
|
|
req->cqe.user_data = READ_ONCE(sqe->user_data);
|
2021-02-18 18:29:40 +00:00
|
|
|
req->file = NULL;
|
2022-04-18 19:51:13 +00:00
|
|
|
req->rsrc_node = NULL;
|
2021-02-18 18:29:40 +00:00
|
|
|
req->task = current;
|
2024-06-14 00:04:29 +00:00
|
|
|
req->cancel_seq_set = false;
|
2021-02-18 18:29:40 +00:00
|
|
|
|
2021-10-06 15:06:49 +00:00
|
|
|
if (unlikely(opcode >= IORING_OP_LAST)) {
|
|
|
|
req->opcode = 0;
|
2024-03-16 15:51:40 +00:00
|
|
|
return io_init_fail_req(req, -EINVAL);
|
2021-10-06 15:06:49 +00:00
|
|
|
}
|
2023-01-12 14:44:10 +00:00
|
|
|
def = &io_issue_defs[opcode];
|
2021-09-15 11:03:38 +00:00
|
|
|
if (unlikely(sqe_flags & ~SQE_COMMON_FLAGS)) {
|
|
|
|
/* enforce forwards compatibility on users */
|
|
|
|
if (sqe_flags & ~SQE_VALID_FLAGS)
|
2024-03-16 15:51:40 +00:00
|
|
|
return io_init_fail_req(req, -EINVAL);
|
2022-04-29 01:09:43 +00:00
|
|
|
if (sqe_flags & IOSQE_BUFFER_SELECT) {
|
2022-05-23 22:53:15 +00:00
|
|
|
if (!def->buffer_select)
|
2024-03-16 15:51:40 +00:00
|
|
|
return io_init_fail_req(req, -EOPNOTSUPP);
|
2022-04-29 01:09:43 +00:00
|
|
|
req->buf_index = READ_ONCE(sqe->buf_group);
|
|
|
|
}
|
2021-11-10 15:49:34 +00:00
|
|
|
if (sqe_flags & IOSQE_CQE_SKIP_SUCCESS)
|
|
|
|
ctx->drain_disabled = true;
|
|
|
|
if (sqe_flags & IOSQE_IO_DRAIN) {
|
|
|
|
if (ctx->drain_disabled)
|
2024-03-16 15:51:40 +00:00
|
|
|
return io_init_fail_req(req, -EOPNOTSUPP);
|
2021-10-01 17:07:00 +00:00
|
|
|
io_init_req_drain(req);
|
2021-11-10 15:49:34 +00:00
|
|
|
}
|
2021-09-24 20:59:57 +00:00
|
|
|
}
|
|
|
|
if (unlikely(ctx->restricted || ctx->drain_active || ctx->drain_next)) {
|
|
|
|
if (ctx->restricted && !io_check_restriction(ctx, req, sqe_flags))
|
2024-03-16 15:51:40 +00:00
|
|
|
return io_init_fail_req(req, -EACCES);
|
2021-09-24 20:59:57 +00:00
|
|
|
/* knock it to the slow queue path, will be drained there */
|
|
|
|
if (ctx->drain_active)
|
|
|
|
req->flags |= REQ_F_FORCE_ASYNC;
|
|
|
|
/* if there is no link, we're at "next" request and need to drain */
|
|
|
|
if (unlikely(ctx->drain_next) && !ctx->submit_state.link.head) {
|
|
|
|
ctx->drain_next = false;
|
|
|
|
ctx->drain_active = true;
|
2021-11-25 09:21:03 +00:00
|
|
|
req->flags |= REQ_F_IO_DRAIN | REQ_F_FORCE_ASYNC;
|
2021-09-24 20:59:57 +00:00
|
|
|
}
|
2021-09-15 11:03:38 +00:00
|
|
|
}
|
2021-02-18 18:29:40 +00:00
|
|
|
|
2022-05-23 22:53:15 +00:00
|
|
|
if (!def->ioprio && sqe->ioprio)
|
2024-03-16 15:51:40 +00:00
|
|
|
return io_init_fail_req(req, -EINVAL);
|
2022-05-23 22:53:15 +00:00
|
|
|
if (!def->iopoll && (ctx->flags & IORING_SETUP_IOPOLL))
|
2024-03-16 15:51:40 +00:00
|
|
|
return io_init_fail_req(req, -EINVAL);
|
2022-04-26 17:34:56 +00:00
|
|
|
|
2022-05-23 22:53:15 +00:00
|
|
|
if (def->needs_file) {
|
2021-10-06 15:06:46 +00:00
|
|
|
struct io_submit_state *state = &ctx->submit_state;
|
|
|
|
|
2022-04-12 14:09:43 +00:00
|
|
|
req->cqe.fd = READ_ONCE(sqe->fd);
|
2022-03-29 16:10:08 +00:00
|
|
|
|
2021-10-06 15:06:46 +00:00
|
|
|
/*
|
|
|
|
* Plug now if we have more than 2 IO left after this, and the
|
|
|
|
* target is potentially a read/write to block based storage.
|
|
|
|
*/
|
2022-05-23 22:53:15 +00:00
|
|
|
if (state->need_plug && def->plug) {
|
2021-10-06 15:06:46 +00:00
|
|
|
state->plug_started = true;
|
|
|
|
state->need_plug = false;
|
2021-10-06 17:01:42 +00:00
|
|
|
blk_start_plug_nr_ios(&state->plug, state->submit_nr);
|
2021-10-06 15:06:46 +00:00
|
|
|
}
|
2021-02-18 18:29:40 +00:00
|
|
|
}
|
2020-10-27 23:25:35 +00:00
|
|
|
|
2021-03-06 16:22:27 +00:00
|
|
|
personality = READ_ONCE(sqe->personality);
|
|
|
|
if (personality) {
|
2021-11-02 04:06:18 +00:00
|
|
|
int ret;
|
|
|
|
|
2021-06-17 17:14:01 +00:00
|
|
|
req->creds = xa_load(&ctx->personalities, personality);
|
|
|
|
if (!req->creds)
|
2024-03-16 15:51:40 +00:00
|
|
|
return io_init_fail_req(req, -EINVAL);
|
2021-06-17 17:14:01 +00:00
|
|
|
get_cred(req->creds);
|
lsm,io_uring: add LSM hooks to io_uring
A full explanation of io_uring is beyond the scope of this commit
description, but in summary it is an asynchronous I/O mechanism
which allows for I/O requests and the resulting data to be queued
in memory mapped "rings" which are shared between the kernel and
userspace. Optionally, io_uring offers the ability for applications
to spawn kernel threads to dequeue I/O requests from the ring and
submit the requests in the kernel, helping to minimize the syscall
overhead. Rings are accessed in userspace by memory mapping a file
descriptor provided by the io_uring_setup(2), and can be shared
between applications as one might do with any open file descriptor.
Finally, process credentials can be registered with a given ring
and any process with access to that ring can submit I/O requests
using any of the registered credentials.
While the io_uring functionality is widely recognized as offering a
vastly improved, and high performing asynchronous I/O mechanism, its
ability to allow processes to submit I/O requests with credentials
other than its own presents a challenge to LSMs. When a process
creates a new io_uring ring, the ring's credentials are inherited
from the calling process; if this ring is shared with another
process operating with different credentials there is the potential
to bypass the LSM's security policy. Similarly, registering
credentials with a given ring allows any process with access to that
ring to submit I/O requests with those credentials.
In an effort to allow LSMs to apply security policy to io_uring I/O
operations, this patch adds two new LSM hooks. These hooks, in
conjunction with the LSM anonymous inode support previously
submitted, allow an LSM to apply access control policy to the
sharing of io_uring rings as well as any io_uring credential changes
requested by a process.
The new LSM hooks are described below:
* int security_uring_override_creds(cred)
Controls if the current task, executing an io_uring operation,
is allowed to override its credentials with @cred. In cases
where the current task is a user application, the current
credentials will be those of the user application. In cases
where the current task is a kernel thread servicing io_uring
requests the current credentials will be those of the io_uring
ring (inherited from the process that created the ring).
* int security_uring_sqpoll(void)
Controls if the current task is allowed to create an io_uring
polling thread (IORING_SETUP_SQPOLL). Without a SQPOLL thread
in the kernel processes must submit I/O requests via
io_uring_enter(2) which allows us to compare any requested
credential changes against the application making the request.
With a SQPOLL thread, we can no longer compare requested
credential changes against the application making the request,
the comparison is made against the ring's credentials.
Signed-off-by: Paul Moore <paul@paul-moore.com>
2021-02-02 00:56:49 +00:00
|
|
|
ret = security_uring_override_creds(req->creds);
|
|
|
|
if (ret) {
|
|
|
|
put_cred(req->creds);
|
2024-03-16 15:51:40 +00:00
|
|
|
return io_init_fail_req(req, ret);
|
2021-02-02 00:56:49 +00:00
|
|
|
}
|
2021-06-17 17:14:02 +00:00
|
|
|
req->flags |= REQ_F_CREDS;
|
2021-03-06 16:22:27 +00:00
|
|
|
}
|
2021-02-18 18:29:40 +00:00
|
|
|
|
2022-05-23 22:56:21 +00:00
|
|
|
return def->prep(req, sqe);
|
2021-02-18 18:29:40 +00:00
|
|
|
}
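The personality lookup above pairs with credentials registered from userspace; a hedged liburing sketch, where the returned id (a negative value indicates an error) is what the application puts into sqe->personality:

	/* register the calling task's current credentials with the ring */
	int pers_id = io_uring_register_personality(&ring);

	struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
	io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);
	/* issue this request with the registered credentials rather than the
	 * submitter's current ones */
	sqe->personality = pers_id;
	io_uring_submit(&ring);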
|
|
|
|
|
2022-04-15 21:08:30 +00:00
|
|
|
static __cold int io_submit_fail_init(const struct io_uring_sqe *sqe,
|
|
|
|
struct io_kiocb *req, int ret)
|
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = req->ctx;
|
|
|
|
struct io_submit_link *link = &ctx->submit_state.link;
|
|
|
|
struct io_kiocb *head = link->head;
|
|
|
|
|
2022-06-16 12:57:20 +00:00
|
|
|
trace_io_uring_req_failed(sqe, req, ret);
|
2022-04-15 21:08:30 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Avoid breaking links in the middle as it renders links with SQPOLL
|
|
|
|
* unusable. Instead of failing eagerly, continue assembling the link if
|
|
|
|
* applicable and mark the head with REQ_F_FAIL. The link flushing code
|
|
|
|
* should find the flag and handle the rest.
|
|
|
|
*/
|
|
|
|
req_fail_link_node(req, ret);
|
|
|
|
if (head && !(head->flags & REQ_F_FAIL))
|
|
|
|
req_fail_link_node(head, -ECANCELED);
|
|
|
|
|
|
|
|
if (!(req->flags & IO_REQ_LINK_FLAGS)) {
|
|
|
|
if (head) {
|
|
|
|
link->last->link = req;
|
|
|
|
link->head = NULL;
|
|
|
|
req = head;
|
|
|
|
}
|
|
|
|
io_queue_sqe_fallback(req);
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (head)
|
|
|
|
link->last->link = req;
|
|
|
|
else
|
|
|
|
link->head = req;
|
|
|
|
link->last = req;
|
|
|
|
return 0;
|
|
|
|
}
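The userspace-visible effect is that a failed request does not silently drop the rest of its link: the dependents complete with -ECANCELED. A small sketch, assuming an initialized ring (bad_fd is an intentionally invalid descriptor):

	struct io_uring_sqe *sqe;

	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_read(sqe, bad_fd, buf, sizeof(buf), 0);	/* head, will fail */
	sqe->flags |= IOSQE_IO_LINK;

	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_write(sqe, fd, buf, sizeof(buf), 0);	/* dependent request */

	io_uring_submit(&ring);
	/* the first CQE carries the read's error, the second completes with -ECANCELED */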
|
|
|
|
|
|
|
|
static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,
|
2021-02-18 18:29:42 +00:00
|
|
|
const struct io_uring_sqe *sqe)
|
2021-08-09 12:04:10 +00:00
|
|
|
__must_hold(&ctx->uring_lock)
|
2019-05-10 22:07:28 +00:00
|
|
|
{
|
2021-02-18 18:29:42 +00:00
|
|
|
struct io_submit_link *link = &ctx->submit_state.link;
|
2020-04-11 23:05:05 +00:00
|
|
|
int ret;
|
2019-05-10 22:07:28 +00:00
|
|
|
|
2021-02-18 18:29:41 +00:00
|
|
|
ret = io_init_req(ctx, req, sqe);
|
2022-04-15 21:08:30 +00:00
|
|
|
if (unlikely(ret))
|
|
|
|
return io_submit_fail_init(sqe, req, ret);
|
2021-06-14 22:37:31 +00:00
|
|
|
|
2023-03-30 16:03:41 +00:00
|
|
|
trace_io_uring_submit_req(req);
|
2021-02-18 18:29:41 +00:00
|
|
|
|
2019-05-10 22:07:28 +00:00
|
|
|
/*
|
|
|
|
* If we already have a head request, queue this one for async
|
|
|
|
* submittal once the head completes. If we don't have a head but
|
|
|
|
* IOSQE_IO_LINK is set in the sqe, start a new head. This one will be
|
|
|
|
* submitted sync once the chain is complete. If none of those
|
|
|
|
* conditions are true (normal request), then just queue it.
|
|
|
|
*/
|
2022-04-15 21:08:31 +00:00
|
|
|
if (unlikely(link->head)) {
|
2022-06-16 12:57:20 +00:00
|
|
|
trace_io_uring_link(req, link->head);
|
2020-10-27 23:25:37 +00:00
|
|
|
link->last->link = req;
|
2020-10-27 23:25:35 +00:00
|
|
|
link->last = req;
|
2019-12-17 19:26:58 +00:00
|
|
|
|
2022-04-15 21:08:29 +00:00
|
|
|
if (req->flags & IO_REQ_LINK_FLAGS)
|
2021-09-24 20:59:56 +00:00
|
|
|
return 0;
|
2022-04-15 21:08:30 +00:00
|
|
|
/* last request of the link, flush it */
|
|
|
|
req = link->head;
|
2021-09-24 20:59:56 +00:00
|
|
|
link->head = NULL;
|
2022-04-15 21:08:31 +00:00
|
|
|
if (req->flags & (REQ_F_FORCE_ASYNC | REQ_F_FAIL))
|
|
|
|
goto fallback;
|
|
|
|
|
|
|
|
} else if (unlikely(req->flags & (IO_REQ_LINK_FLAGS |
|
|
|
|
REQ_F_FORCE_ASYNC | REQ_F_FAIL))) {
|
|
|
|
if (req->flags & IO_REQ_LINK_FLAGS) {
|
|
|
|
link->head = req;
|
|
|
|
link->last = req;
|
|
|
|
} else {
|
|
|
|
fallback:
|
|
|
|
io_queue_sqe_fallback(req);
|
|
|
|
}
|
2021-09-24 20:59:56 +00:00
|
|
|
return 0;
|
2019-05-10 22:07:28 +00:00
|
|
|
}
|
2019-12-05 13:15:45 +00:00
|
|
|
|
2021-09-24 20:59:56 +00:00
|
|
|
io_queue_sqe(req);
|
2020-04-11 23:05:03 +00:00
|
|
|
return 0;
|
2019-05-10 22:07:28 +00:00
|
|
|
}
|
|
|
|
|
2019-01-09 16:06:50 +00:00
|
|
|
/*
|
|
|
|
* Batched submission is done, ensure local IO is flushed out.
|
|
|
|
*/
|
2021-09-24 20:59:55 +00:00
|
|
|
static void io_submit_state_end(struct io_ring_ctx *ctx)
|
2019-01-09 16:06:50 +00:00
|
|
|
{
|
2021-09-24 20:59:55 +00:00
|
|
|
struct io_submit_state *state = &ctx->submit_state;
|
|
|
|
|
2022-04-12 14:09:45 +00:00
|
|
|
if (unlikely(state->link.head))
|
|
|
|
io_queue_sqe_fallback(state->link.head);
|
2021-09-24 20:59:55 +00:00
|
|
|
/* flush only after queuing links as they can generate completions */
|
2021-09-08 15:40:52 +00:00
|
|
|
io_submit_flush_completions(ctx);
|
2020-10-28 15:33:23 +00:00
|
|
|
if (state->plug_started)
|
|
|
|
blk_finish_plug(&state->plug);
|
2019-01-09 16:06:50 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Start submission side cache.
|
|
|
|
*/
|
|
|
|
static void io_submit_state_start(struct io_submit_state *state,
|
2021-02-10 00:03:11 +00:00
|
|
|
unsigned int max_ios)
|
2019-01-09 16:06:50 +00:00
|
|
|
{
|
2020-10-28 15:33:23 +00:00
|
|
|
state->plug_started = false;
|
2021-09-08 15:40:49 +00:00
|
|
|
state->need_plug = max_ios > 2;
|
2021-10-06 17:01:42 +00:00
|
|
|
state->submit_nr = max_ios;
|
2021-02-18 18:29:42 +00:00
|
|
|
/* set only head, no need to init link_last in advance */
|
|
|
|
state->link.head = NULL;
|
2019-01-09 16:06:50 +00:00
|
|
|
}
|
|
|
|
|
2019-01-07 17:46:33 +00:00
|
|
|
static void io_commit_sqring(struct io_ring_ctx *ctx)
|
|
|
|
{
|
2019-08-26 17:23:46 +00:00
|
|
|
struct io_rings *rings = ctx->rings;
|
2019-01-07 17:46:33 +00:00
|
|
|
|
2019-12-30 18:24:46 +00:00
|
|
|
/*
|
|
|
|
* Ensure any loads from the SQEs are done at this point,
|
|
|
|
* since once we write the new head, the application could
|
|
|
|
* write new data to them.
|
|
|
|
*/
|
|
|
|
smp_store_release(&rings->sq.head, ctx->cached_sq_head);
|
2019-01-07 17:46:33 +00:00
|
|
|
}
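The matching store on the application side is the SQ tail update. A raw-interface sketch (no liburing); the pointer names are illustrative and stand for addresses computed from the io_uring_params sq_off offsets after mmap()ing the SQ ring:

	#include <stdatomic.h>

	unsigned *sq_tail;	/* sq_ptr + p.sq_off.tail */
	unsigned *sq_mask;	/* sq_ptr + p.sq_off.ring_mask */
	unsigned *sq_array;	/* sq_ptr + p.sq_off.array */

	static void push_sqe(unsigned sqe_index)
	{
		unsigned tail = *sq_tail;

		sq_array[tail & *sq_mask] = sqe_index;
		/* publish the new tail only after the SQE and array slot are
		 * written; pairs with the kernel's acquire load of the tail */
		atomic_store_explicit((_Atomic unsigned *)sq_tail, tail + 1,
				      memory_order_release);
	}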
|
|
|
|
|
|
|
|
/*
|
2021-06-04 16:42:56 +00:00
|
|
|
* Fetch an sqe, if one is available. Note this returns a pointer to memory
|
2019-01-07 17:46:33 +00:00
|
|
|
* that is mapped by userspace. This means that care needs to be taken to
|
|
|
|
* ensure that reads are stable, as we cannot rely on userspace always
|
|
|
|
* being a good citizen. If members of the sqe are validated and then later
|
|
|
|
* used, it's important that those reads are done through READ_ONCE() to
|
|
|
|
* prevent a re-load down the line.
|
|
|
|
*/
|
2023-01-23 14:37:15 +00:00
|
|
|
static bool io_get_sqe(struct io_ring_ctx *ctx, const struct io_uring_sqe **sqe)
|
2019-01-07 17:46:33 +00:00
|
|
|
{
|
2023-08-24 22:53:32 +00:00
|
|
|
unsigned mask = ctx->sq_entries - 1;
|
|
|
|
unsigned head = ctx->cached_sq_head++ & mask;
|
|
|
|
|
|
|
|
if (!(ctx->flags & IORING_SETUP_NO_SQARRAY)) {
|
|
|
|
head = READ_ONCE(ctx->sq_array[head]);
|
|
|
|
if (unlikely(head >= ctx->sq_entries)) {
|
|
|
|
/* drop invalid entries */
|
|
|
|
spin_lock(&ctx->completion_lock);
|
|
|
|
ctx->cq_extra--;
|
|
|
|
spin_unlock(&ctx->completion_lock);
|
|
|
|
WRITE_ONCE(ctx->rings->sq_dropped,
|
|
|
|
READ_ONCE(ctx->rings->sq_dropped) + 1);
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
}
|
2019-01-07 17:46:33 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* The cached sq head (or cq tail) serves two purposes:
|
|
|
|
*
|
|
|
|
* 1) allows us to batch the cost of updating the user visible
|
|
|
|
* head updates.
|
|
|
|
* 2) allows the kernel side to track the head on its own, even
|
|
|
|
* though the application is the one updating it.
|
|
|
|
*/
|
|
|
|
|
2023-08-24 22:53:32 +00:00
|
|
|
/* double index for 128-byte SQEs, twice as long */
|
|
|
|
if (ctx->flags & IORING_SETUP_SQE128)
|
|
|
|
head <<= 1;
|
|
|
|
*sqe = &ctx->sq_sqes[head];
|
|
|
|
return true;
|
2020-04-08 05:58:43 +00:00
|
|
|
}
|
|
|
|
|
2022-05-25 15:13:39 +00:00
|
|
|
int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr)
|
2021-08-09 12:04:10 +00:00
|
|
|
__must_hold(&ctx->uring_lock)
|
io_uring: add submission polling
This enables an application to do IO, without ever entering the kernel.
By using the SQ ring to fill in new sqes and watching for completions
on the CQ ring, we can submit and reap IOs without doing a single system
call. The kernel side thread will poll for new submissions, and in case
of HIPRI/polled IO, it'll also poll for completions.
By default, we allow 1 second of active spinning. This can be changed
by passing in a different grace period at io_uring_register(2) time.
If the thread exceeds this idle time without having any work to do, it
will set:
sq_ring->flags |= IORING_SQ_NEED_WAKEUP.
The application will have to call io_uring_enter() to start things back
up again. If IO is kept busy, that will never be needed. Basically an
application that has this feature enabled will guard it's
io_uring_enter(2) call with:
read_barrier();
if (*sq_ring->flags & IORING_SQ_NEED_WAKEUP)
io_uring_enter(fd, 0, 0, IORING_ENTER_SQ_WAKEUP);
instead of calling it unconditionally.
It's mandatory to use fixed files with this feature. Failure to do so
will result in the application getting an -EBADF CQ entry when
submitting IO.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-10 18:22:30 +00:00
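
As a rough userspace illustration of the guard described above: a minimal sketch against the raw syscall interface, where sq_flags (a pointer to the mmap'ed SQ ring flags word) and the maybe_wake_sq_thread() helper are assumptions made for the example, not part of this file:

#include <linux/io_uring.h>
#include <sys/syscall.h>
#include <unistd.h>

/* hypothetical helper: only enter the kernel when the SQPOLL thread has idled */
static void maybe_wake_sq_thread(int ring_fd, const unsigned int *sq_flags)
{
	/* acquire load pairs with the kernel's store of IORING_SQ_NEED_WAKEUP */
	if (__atomic_load_n(sq_flags, __ATOMIC_ACQUIRE) & IORING_SQ_NEED_WAKEUP)
		syscall(__NR_io_uring_enter, ring_fd, 0, 0,
			IORING_ENTER_SQ_WAKEUP, NULL, 0);
}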

int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr)
	__must_hold(&ctx->uring_lock)
{
	unsigned int entries = io_sqring_entries(ctx);
	unsigned int left;
	int ret;

	if (unlikely(!entries))
		return 0;
	/* make sure SQ entry isn't read before tail */
	ret = left = min(nr, entries);
	io_get_task_refs(left);
	io_submit_state_start(&ctx->submit_state, left);

	do {
		const struct io_uring_sqe *sqe;
		struct io_kiocb *req;

		if (unlikely(!io_alloc_req(ctx, &req)))
			break;
		if (unlikely(!io_get_sqe(ctx, &sqe))) {
			io_req_add_to_cache(req, ctx);
			break;
		}

		/*
		 * Continue submitting even for sqe failure if the
		 * ring was setup with IORING_SETUP_SUBMIT_ALL
		 */
		if (unlikely(io_submit_sqe(ctx, req, sqe)) &&
		    !(ctx->flags & IORING_SETUP_SUBMIT_ALL)) {
			left--;
			break;
		}
	} while (--left);

	if (unlikely(left)) {
		ret -= left;
		/* try again if it submitted nothing and can't allocate a req */
		if (!ret && io_req_cache_empty(ctx))
			ret = -EAGAIN;
		current->io_uring->cached_refs += left;
	}

	io_submit_state_end(ctx);
	/* Commit SQ ring head once we've consumed and submitted all SQEs */
	io_commit_sqring(ctx);
	return ret;
}

static int io_wake_function(struct wait_queue_entry *curr, unsigned int mode,
			    int wake_flags, void *key)
{
	struct io_wait_queue *iowq = container_of(curr, struct io_wait_queue, wq);

	/*
	 * Cannot safely flush overflowed CQEs from here, ensure we wake up
	 * the task, and the next invocation will do it.
	 */
	if (io_should_wake(iowq) || io_has_work(iowq->ctx))
		return autoremove_wake_function(curr, mode, wake_flags, key);
	return -1;
}

int io_run_task_work_sig(struct io_ring_ctx *ctx)
{
	if (!llist_empty(&ctx->work_llist)) {
		__set_current_state(TASK_RUNNING);
		if (io_run_local_work(ctx, INT_MAX) > 0)
			return 0;
	}
	if (io_run_task_work() > 0)
		return 0;
	if (task_sigpending(current))
		return -EINTR;
	return 0;
}
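
/*
 * True if the current task has io_uring requests in flight; used below to
 * decide whether time spent waiting for completions counts as iowait.
 */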
static bool current_pending_io(void)
{
	struct io_uring_task *tctx = current->io_uring;

	if (!tctx)
		return false;
	return percpu_counter_read_positive(&tctx->inflight);
}

io_uring: add support for batch wait timeout
Waiting for events with io_uring has two knobs that can be set:
1) The number of events to wake for
2) The timeout associated with the event
Waiting will abort when either of those conditions are met, as expected.
This adds support for a third knob, associated with the number of events
to wait for. Applications generally like to handle batches of
completions, and right now they'd set a number of events to wait for and
the timeout for that. If no events have been received but the timeout
triggers, control is returned to the application and it can wait again.
However, if the application doesn't have anything to do until events are
reaped, then it's possible to make this waiting more efficient.
For example, the application may have a latency target of 50 usecs and
want to handle a batch of 8 requests at a time. If it uses 50 usecs
as the timeout, then it'll be doing 20K context switches per second even
if nothing is happening.
This introduces the notion of min batch wait time. If the min batch wait
time expires, then we'll return to userspace if we have any events at all.
If none are available, the general wait time is applied. Any request
arriving after the min batch wait time will cause waiting to stop and
return control to the application.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-01-04 17:17:54 +00:00
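
A rough userspace sketch of how the min batch wait might be requested through io_uring_enter(2)'s extended argument; the min_wait_usec field name and the exact io_uring_getevents_arg layout are assumptions based on the uapi header at the time of this change, so treat this as illustrative rather than authoritative:

#include <linux/io_uring.h>
#include <linux/time_types.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdint.h>

/* wait up to 1s overall for 8 completions, but return once the 50 usec
 * batch window has passed if at least one completion is already posted */
static int wait_for_batch(int ring_fd)
{
	struct __kernel_timespec ts = { .tv_sec = 1, .tv_nsec = 0 };
	struct io_uring_getevents_arg arg = {
		.sigmask	= 0,
		.sigmask_sz	= 0,
		.min_wait_usec	= 50,	/* assumed field name for the min batch wait */
		.ts		= (uint64_t)(uintptr_t)&ts,
	};

	return syscall(__NR_io_uring_enter, ring_fd, 0, 8,
		       IORING_ENTER_GETEVENTS | IORING_ENTER_EXT_ARG,
		       &arg, sizeof(arg));
}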

static enum hrtimer_restart io_cqring_timer_wakeup(struct hrtimer *timer)
{
	struct io_wait_queue *iowq = container_of(timer, struct io_wait_queue, t);

	WRITE_ONCE(iowq->hit_timeout, 1);
	iowq->min_timeout = 0;
	wake_up_process(iowq->wq.private);
	return HRTIMER_NORESTART;
}

/*
 * Doing min_timeout portion. If we saw any timeouts, events, or have work,
 * wake up. If not, and we have a normal timeout, switch to that and keep
 * sleeping.
 */
static enum hrtimer_restart io_cqring_min_timer_wakeup(struct hrtimer *timer)
{
	struct io_wait_queue *iowq = container_of(timer, struct io_wait_queue, t);
	struct io_ring_ctx *ctx = iowq->ctx;

	/* no general timeout, or shorter (or equal), we are done */
	if (iowq->timeout == KTIME_MAX ||
	    ktime_compare(iowq->min_timeout, iowq->timeout) >= 0)
		goto out_wake;
	/* work we may need to run, wake function will see if we need to wake */
	if (io_has_work(ctx))
		goto out_wake;
	/* got events since we started waiting, min timeout is done */
	if (iowq->cq_min_tail != READ_ONCE(ctx->rings->cq.tail))
		goto out_wake;
	/* if we have any events and min timeout expired, we're done */
	if (io_cqring_events(ctx))
		goto out_wake;

	/*
	 * If using deferred task_work running and application is waiting on
	 * more than one request, ensure we reset it now where we are switching
	 * to normal sleeps. Any request completion post min_wait should wake
	 * the task and return.
	 */
	if (ctx->flags & IORING_SETUP_DEFER_TASKRUN) {
		atomic_set(&ctx->cq_wait_nr, 1);
		smp_mb();
		if (!llist_empty(&ctx->work_llist))
			goto out_wake;
	}

	iowq->t.function = io_cqring_timer_wakeup;
	hrtimer_set_expires(timer, iowq->timeout);
	return HRTIMER_RESTART;
out_wake:
	return io_cqring_timer_wakeup(timer);
}

static int io_cqring_schedule_timeout(struct io_wait_queue *iowq,
				      clockid_t clock_id, ktime_t start_time)
{
	ktime_t timeout;

	hrtimer_init_on_stack(&iowq->t, clock_id, HRTIMER_MODE_ABS);
	if (iowq->min_timeout) {
		timeout = ktime_add_ns(iowq->min_timeout, start_time);
		iowq->t.function = io_cqring_min_timer_wakeup;
	} else {
		timeout = iowq->timeout;
		iowq->t.function = io_cqring_timer_wakeup;
	}

	hrtimer_set_expires_range_ns(&iowq->t, timeout, 0);
	hrtimer_start_expires(&iowq->t, HRTIMER_MODE_ABS);

	if (!READ_ONCE(iowq->hit_timeout))
		schedule();

	hrtimer_cancel(&iowq->t);
	destroy_hrtimer_on_stack(&iowq->t);
	__set_current_state(TASK_RUNNING);

	return READ_ONCE(iowq->hit_timeout) ? -ETIME : 0;
}

static int __io_cqring_wait_schedule(struct io_ring_ctx *ctx,
				     struct io_wait_queue *iowq,
				     ktime_t start_time)
{
	int ret = 0;

	/*
	 * Mark us as being in io_wait if we have pending requests, so cpufreq
	 * can take into account that the task is waiting for IO - turns out
	 * to be important for low QD IO.
	 */
	if (current_pending_io())
		current->in_iowait = 1;
	if (iowq->timeout != KTIME_MAX || iowq->min_timeout)
		ret = io_cqring_schedule_timeout(iowq, ctx->clockid, start_time);
	else
		schedule();
	current->in_iowait = 0;
	return ret;
}

/* If this returns > 0, the caller should retry */
static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
					  struct io_wait_queue *iowq,
					  ktime_t start_time)
{
	if (unlikely(READ_ONCE(ctx->check_cq)))
		return 1;
	if (unlikely(!llist_empty(&ctx->work_llist)))
		return 1;
	if (unlikely(test_thread_flag(TIF_NOTIFY_SIGNAL)))
		return 1;
	if (unlikely(task_sigpending(current)))
		return -EINTR;
	if (unlikely(io_should_wake(iowq)))
		return 0;

	return __io_cqring_wait_schedule(ctx, iowq, start_time);
}
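
/* Extended wait arguments passed to io_cqring_wait(), decoded from io_uring_enter(2) */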
struct ext_arg {
	size_t argsz;
	struct __kernel_timespec __user *ts;
	const sigset_t __user *sig;
	ktime_t min_time;
};

/*
 * Wait until events become available, if we don't already have some. The
 * application must reap them itself, as they reside on the shared cq ring.
 */
static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events, u32 flags,
			  struct ext_arg *ext_arg)
{
	struct io_wait_queue iowq;
	struct io_rings *rings = ctx->rings;
	ktime_t start_time;
	int ret;

	if (!io_allowed_run_tw(ctx))
		return -EEXIST;
	if (!llist_empty(&ctx->work_llist))
		io_run_local_work(ctx, min_events);
	io_run_task_work();

	if (unlikely(test_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq)))
		io_cqring_do_overflow_flush(ctx);
	if (__io_cqring_events_user(ctx) >= min_events)
		return 0;

	init_waitqueue_func_entry(&iowq.wq, io_wake_function);
	iowq.wq.private = current;
	INIT_LIST_HEAD(&iowq.wq.entry);
	iowq.ctx = ctx;
	iowq.cq_tail = READ_ONCE(ctx->rings->cq.head) + min_events;
	iowq.cq_min_tail = READ_ONCE(ctx->rings->cq.tail);
	iowq.nr_timeouts = atomic_read(&ctx->cq_timeouts);
	iowq.hit_timeout = 0;
	iowq.min_timeout = ext_arg->min_time;
	iowq.timeout = KTIME_MAX;
	start_time = io_get_time(ctx);

	if (ext_arg->ts) {
		struct timespec64 ts;

		if (get_timespec64(&ts, ext_arg->ts))
			return -EFAULT;

		iowq.timeout = timespec64_to_ktime(ts);
		if (!(flags & IORING_ENTER_ABS_TIMER))
			iowq.timeout = ktime_add(iowq.timeout, start_time);
	}

	if (ext_arg->sig) {
#ifdef CONFIG_COMPAT
		if (in_compat_syscall())
			ret = set_compat_user_sigmask((const compat_sigset_t __user *)ext_arg->sig,
						      ext_arg->argsz);
		else
#endif
			ret = set_user_sigmask(ext_arg->sig, ext_arg->argsz);

		if (ret)
			return ret;
	}

	io_napi_busy_loop(ctx, &iowq);

io_uring: add set of tracing events
To trace io_uring activity one can get information from workqueue and
io trace events, but it looks like some parts could be hard to identify
via this approach. Making what happens inside io_uring more transparent
is important to be able to reason about many aspects of it, hence
introduce the set of tracing events.
All such events could be roughly divided into two categories:
* those that help to understand correctness (from both kernel
and an application point of view). E.g. a ring creation, file
registration, or waiting for available CQE. The proposed approach is to
get a pointer to an original structure of interest (ring context, or
request), and then find relevant events. io_uring_queue_async_work
also exposes a pointer to work_struct, to be able to track down
corresponding workqueue events.
* those that provide performance related information. Mostly it's about
events that change the flow of requests, e.g. whether an async work
was queued, or delayed due to some dependencies. Another important
case is how io_uring optimizations (e.g. registered files) are
utilized.
Signed-off-by: Dmitrii Dolgov <9erthalion6@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-10-15 17:02:01 +00:00
	trace_io_uring_cqring_wait(ctx, min_events);
	do {
		unsigned long check_cq;
		int nr_wait;

		/* if min timeout has been hit, don't reset wait count */
		if (!iowq.hit_timeout)
			nr_wait = (int) iowq.cq_tail -
					READ_ONCE(ctx->rings->cq.tail);
		else
			nr_wait = 1;

		if (ctx->flags & IORING_SETUP_DEFER_TASKRUN) {
			atomic_set(&ctx->cq_wait_nr, nr_wait);
			set_current_state(TASK_INTERRUPTIBLE);
		} else {
			prepare_to_wait_exclusive(&ctx->cq_wait, &iowq.wq,
							TASK_INTERRUPTIBLE);
		}

		ret = io_cqring_wait_schedule(ctx, &iowq, start_time);
		__set_current_state(TASK_RUNNING);
		atomic_set(&ctx->cq_wait_nr, IO_CQ_WAKE_INIT);

		/*
		 * Run task_work after scheduling and before io_should_wake().
		 * If we got woken because of task_work being processed, run it
		 * now rather than let the caller do another wait loop.
		 */
		io_run_task_work();
		if (!llist_empty(&ctx->work_llist))
			io_run_local_work(ctx, nr_wait);

		/*
		 * Non-local task_work will be run on exit to userspace, but
		 * if we're using DEFER_TASKRUN, then we could have waited
		 * with a timeout for a number of requests. If the timeout
		 * hits, we could have some requests ready to process. Ensure
		 * this break is _after_ we have run task_work, to avoid
		 * deferring running potentially pending requests until the
		 * next time we wait for events.
		 */
		if (ret < 0)
			break;

		check_cq = READ_ONCE(ctx->check_cq);
		if (unlikely(check_cq)) {
			/* let the caller flush overflows, retry */
			if (check_cq & BIT(IO_CHECK_CQ_OVERFLOW_BIT))
				io_cqring_do_overflow_flush(ctx);
			if (check_cq & BIT(IO_CHECK_CQ_DROPPED_BIT)) {
				ret = -EBADR;
				break;
			}
		}

		if (io_should_wake(&iowq)) {
			ret = 0;
			break;
		}
		cond_resched();
	} while (1);

	if (!(ctx->flags & IORING_SETUP_DEFER_TASKRUN))
		finish_wait(&ctx->cq_wait, &iowq.wq);
	restore_saved_sigmask_unless(ret == -EINTR);

	return READ_ONCE(rings->cq.head) == READ_ONCE(rings->cq.tail) ? ret : 0;
|
|
|
}
|
|
|
|
|
io_uring: support for user allocated memory for rings/sqes
Currently io_uring applications must call mmap(2) twice to map the rings
themselves, and the sqes array. This works fine, but it does not support
using huge pages to back the rings/sqes.
Provide a way for the application to pass in pre-allocated memory for
the rings/sqes, which can then suitably be allocated from shmfs or
via mmap to get huge page support.
Particularly for larger rings, this reduces the number of TLB entries needed.
If an application wishes to take advantage of that, it must pre-allocate
the memory needed for the sq/cq ring, and the sqes. The former must
be passed in via the io_uring_params->cq_off.user_data field, while the
latter is passed in via the io_uring_params->sq_off.user_data field. Then
it must set IORING_SETUP_NO_MMAP in the io_uring_params->flags field,
and io_uring will then map the existing memory into the kernel for shared
use. The application must not call mmap(2) to map rings as it otherwise
would have; that will now fail with -EINVAL if this setup flag was used.
The pages used for the rings and sqes must be contiguous. The intent here
is clearly that huge pages should be used, otherwise the normal setup
procedure works fine as-is. The application may use one huge page for
both the rings and sqes.
Outside of those initialization changes, everything works like it did
before.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-11-05 23:20:54 +00:00
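For reference, here is a minimal userspace sketch of the setup flow described in the commit message above, not the kernel's own code. The hypothetical setup_no_mmap_ring() helper, the one-huge-page-per-region sizing, and the spelling of the pre-allocated address fields (written as user_data above; some uapi headers spell it user_addr) are assumptions, and error handling is trimmed. It assumes headers new enough to expose IORING_SETUP_NO_MMAP and a system with hugetlb pages reserved.

/*
 * Sketch of IORING_SETUP_NO_MMAP setup from userspace, per the commit
 * message above. Field names for the pre-allocated addresses follow that
 * text (user_data); check your headers, some versions name it user_addr.
 */
#include <linux/io_uring.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#define HUGE_SZ		(2UL * 1024 * 1024)

static int setup_no_mmap_ring(unsigned int entries, struct io_uring_params *p)
{
	/* One huge page for the SQ/CQ rings, one for the SQE array. */
	void *rings = mmap(NULL, HUGE_SZ, PROT_READ | PROT_WRITE,
			   MAP_SHARED | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	void *sqes = mmap(NULL, HUGE_SZ, PROT_READ | PROT_WRITE,
			  MAP_SHARED | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

	if (rings == MAP_FAILED || sqes == MAP_FAILED)
		return -1;

	memset(p, 0, sizeof(*p));
	p->flags = IORING_SETUP_NO_MMAP;
	/*
	 * Hand the pre-allocated memory to the kernel via the offsets
	 * structs; member spelling per the commit message above.
	 */
	p->cq_off.user_data = (uint64_t)(uintptr_t)rings;	/* sq/cq rings */
	p->sq_off.user_data = (uint64_t)(uintptr_t)sqes;	/* sqe array */

	/*
	 * No mmap(2) of the returned fd afterwards; the kernel maps these
	 * regions itself and rejects the usual IORING_OFF_* mmaps with
	 * -EINVAL when this flag is set.
	 */
	return (int)syscall(__NR_io_uring_setup, entries, p);
}

On success the returned fd is used with io_uring_enter() as usual; only the mapping step changes.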
|
|
|
static void *io_rings_map(struct io_ring_ctx *ctx, unsigned long uaddr,
|
|
|
|
size_t size)
|
|
|
|
{
|
|
|
|
return __io_uaddr_map(&ctx->ring_pages, &ctx->n_ring_pages, uaddr,
|
|
|
|
size);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void *io_sqes_map(struct io_ring_ctx *ctx, unsigned long uaddr,
|
|
|
|
size_t size)
|
|
|
|
{
|
|
|
|
return __io_uaddr_map(&ctx->sqe_pages, &ctx->n_sqe_pages, uaddr,
|
|
|
|
size);
|
|
|
|
}
|
|
|
|
|
2021-11-05 23:15:46 +00:00
|
|
|
static void io_rings_free(struct io_ring_ctx *ctx)
|
|
|
|
{
|
2021-11-05 23:20:54 +00:00
|
|
|
if (!(ctx->flags & IORING_SETUP_NO_MMAP)) {
|
2024-03-13 02:24:21 +00:00
|
|
|
io_pages_unmap(ctx->rings, &ctx->ring_pages, &ctx->n_ring_pages,
|
|
|
|
true);
|
|
|
|
io_pages_unmap(ctx->sq_sqes, &ctx->sqe_pages, &ctx->n_sqe_pages,
|
|
|
|
true);
|
2021-11-05 23:20:54 +00:00
|
|
|
} else {
|
|
|
|
io_pages_free(&ctx->ring_pages, ctx->n_ring_pages);
|
2023-10-18 14:09:27 +00:00
|
|
|
ctx->n_ring_pages = 0;
|
2021-11-05 23:20:54 +00:00
|
|
|
io_pages_free(&ctx->sqe_pages, ctx->n_sqe_pages);
|
2023-10-18 14:09:27 +00:00
|
|
|
ctx->n_sqe_pages = 0;
|
2024-03-13 20:10:40 +00:00
|
|
|
vunmap(ctx->rings);
|
|
|
|
vunmap(ctx->sq_sqes);
|
2021-11-05 23:20:54 +00:00
|
|
|
}
|
2024-03-12 14:56:27 +00:00
|
|
|
|
|
|
|
ctx->rings = NULL;
|
|
|
|
ctx->sq_sqes = NULL;
|
2021-11-05 23:15:46 +00:00
|
|
|
}
|
|
|
|
|
2022-06-13 13:12:45 +00:00
|
|
|
static unsigned long rings_size(struct io_ring_ctx *ctx, unsigned int sq_entries,
|
|
|
|
unsigned int cq_entries, size_t *sq_offset)
|
2019-01-11 05:13:58 +00:00
|
|
|
{
|
2022-06-13 13:12:45 +00:00
|
|
|
struct io_rings *rings;
|
|
|
|
size_t off, sq_array_size;
|
2019-01-11 05:13:58 +00:00
|
|
|
|
2022-06-13 13:12:45 +00:00
|
|
|
off = struct_size(rings, cqes, cq_entries);
|
|
|
|
if (off == SIZE_MAX)
|
|
|
|
return SIZE_MAX;
|
|
|
|
if (ctx->flags & IORING_SETUP_CQE32) {
|
|
|
|
if (check_shl_overflow(off, 1, &off))
|
|
|
|
return SIZE_MAX;
|
|
|
|
}
|
2021-10-09 22:14:41 +00:00
|
|
|
|
2022-06-13 13:12:45 +00:00
|
|
|
#ifdef CONFIG_SMP
|
|
|
|
off = ALIGN(off, SMP_CACHE_BYTES);
|
|
|
|
if (off == 0)
|
|
|
|
return SIZE_MAX;
|
|
|
|
#endif
|
2021-04-01 14:43:43 +00:00
|
|
|
|
2023-08-24 22:53:32 +00:00
|
|
|
if (ctx->flags & IORING_SETUP_NO_SQARRAY) {
|
2024-05-22 17:13:44 +00:00
|
|
|
*sq_offset = SIZE_MAX;
|
2023-08-24 22:53:32 +00:00
|
|
|
return off;
|
|
|
|
}
|
|
|
|
|
2024-05-22 17:13:44 +00:00
|
|
|
*sq_offset = off;
|
2021-04-01 14:43:43 +00:00
|
|
|
|
2022-06-13 13:12:45 +00:00
|
|
|
sq_array_size = array_size(sizeof(u32), sq_entries);
|
|
|
|
if (sq_array_size == SIZE_MAX)
|
|
|
|
return SIZE_MAX;
|
2019-01-11 05:13:58 +00:00
|
|
|
|
2022-06-13 13:12:45 +00:00
|
|
|
if (check_add_overflow(off, sq_array_size, &off))
|
|
|
|
return SIZE_MAX;
|
2021-02-19 09:19:36 +00:00
|
|
|
|
2022-06-13 13:12:45 +00:00
|
|
|
return off;
|
2021-02-19 09:19:36 +00:00
|
|
|
}
|
|
|
|
|
2022-06-13 13:12:45 +00:00
|
|
|
static void io_req_caches_free(struct io_ring_ctx *ctx)
|
2019-01-07 17:46:33 +00:00
|
|
|
{
|
2023-01-23 14:37:16 +00:00
|
|
|
struct io_kiocb *req;
|
2021-10-04 19:02:53 +00:00
|
|
|
int nr = 0;
|
2021-02-10 00:03:17 +00:00
|
|
|
|
2021-02-13 16:09:44 +00:00
|
|
|
mutex_lock(&ctx->uring_lock);
|
|
|
|
|
2022-04-12 14:09:47 +00:00
|
|
|
while (!io_req_cache_empty(ctx)) {
|
2023-01-23 14:37:16 +00:00
|
|
|
req = io_extract_req(ctx);
|
2021-09-24 20:59:47 +00:00
|
|
|
kmem_cache_free(req_cachep, req);
|
2021-10-04 19:02:53 +00:00
|
|
|
nr++;
|
2021-09-24 20:59:47 +00:00
|
|
|
}
|
2021-10-04 19:02:53 +00:00
|
|
|
if (nr)
|
|
|
|
percpu_ref_put_many(&ctx->refs, nr);
|
2021-02-13 16:09:44 +00:00
|
|
|
mutex_unlock(&ctx->uring_lock);
|
|
|
|
}
|
|
|
|
|
2021-10-04 19:02:54 +00:00
|
|
|
static __cold void io_ring_ctx_free(struct io_ring_ctx *ctx)
|
2019-01-07 17:46:33 +00:00
|
|
|
{
|
2021-02-18 04:03:43 +00:00
|
|
|
io_sq_thread_finish(ctx);
|
2021-08-10 01:44:23 +00:00
|
|
|
/* __io_rsrc_put_work() may need uring_lock to progress, wait w/o it */
|
2023-04-13 14:28:10 +00:00
|
|
|
if (WARN_ON_ONCE(!list_empty(&ctx->rsrc_ref_list)))
|
|
|
|
return;
|
2021-08-10 01:44:23 +00:00
|
|
|
|
2021-02-19 09:19:36 +00:00
|
|
|
mutex_lock(&ctx->uring_lock);
|
2021-08-10 01:44:23 +00:00
|
|
|
if (ctx->buf_data)
|
2021-04-25 13:32:25 +00:00
|
|
|
__io_sqe_buffers_unregister(ctx);
|
2021-08-10 01:44:23 +00:00
|
|
|
if (ctx->file_data)
|
2021-04-13 01:58:38 +00:00
|
|
|
__io_sqe_files_unregister(ctx);
|
2022-12-07 03:53:28 +00:00
|
|
|
io_cqring_overflow_kill(ctx);
|
2019-04-11 17:45:41 +00:00
|
|
|
io_eventfd_unregister(ctx);
|
2024-03-20 21:19:44 +00:00
|
|
|
io_alloc_cache_free(&ctx->apoll_cache, kfree);
|
2022-07-07 20:30:09 +00:00
|
|
|
io_alloc_cache_free(&ctx->netmsg_cache, io_netmsg_cache_free);
|
2024-03-18 22:13:01 +00:00
|
|
|
io_alloc_cache_free(&ctx->rw_cache, io_rw_cache_free);
|
2024-03-20 21:19:44 +00:00
|
|
|
io_alloc_cache_free(&ctx->uring_cache, kfree);
|
2024-06-06 18:25:01 +00:00
|
|
|
io_alloc_cache_free(&ctx->msg_cache, io_msg_cache_free);
|
io_uring: add support for futex wake and wait
Add support for FUTEX_WAKE/WAIT primitives.
IORING_OP_FUTEX_WAKE is mix of FUTEX_WAKE and FUTEX_WAKE_BITSET, as
it does support passing in a bitset.
Similarly, IORING_OP_FUTEX_WAIT is a mix of FUTEX_WAIT and
FUTEX_WAIT_BITSET.
For both of them, they are using the futex2 interface.
FUTEX_WAKE is straightforward, as those can always be done directly from
the io_uring submission without needing async handling. For FUTEX_WAIT,
things are a bit more complicated. If the futex isn't ready, then we
rely on a callback via futex_queue->wake() when someone wakes up the
futex. From that callback, we queue up task_work with the original task,
which will post a CQE and wake it, if necessary.
Cancelations are supported, both from the application point of view
and also to cancel pending waits if the ring exits before
all events have occurred. The return value of futex_unqueue() is used
to gate who wins the potential race between cancelation and futex
wakeups. Whoever gets a 'ret == 1' return from that claims ownership
of the io_uring futex request.
This is just the barebones wait/wake support. PI or REQUEUE support is
not added at this point, unclear if we might look into that later.
Likewise, explicit timeouts are not supported either. It is expected
that users that need timeouts would do so via the usual io_uring
mechanism to do that using linked timeouts.
The SQE format is as follows:
`addr` Address of futex
`fd` futex2(2) FUTEX2_* flags
`futex_flags` io_uring specific command flags. None valid now.
`addr2` Value of futex
`addr3` Mask to wake/wait
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-06-08 17:57:40 +00:00
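As an illustration of the SQE layout listed in the commit message above, here is a hypothetical userspace helper that fills a raw SQE for a futex wait. The member names (addr, fd, addr2, addr3, futex_flags) follow the description above, and IORING_OP_FUTEX_WAIT / FUTEX2_SIZE_U32 assume headers new enough to carry the io_uring futex and futex2 definitions; newer liburing releases provide prep helpers that do the equivalent.

#include <linux/futex.h>
#include <linux/io_uring.h>
#include <stdint.h>
#include <string.h>

/*
 * Arm an async wait on 'futex': the wait is only queued if *futex still
 * holds 'expected', and completes when a matching wake arrives.
 */
static void prep_futex_wait(struct io_uring_sqe *sqe, uint32_t *futex,
			    uint64_t expected, uint64_t mask)
{
	memset(sqe, 0, sizeof(*sqe));
	sqe->opcode = IORING_OP_FUTEX_WAIT;
	sqe->addr = (uint64_t)(uintptr_t)futex;	/* address of futex */
	sqe->fd = FUTEX2_SIZE_U32;		/* futex2(2) FUTEX2_* flags */
	sqe->futex_flags = 0;			/* io_uring command flags, none valid yet */
	sqe->addr2 = expected;			/* value of futex */
	sqe->addr3 = mask;			/* mask to wake/wait */
}

A wake request fills the same fields, with IORING_OP_FUTEX_WAKE as the opcode.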
|
|
|
io_futex_cache_free(ctx);
|
2020-02-23 23:23:11 +00:00
|
|
|
io_destroy_buffers(ctx);
|
2023-04-01 19:50:39 +00:00
|
|
|
mutex_unlock(&ctx->uring_lock);
|
2021-04-20 11:03:32 +00:00
|
|
|
if (ctx->sq_creds)
|
|
|
|
put_cred(ctx->sq_creds);
|
2022-06-16 09:22:08 +00:00
|
|
|
if (ctx->submitter_task)
|
|
|
|
put_task_struct(ctx->submitter_task);
|
2019-01-09 15:59:42 +00:00
|
|
|
|
2021-04-01 14:43:46 +00:00
|
|
|
/* there are no registered resources left, nobody uses it */
|
|
|
|
if (ctx->rsrc_node)
|
2023-04-04 12:39:54 +00:00
|
|
|
io_rsrc_node_destroy(ctx, ctx->rsrc_node);
|
2021-04-01 14:43:46 +00:00
|
|
|
|
|
|
|
WARN_ON_ONCE(!list_empty(&ctx->rsrc_ref_list));
|
2021-08-29 01:54:38 +00:00
|
|
|
WARN_ON_ONCE(!list_empty(&ctx->ltimeout_list));
|
2019-01-07 17:46:33 +00:00
|
|
|
|
2024-03-20 21:19:44 +00:00
|
|
|
io_alloc_cache_free(&ctx->rsrc_node_cache, kfree);
|
2022-10-04 02:19:08 +00:00
|
|
|
if (ctx->mm_account) {
|
|
|
|
mmdrop(ctx->mm_account);
|
|
|
|
ctx->mm_account = NULL;
|
|
|
|
}
|
2021-11-05 23:15:46 +00:00
|
|
|
io_rings_free(ctx);
|
2019-01-07 17:46:33 +00:00
|
|
|
|
|
|
|
percpu_ref_exit(&ctx->refs);
|
|
|
|
free_uid(ctx->user);
|
2021-02-27 22:04:18 +00:00
|
|
|
io_req_caches_free(ctx);
|
2021-02-19 19:33:30 +00:00
|
|
|
if (ctx->hash_map)
|
|
|
|
io_wq_put_hash(ctx->hash_map);
|
2023-06-08 16:38:36 +00:00
|
|
|
io_napi_free(ctx);
|
2022-06-16 09:22:10 +00:00
|
|
|
kfree(ctx->cancel_table.hbs);
|
2022-06-16 09:22:12 +00:00
|
|
|
kfree(ctx->cancel_table_locked.hbs);
|
2022-05-01 16:52:44 +00:00
|
|
|
xa_destroy(&ctx->io_bl_xa);
|
2019-01-07 17:46:33 +00:00
|
|
|
kfree(ctx);
|
|
|
|
}
|
|
|
|
|
2023-01-09 14:46:09 +00:00
|
|
|
static __cold void io_activate_pollwq_cb(struct callback_head *cb)
|
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = container_of(cb, struct io_ring_ctx,
|
|
|
|
poll_wq_task_work);
|
|
|
|
|
|
|
|
mutex_lock(&ctx->uring_lock);
|
|
|
|
ctx->poll_activated = true;
|
|
|
|
mutex_unlock(&ctx->uring_lock);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Wake ups for some events between start of polling and activation
|
|
|
|
* might've been lost due to loose synchronisation.
|
|
|
|
*/
|
|
|
|
wake_up_all(&ctx->poll_wq);
|
|
|
|
percpu_ref_put(&ctx->refs);
|
|
|
|
}
|
|
|
|
|
2023-12-19 15:54:20 +00:00
|
|
|
__cold void io_activate_pollwq(struct io_ring_ctx *ctx)
|
2023-01-09 14:46:09 +00:00
|
|
|
{
|
|
|
|
spin_lock(&ctx->completion_lock);
|
|
|
|
/* already activated or in progress */
|
|
|
|
if (ctx->poll_activated || ctx->poll_wq_task_work.func)
|
|
|
|
goto out;
|
|
|
|
if (WARN_ON_ONCE(!ctx->task_complete))
|
|
|
|
goto out;
|
|
|
|
if (!ctx->submitter_task)
|
|
|
|
goto out;
|
|
|
|
/*
|
|
|
|
* with ->submitter_task only the submitter task completes requests, we
|
|
|
|
* only need to sync with it, which is done by injecting a tw
|
|
|
|
*/
|
|
|
|
init_task_work(&ctx->poll_wq_task_work, io_activate_pollwq_cb);
|
|
|
|
percpu_ref_get(&ctx->refs);
|
|
|
|
if (task_work_add(ctx->submitter_task, &ctx->poll_wq_task_work, TWA_SIGNAL))
|
|
|
|
percpu_ref_put(&ctx->refs);
|
|
|
|
out:
|
|
|
|
spin_unlock(&ctx->completion_lock);
|
|
|
|
}
|
|
|
|
|
2019-01-07 17:46:33 +00:00
|
|
|
static __poll_t io_uring_poll(struct file *file, poll_table *wait)
|
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = file->private_data;
|
|
|
|
__poll_t mask = 0;
|
|
|
|
|
2023-01-09 14:46:09 +00:00
|
|
|
if (unlikely(!ctx->poll_activated))
|
|
|
|
io_activate_pollwq(ctx);
|
|
|
|
|
2023-01-09 14:46:08 +00:00
|
|
|
poll_wait(file, &ctx->poll_wq, wait);
|
2019-04-24 21:54:17 +00:00
|
|
|
/*
|
|
|
|
* synchronizes with barrier from wq_has_sleeper call in
|
|
|
|
* io_commit_cqring
|
|
|
|
*/
|
2019-01-07 17:46:33 +00:00
|
|
|
smp_rmb();
|
2020-09-03 18:12:41 +00:00
|
|
|
if (!io_sqring_full(ctx))
|
2019-01-07 17:46:33 +00:00
|
|
|
mask |= EPOLLOUT | EPOLLWRNORM;
|
2021-02-05 08:34:21 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Don't flush cqring overflow list here, just do a simple check.
|
|
|
|
* Otherwise there could possibly be an ABBA deadlock:
|
|
|
|
* CPU0 CPU1
|
|
|
|
* ---- ----
|
|
|
|
* lock(&ctx->uring_lock);
|
|
|
|
* lock(&ep->mtx);
|
|
|
|
* lock(&ctx->uring_lock);
|
|
|
|
* lock(&ep->mtx);
|
|
|
|
*
|
|
|
|
* Users may get EPOLLIN meanwhile seeing nothing in cqring, this
|
2022-11-25 10:34:11 +00:00
|
|
|
* pushes them to do the flush.
|
2021-02-05 08:34:21 +00:00
|
|
|
*/
|
2022-08-30 12:50:08 +00:00
|
|
|
|
2023-01-23 14:37:13 +00:00
|
|
|
if (__io_cqring_events_user(ctx) || io_has_work(ctx))
|
2019-01-07 17:46:33 +00:00
|
|
|
mask |= EPOLLIN | EPOLLRDNORM;
|
|
|
|
|
|
|
|
return mask;
|
|
|
|
}
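The poll handler above is what an application exercises when it adds the ring fd to an epoll set. A minimal sketch follows; watch_ring() is a hypothetical helper, ring_fd is assumed to come from io_uring_setup(), and error handling is omitted. Note that, per the comment above, EPOLLIN can be reported while the visible CQ ring looks empty if entries sit in the overflow list, in which case the application is expected to enter the kernel to flush them.

#include <sys/epoll.h>

static int watch_ring(int epfd, int ring_fd)
{
	struct epoll_event ev = {
		/* EPOLLIN: CQ entries or pending work; EPOLLOUT: SQ ring not full. */
		.events = EPOLLIN | EPOLLOUT,
		.data.fd = ring_fd,
	};

	/* Wakeups arrive via io_uring_poll() -> poll_wait(&ctx->poll_wq). */
	return epoll_ctl(epfd, EPOLL_CTL_ADD, ring_fd, &ev);
}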
|
|
|
|
|
2021-03-06 11:02:13 +00:00
|
|
|
struct io_tctx_exit {
|
|
|
|
struct callback_head task_work;
|
|
|
|
struct completion completion;
|
2021-03-06 11:02:15 +00:00
|
|
|
struct io_ring_ctx *ctx;
|
2021-03-06 11:02:13 +00:00
|
|
|
};
|
|
|
|
|
2021-10-04 19:02:54 +00:00
|
|
|
static __cold void io_tctx_exit_cb(struct callback_head *cb)
|
2021-03-06 11:02:13 +00:00
|
|
|
{
|
|
|
|
struct io_uring_task *tctx = current->io_uring;
|
|
|
|
struct io_tctx_exit *work;
|
|
|
|
|
|
|
|
work = container_of(cb, struct io_tctx_exit, task_work);
|
|
|
|
/*
|
2023-02-17 15:27:23 +00:00
|
|
|
* When @in_cancel, we're in cancellation and it's racy to remove the
|
2021-03-06 11:02:13 +00:00
|
|
|
* node. It'll be removed by the end of cancellation, just ignore it.
|
2022-12-06 09:38:32 +00:00
|
|
|
* tctx can be NULL if the queueing of this task_work raced with
|
|
|
|
* work cancelation off the exec path.
|
2021-03-06 11:02:13 +00:00
|
|
|
*/
|
2023-02-17 15:27:23 +00:00
|
|
|
if (tctx && !atomic_read(&tctx->in_cancel))
|
2021-06-14 01:36:15 +00:00
|
|
|
io_uring_del_tctx_node((unsigned long)work->ctx);
|
2021-03-06 11:02:13 +00:00
|
|
|
complete(&work->completion);
|
|
|
|
}
|
|
|
|
|
2021-10-04 19:02:54 +00:00
|
|
|
static __cold bool io_cancel_ctx_cb(struct io_wq_work *work, void *data)
|
2021-04-25 22:34:45 +00:00
|
|
|
{
|
|
|
|
struct io_kiocb *req = container_of(work, struct io_kiocb, work);
|
|
|
|
|
|
|
|
return req->ctx == data;
|
|
|
|
}
|
|
|
|
|
2021-10-04 19:02:54 +00:00
|
|
|
static __cold void io_ring_exit_work(struct work_struct *work)
|
2020-04-10 00:14:00 +00:00
|
|
|
{
|
2021-03-06 11:02:13 +00:00
|
|
|
struct io_ring_ctx *ctx = container_of(work, struct io_ring_ctx, exit_work);
|
2021-03-06 11:02:16 +00:00
|
|
|
unsigned long timeout = jiffies + HZ * 60 * 5;
|
2021-08-09 12:04:17 +00:00
|
|
|
unsigned long interval = HZ / 20;
|
2021-03-06 11:02:13 +00:00
|
|
|
struct io_tctx_exit exit;
|
|
|
|
struct io_tctx_node *node;
|
|
|
|
int ret;
|
2020-04-10 00:14:00 +00:00
|
|
|
|
2020-06-17 21:00:04 +00:00
|
|
|
/*
|
|
|
|
* If we're doing polled IO and end up having requests being
|
|
|
|
* submitted async (out-of-line), then completions can come in while
|
|
|
|
* we're waiting for refs to drop. We need to reap these manually,
|
|
|
|
* as nobody else will be looking for them.
|
|
|
|
*/
|
2020-07-07 13:36:22 +00:00
|
|
|
do {
|
2022-12-07 03:53:28 +00:00
|
|
|
if (test_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq)) {
|
|
|
|
mutex_lock(&ctx->uring_lock);
|
|
|
|
io_cqring_overflow_kill(ctx);
|
|
|
|
mutex_unlock(&ctx->uring_lock);
|
|
|
|
}
|
|
|
|
|
2022-08-30 12:50:10 +00:00
|
|
|
if (ctx->flags & IORING_SETUP_DEFER_TASKRUN)
|
|
|
|
io_move_task_work_from_local(ctx);
|
|
|
|
|
2022-06-20 00:25:52 +00:00
|
|
|
while (io_uring_try_cancel_requests(ctx, NULL, true))
|
|
|
|
cond_resched();
|
|
|
|
|
2021-04-25 22:34:45 +00:00
|
|
|
if (ctx->sq_data) {
|
|
|
|
struct io_sq_data *sqd = ctx->sq_data;
|
|
|
|
struct task_struct *tsk;
|
|
|
|
|
|
|
|
io_sq_thread_park(sqd);
|
|
|
|
tsk = sqd->thread;
|
|
|
|
if (tsk && tsk->io_uring && tsk->io_uring->io_wq)
|
|
|
|
io_wq_cancel_cb(tsk->io_uring->io_wq,
|
|
|
|
io_cancel_ctx_cb, ctx, true);
|
|
|
|
io_sq_thread_unpark(sqd);
|
|
|
|
}
|
2021-03-06 11:02:16 +00:00
|
|
|
|
2021-10-04 19:02:53 +00:00
|
|
|
io_req_caches_free(ctx);
|
|
|
|
|
2021-08-09 12:04:17 +00:00
|
|
|
if (WARN_ON_ONCE(time_after(jiffies, timeout))) {
|
|
|
|
/* there is little hope left, don't run it too often */
|
|
|
|
interval = HZ * 60;
|
|
|
|
}
|
io_uring: wait interruptibly for request completions on exit
When the ring exits, cleanup is done and the final cancelation and
waiting on completions is done by io_ring_exit_work. That function is
invoked by a kworker, which doesn't take any signals. Because of that, it
doesn't really matter if we wait for completions in TASK_INTERRUPTIBLE
or TASK_UNINTERRUPTIBLE state. However, it does matter to the hung task
detection checker!
Normally we expect cancelations and completions to happen rather
quickly. Some test cases, however, will exit the ring and leave the
owning task stopped (e.g. via SIGSTOP). If the owning task needs to run
task_work to complete requests, then io_ring_exit_work won't make any
progress until the task is runnable again. Hence io_ring_exit_work can
trigger the hung task detection, which is particularly problematic if
panic-on-hung-task is enabled.
As the ring exit doesn't take signals to begin with, have it wait
interruptibly rather than uninterruptibly. io_uring has a separate
stuck-exit warning that triggers independently anyway, so we're not
really missing anything by making this switch.
Cc: stable@vger.kernel.org # 5.10+
Link: https://lore.kernel.org/r/b0e4aaef-7088-56ce-244c-976edeac0e66@kernel.dk
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-06-12 03:14:09 +00:00
|
|
|
/*
|
|
|
|
* This is really an uninterruptible wait, as it has to be
|
|
|
|
* complete. But it's also run from a kworker, which doesn't
|
|
|
|
* take signals, so it's fine to make it interruptible. This
|
|
|
|
* avoids scenarios where we knowingly can wait much longer
|
|
|
|
* on completions, for example if someone does a SIGSTOP on
|
|
|
|
* a task that needs to finish task_work to make this loop
|
|
|
|
* complete. That's a synthetic situation that should not
|
|
|
|
* cause a stuck task backtrace, and hence a potential panic
|
|
|
|
* on stuck tasks if that is enabled.
|
|
|
|
*/
|
|
|
|
} while (!wait_for_completion_interruptible_timeout(&ctx->ref_comp, interval));
|
2021-03-06 11:02:13 +00:00
|
|
|
|
2021-04-14 12:38:34 +00:00
|
|
|
init_completion(&exit.completion);
|
|
|
|
init_task_work(&exit.task_work, io_tctx_exit_cb);
|
|
|
|
exit.ctx = ctx;
|
2023-12-03 15:37:53 +00:00
|
|
|
|
2021-03-06 11:02:13 +00:00
|
|
|
mutex_lock(&ctx->uring_lock);
|
|
|
|
while (!list_empty(&ctx->tctx_list)) {
|
2021-03-06 11:02:16 +00:00
|
|
|
WARN_ON_ONCE(time_after(jiffies, timeout));
|
|
|
|
|
2021-03-06 11:02:13 +00:00
|
|
|
node = list_first_entry(&ctx->tctx_list, struct io_tctx_node,
|
|
|
|
ctx_node);
|
2021-04-14 12:38:34 +00:00
|
|
|
/* don't spin on a single task if cancellation failed */
|
|
|
|
list_rotate_left(&ctx->tctx_list);
|
2021-03-06 11:02:13 +00:00
|
|
|
ret = task_work_add(node->task, &exit.task_work, TWA_SIGNAL);
|
|
|
|
if (WARN_ON_ONCE(ret))
|
|
|
|
continue;
|
|
|
|
|
|
|
|
mutex_unlock(&ctx->uring_lock);
|
2023-06-12 03:14:09 +00:00
|
|
|
/*
|
|
|
|
* See comment above for
|
|
|
|
* wait_for_completion_interruptible_timeout() on why this
|
|
|
|
* wait is marked as interruptible.
|
|
|
|
*/
|
|
|
|
wait_for_completion_interruptible(&exit.completion);
|
2021-03-06 11:02:13 +00:00
|
|
|
mutex_lock(&ctx->uring_lock);
|
|
|
|
}
|
|
|
|
mutex_unlock(&ctx->uring_lock);
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_lock(&ctx->completion_lock);
|
|
|
|
spin_unlock(&ctx->completion_lock);
|
2021-03-06 11:02:13 +00:00
|
|
|
|
2023-04-06 13:20:08 +00:00
|
|
|
/* pairs with RCU read section in io_req_local_work_add() */
|
|
|
|
if (ctx->flags & IORING_SETUP_DEFER_TASKRUN)
|
|
|
|
synchronize_rcu();
|
|
|
|
|
2020-04-10 00:14:00 +00:00
|
|
|
io_ring_ctx_free(ctx);
|
|
|
|
}
|
|
|
|
|
2021-10-04 19:02:54 +00:00
|
|
|
static __cold void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
|
2019-01-07 17:46:33 +00:00
|
|
|
{
|
2021-03-08 14:16:16 +00:00
|
|
|
unsigned long index;
|
|
|
|
struct creds *creds;
|
|
|
|
|
2019-01-07 17:46:33 +00:00
|
|
|
mutex_lock(&ctx->uring_lock);
|
|
|
|
percpu_ref_kill(&ctx->refs);
|
2021-03-08 14:16:16 +00:00
|
|
|
xa_for_each(&ctx->personalities, index, creds)
|
|
|
|
io_unregister_personality(ctx, index);
|
2019-01-07 17:46:33 +00:00
|
|
|
mutex_unlock(&ctx->uring_lock);
|
|
|
|
|
2023-06-28 17:06:05 +00:00
|
|
|
flush_delayed_work(&ctx->fallback_work);
|
|
|
|
|
2020-04-10 00:14:00 +00:00
|
|
|
INIT_WORK(&ctx->exit_work, io_ring_exit_work);
|
2020-08-19 17:10:51 +00:00
|
|
|
/*
|
|
|
|
* Use system_unbound_wq to avoid spawning tons of event kworkers
|
|
|
|
* if we're exiting a ton of rings at the same time. It just adds
|
|
|
|
* noise and overhead, there's no discernible change in runtime
|
|
|
|
* over using system_wq.
|
|
|
|
*/
|
2024-04-01 21:16:19 +00:00
|
|
|
queue_work(iou_wq, &ctx->exit_work);
|
2019-01-07 17:46:33 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static int io_uring_release(struct inode *inode, struct file *file)
|
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx = file->private_data;
|
|
|
|
|
|
|
|
file->private_data = NULL;
|
|
|
|
io_ring_ctx_wait_and_kill(ctx);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2020-11-06 13:00:26 +00:00
|
|
|
struct io_task_cancel {
|
|
|
|
struct task_struct *task;
|
2021-05-16 21:58:04 +00:00
|
|
|
bool all;
|
2020-11-06 13:00:26 +00:00
|
|
|
};
|
2020-08-12 23:33:30 +00:00
|
|
|
|
2020-11-06 13:00:26 +00:00
|
|
|
static bool io_cancel_task_cb(struct io_wq_work *work, void *data)
|
2020-08-16 15:23:05 +00:00
|
|
|
{
|
2020-11-05 22:31:37 +00:00
|
|
|
struct io_kiocb *req = container_of(work, struct io_kiocb, work);
|
2020-11-06 13:00:26 +00:00
|
|
|
struct io_task_cancel *cancel = data;
|
2020-11-05 22:31:37 +00:00
|
|
|
|
2021-11-26 14:38:15 +00:00
|
|
|
return io_match_task_safe(req, cancel->task, cancel->all);
|
2020-08-16 15:23:05 +00:00
|
|
|
}
|
|
|
|
|
2021-10-04 19:02:54 +00:00
|
|
|
static __cold bool io_cancel_defer_files(struct io_ring_ctx *ctx,
|
|
|
|
struct task_struct *task,
|
|
|
|
bool cancel_all)
|
2020-09-05 21:45:14 +00:00
|
|
|
{
|
2021-03-11 23:29:35 +00:00
|
|
|
struct io_defer_entry *de;
|
2020-09-05 21:45:14 +00:00
|
|
|
LIST_HEAD(list);
|
|
|
|
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_lock(&ctx->completion_lock);
|
2020-09-05 21:45:14 +00:00
|
|
|
list_for_each_entry_reverse(de, &ctx->defer_list, list) {
|
2021-11-26 14:38:15 +00:00
|
|
|
if (io_match_task_safe(de->req, task, cancel_all)) {
|
2020-09-05 21:45:14 +00:00
|
|
|
list_cut_position(&list, &ctx->defer_list, &de->list);
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
2021-08-10 21:18:27 +00:00
|
|
|
spin_unlock(&ctx->completion_lock);
|
2021-03-11 23:29:35 +00:00
|
|
|
if (list_empty(&list))
|
|
|
|
return false;
|
2020-09-05 21:45:14 +00:00
|
|
|
|
|
|
|
while (!list_empty(&list)) {
|
|
|
|
de = list_first_entry(&list, struct io_defer_entry, list);
|
|
|
|
list_del_init(&de->list);
|
2022-11-23 11:33:37 +00:00
|
|
|
io_req_task_queue_fail(de->req, -ECANCELED);
|
2020-09-05 21:45:14 +00:00
|
|
|
kfree(de);
|
|
|
|
}
|
2021-03-11 23:29:35 +00:00
|
|
|
return true;
|
2020-09-05 21:45:14 +00:00
|
|
|
}
|
|
|
|
|
2021-10-04 19:02:54 +00:00
|
|
|
static __cold bool io_uring_try_cancel_iowq(struct io_ring_ctx *ctx)
|
2021-03-06 11:02:17 +00:00
|
|
|
{
|
|
|
|
struct io_tctx_node *node;
|
|
|
|
enum io_wq_cancel cret;
|
|
|
|
bool ret = false;
|
|
|
|
|
|
|
|
mutex_lock(&ctx->uring_lock);
|
|
|
|
list_for_each_entry(node, &ctx->tctx_list, ctx_node) {
|
|
|
|
struct io_uring_task *tctx = node->task->io_uring;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* io_wq will stay alive while we hold uring_lock, because it's
|
|
|
|
* killed after ctx nodes, which requires to take the lock.
|
|
|
|
*/
|
|
|
|
if (!tctx || !tctx->io_wq)
|
|
|
|
continue;
|
|
|
|
cret = io_wq_cancel_cb(tctx->io_wq, io_cancel_ctx_cb, ctx, true);
|
|
|
|
ret |= (cret != IO_WQ_CANCEL_NOTFOUND);
|
|
|
|
}
|
|
|
|
mutex_unlock(&ctx->uring_lock);
|
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2022-06-20 00:25:52 +00:00
|
|
|
static __cold bool io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
|
2021-10-04 19:02:54 +00:00
|
|
|
struct task_struct *task,
|
|
|
|
bool cancel_all)
|
2021-02-04 13:51:56 +00:00
|
|
|
{
|
2021-05-16 21:58:04 +00:00
|
|
|
struct io_task_cancel cancel = { .task = task, .all = cancel_all, };
|
2021-03-06 11:02:17 +00:00
|
|
|
struct io_uring_task *tctx = task ? task->io_uring : NULL;
|
2022-06-20 00:25:52 +00:00
|
|
|
enum io_wq_cancel cret;
|
|
|
|
bool ret = false;
|
2021-02-04 13:51:56 +00:00
|
|
|
|
2023-04-06 13:20:14 +00:00
|
|
|
/* set it so io_req_local_work_add() would wake us up */
|
|
|
|
if (ctx->flags & IORING_SETUP_DEFER_TASKRUN) {
|
|
|
|
atomic_set(&ctx->cq_wait_nr, 1);
|
|
|
|
smp_mb();
|
|
|
|
}
|
|
|
|
|
2022-03-21 22:02:20 +00:00
|
|
|
/* failed during ring init, it couldn't have issued any requests */
|
|
|
|
if (!ctx->rings)
|
2022-06-20 00:25:52 +00:00
|
|
|
return false;
|
2022-03-21 22:02:20 +00:00
|
|
|
|
2022-06-20 00:25:52 +00:00
|
|
|
if (!task) {
|
|
|
|
ret |= io_uring_try_cancel_iowq(ctx);
|
|
|
|
} else if (tctx && tctx->io_wq) {
|
|
|
|
/*
|
|
|
|
* Cancels requests of all rings, not only @ctx, but
|
|
|
|
* it's fine as the task is in exit/exec.
|
|
|
|
*/
|
|
|
|
cret = io_wq_cancel_cb(tctx->io_wq, io_cancel_task_cb,
|
|
|
|
&cancel, true);
|
|
|
|
ret |= (cret != IO_WQ_CANCEL_NOTFOUND);
|
|
|
|
}
|
2021-02-04 13:51:56 +00:00
|
|
|
|
2022-06-20 00:25:52 +00:00
|
|
|
/* SQPOLL thread does its own polling */
|
|
|
|
if ((!(ctx->flags & IORING_SETUP_SQPOLL) && cancel_all) ||
|
|
|
|
(ctx->sq_data && ctx->sq_data->thread == current)) {
|
|
|
|
while (!wq_list_empty(&ctx->iopoll_list)) {
|
|
|
|
io_iopoll_try_reap_events(ctx);
|
|
|
|
ret = true;
|
2023-01-27 16:28:13 +00:00
|
|
|
cond_resched();
|
2021-02-04 13:51:56 +00:00
|
|
|
}
|
|
|
|
}
|
2022-06-20 00:25:52 +00:00
|
|
|
|
2023-01-05 11:22:23 +00:00
|
|
|
if ((ctx->flags & IORING_SETUP_DEFER_TASKRUN) &&
|
|
|
|
io_allowed_defer_tw_run(ctx))
|
2024-01-31 17:50:08 +00:00
|
|
|
ret |= io_run_local_work(ctx, INT_MAX) > 0;
|
2022-06-20 00:25:52 +00:00
|
|
|
ret |= io_cancel_defer_files(ctx, task, cancel_all);
|
|
|
|
mutex_lock(&ctx->uring_lock);
|
|
|
|
ret |= io_poll_remove_all(ctx, task, cancel_all);
|
2023-07-10 22:14:37 +00:00
|
|
|
ret |= io_waitid_remove_all(ctx, task, cancel_all);
|
io_uring: add support for futex wake and wait
Add support for FUTEX_WAKE/WAIT primitives.
IORING_OP_FUTEX_WAKE is a mix of FUTEX_WAKE and FUTEX_WAKE_BITSET, as
it does support passing in a bitset.
Similarly, IORING_OP_FUTEX_WAIT is a mix of FUTEX_WAIT and
FUTEX_WAIT_BITSET.
Both of them use the futex2 interface.
FUTEX_WAKE is straightforward, as wakes can always be done directly from
the io_uring submission without needing async handling. For FUTEX_WAIT,
things are a bit more complicated. If the futex isn't ready, then we
rely on a callback via futex_queue->wake() when someone wakes up the
futex. From that callback, we queue up task_work with the original task,
which will post a CQE and wake it, if necessary.
Cancellations are supported, both from the application point of view,
but also to be able to cancel pending waits if the ring exits before
all events have occurred. The return value of futex_unqueue() is used
to gate who wins the potential race between cancellation and futex
wakeups. Whoever gets a 'ret == 1' return from that call claims ownership
of the io_uring futex request.
This is just the barebones wait/wake support. PI or REQUEUE support is
not added at this point; it is unclear whether we will look into that later.
Likewise, explicit timeouts are not supported either. Users that need
timeouts are expected to use the usual io_uring mechanism for that:
linked timeouts.
The SQE format is as follows:
`addr` Address of futex
`fd` futex2(2) FUTEX2_* flags
`futex_flags` io_uring specific command flags. None valid now.
`addr2` Value of futex
`addr3` Mask to wake/wait
A user-space sketch of filling such an SQE follows below.
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2023-06-08 17:57:40 +00:00
|
|
|
ret |= io_futex_remove_all(ctx, task, cancel_all);
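To make the SQE mapping documented above concrete, here is a minimal
user-space sketch that queues an IORING_OP_FUTEX_WAIT by filling the raw
SQE fields exactly as listed (addr = futex address, fd = FUTEX2_* flags,
futex_flags = 0, addr2 = expected value, addr3 = mask). It assumes
liburing for ring setup and uapi headers that already carry the futex
opcode and the FUTEX2_SIZE_U32/FUTEX_BITSET_MATCH_ANY constants; treat
it as a sketch, not the canonical helper.

/* Sketch: queue a futex wait via io_uring, using the raw SQE field
 * mapping from the commit message above. Assumes liburing plus a
 * kernel/uapi that has IORING_OP_FUTEX_WAIT and the FUTEX2_* flags. */
#include <errno.h>
#include <liburing.h>
#include <linux/futex.h>
#include <stdint.h>
#include <string.h>

static int queue_futex_wait(struct io_uring *ring, uint32_t *futex, uint32_t val)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

	if (!sqe)
		return -EBUSY;
	memset(sqe, 0, sizeof(*sqe));
	sqe->opcode = IORING_OP_FUTEX_WAIT;
	sqe->addr = (unsigned long)futex;	/* `addr`: address of futex */
	sqe->fd = FUTEX2_SIZE_U32;		/* `fd`: futex2(2) FUTEX2_* flags */
	sqe->futex_flags = 0;			/* `futex_flags`: none defined yet */
	sqe->addr2 = val;			/* `addr2`: value the futex must still hold */
	sqe->addr3 = FUTEX_BITSET_MATCH_ANY;	/* `addr3`: mask to wait on */
	return io_uring_submit(ring);
}

The CQE for this request only arrives once another thread (or an
IORING_OP_FUTEX_WAKE submission) wakes the futex, via the task_work
completion path described in the message above.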
|
2023-09-28 12:43:25 +00:00
|
|
|
ret |= io_uring_try_cancel_uring_cmd(ctx, task, cancel_all);
|
2022-06-20 00:25:52 +00:00
|
|
|
mutex_unlock(&ctx->uring_lock);
|
|
|
|
ret |= io_kill_timeouts(ctx, task, cancel_all);
|
|
|
|
if (task)
|
2022-08-30 12:50:10 +00:00
|
|
|
ret |= io_run_task_work() > 0;
|
2024-03-18 16:41:25 +00:00
|
|
|
else
|
|
|
|
ret |= flush_delayed_work(&ctx->fallback_work);
|
2022-06-20 00:25:52 +00:00
|
|
|
return ret;
|
2021-02-04 13:51:56 +00:00
|
|
|
}
|
|
|
|
|
2021-04-11 00:46:27 +00:00
|
|
|
static s64 tctx_inflight(struct io_uring_task *tctx, bool tracked)
|
io_uring: cancel sqpoll via task_work
1) The first problem is io_uring_cancel_sqpoll() ->
io_uring_cancel_task_requests() basically doing park(); park(); and so
hanging.
2) Another one is more subtle: the master task is doing cancellations,
but the SQPOLL task submits in between the end of the cancellation and
finish() requests taking a ref to the ctx, and so eternally locks it up.
3) Yet another is a dying SQPOLL task doing io_uring_cancel_sqpoll()
while the owner task runs the same io_uring_cancel_sqpoll(); they race
for tctx->wait events. And there are probably more of them.
Instead, do SQPOLL cancellations from within SQPOLL task context via
task_work, see io_sqpoll_cancel_sync(). With that we don't need the
temporary park()/unpark() during cancellation, which is ugly, subtle and
in any case doesn't allow io_run_task_work() to be done properly.
io_uring_cancel_sqpoll() is called only from SQPOLL task context and
under sqd locking, so all parking is removed from there. As a result,
io_sq_thread_[un]park() and io_sq_thread_stop() are no longer used by
the SQPOLL task, which spares us some headache.
Also remove ctx->sqd_list early to avoid 2). And kill tctx->sqpoll,
which is not used anymore.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-03-11 23:29:38 +00:00
|
|
|
{
|
2021-04-11 00:46:27 +00:00
|
|
|
if (tracked)
|
2022-06-02 05:57:02 +00:00
|
|
|
return atomic_read(&tctx->inflight_tracked);
|
2021-03-11 23:29:38 +00:00
|
|
|
return percpu_counter_sum(&tctx->inflight);
|
|
|
|
}
|
|
|
|
|
2021-06-14 01:36:23 +00:00
|
|
|
/*
|
|
|
|
* Find any io_uring ctx that this task has registered or done IO on, and cancel
|
2021-12-09 15:54:29 +00:00
|
|
|
* requests. @sqd should be not-null IFF it's an SQPOLL thread cancellation.
|
2021-06-14 01:36:23 +00:00
|
|
|
*/
|
2022-05-25 15:13:39 +00:00
|
|
|
__cold void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd)
|
2021-02-07 22:34:26 +00:00
|
|
|
{
|
2021-03-11 23:29:38 +00:00
|
|
|
struct io_uring_task *tctx = current->io_uring;
|
2021-04-18 13:52:09 +00:00
|
|
|
struct io_ring_ctx *ctx;
|
2023-04-06 13:20:14 +00:00
|
|
|
struct io_tctx_node *node;
|
|
|
|
unsigned long index;
|
2021-02-07 22:34:26 +00:00
|
|
|
s64 inflight;
|
|
|
|
DEFINE_WAIT(wait);
|
2020-10-30 15:37:30 +00:00
|
|
|
|
2021-06-14 01:36:23 +00:00
|
|
|
WARN_ON_ONCE(sqd && sqd->thread != current);
|
|
|
|
|
2021-04-27 12:51:49 +00:00
|
|
|
if (!current->io_uring)
|
|
|
|
return;
|
2021-05-23 14:48:39 +00:00
|
|
|
if (tctx->io_wq)
|
|
|
|
io_wq_exit_start(tctx->io_wq);
|
|
|
|
|
2023-02-17 15:27:23 +00:00
|
|
|
atomic_inc(&tctx->in_cancel);
|
2021-02-07 22:34:26 +00:00
|
|
|
do {
|
2022-06-20 00:25:52 +00:00
|
|
|
bool loop = false;
|
|
|
|
|
2021-08-09 12:04:20 +00:00
|
|
|
io_uring_drop_tctx_refs(current);
|
io_uring: tighten task exit cancellations
io_uring_cancel_generic() should retry if any state changes, e.g. if a
request is completed; however, in case of a task exit it only goes for
another loop and avoids schedule() if a tracked (i.e. REQ_F_INFLIGHT)
request got completed.
Let's assume we have a non-tracked request executing in io-wq and a
tracked request linked to it. Let's also assume
io_uring_cancel_generic() fails to find and cancel the request, i.e.
via io_run_local_work(), which may happen as io-wq has gaps.
Next, the request logically completes; io-wq still holds a ref but
queues it for completion via tw, which happens in
io_uring_try_cancel_requests(). After that, right before
prepare_to_wait(), io-wq puts the request, grabs the linked one and
tries to execute it, e.g. arms polling. Finally the cancellation loop
calls prepare_to_wait(); there is no tw to run and no tracked request
was completed, so the tctx_inflight() check passes and the task is put
to indefinite sleep.
Cc: stable@vger.kernel.org
Fixes: 3f48cf18f886c ("io_uring: unify files and task cancel")
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/acac7311f4e02ce3c43293f8f1fda9c705d158f1.1721819383.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2024-07-24 11:16:16 +00:00
|
|
|
if (!tctx_inflight(tctx, !cancel_all))
|
|
|
|
break;
|
|
|
|
|
2021-02-07 22:34:26 +00:00
|
|
|
/* read completions before cancelations */
|
2024-07-24 11:16:16 +00:00
|
|
|
inflight = tctx_inflight(tctx, false);
|
2021-02-07 22:34:26 +00:00
|
|
|
if (!inflight)
|
|
|
|
break;
|
2020-10-30 15:37:30 +00:00
|
|
|
|
2021-06-14 01:36:23 +00:00
|
|
|
if (!sqd) {
|
|
|
|
xa_for_each(&tctx->xa, index, node) {
|
|
|
|
/* sqpoll task will cancel all its requests */
|
|
|
|
if (node->ctx->sq_data)
|
|
|
|
continue;
|
2022-06-20 00:25:52 +00:00
|
|
|
loop |= io_uring_try_cancel_requests(node->ctx,
|
|
|
|
current, cancel_all);
|
2021-06-14 01:36:23 +00:00
|
|
|
}
|
|
|
|
} else {
|
|
|
|
list_for_each_entry(ctx, &sqd->ctx_list, sqd_list)
|
2022-06-20 00:25:52 +00:00
|
|
|
loop |= io_uring_try_cancel_requests(ctx,
|
|
|
|
current,
|
|
|
|
cancel_all);
|
|
|
|
}
|
|
|
|
|
|
|
|
if (loop) {
|
|
|
|
cond_resched();
|
|
|
|
continue;
|
2021-06-14 01:36:23 +00:00
|
|
|
}
|
2021-05-23 14:48:39 +00:00
|
|
|
|
2021-12-09 15:54:29 +00:00
|
|
|
prepare_to_wait(&tctx->wait, &wait, TASK_INTERRUPTIBLE);
|
|
|
|
io_run_task_work();
|
2021-08-09 12:04:20 +00:00
|
|
|
io_uring_drop_tctx_refs(current);
|
2023-04-06 13:20:14 +00:00
|
|
|
xa_for_each(&tctx->xa, index, node) {
|
|
|
|
if (!llist_empty(&node->ctx->work_llist)) {
|
|
|
|
WARN_ON_ONCE(node->ctx->submitter_task &&
|
|
|
|
node->ctx->submitter_task != current);
|
|
|
|
goto end_wait;
|
|
|
|
}
|
|
|
|
}
|
2020-09-13 19:09:39 +00:00
|
|
|
/*
|
2021-01-26 15:28:26 +00:00
|
|
|
* If we've seen completions, retry without waiting. This
|
|
|
|
* avoids a race where a completion comes in before we did
|
|
|
|
* prepare_to_wait().
|
2020-09-13 19:09:39 +00:00
|
|
|
*/
|
2021-05-16 21:58:04 +00:00
|
|
|
if (inflight == tctx_inflight(tctx, !cancel_all))
|
2021-01-26 15:28:26 +00:00
|
|
|
schedule();
|
2023-04-06 13:20:14 +00:00
|
|
|
end_wait:
|
2020-12-20 13:21:44 +00:00
|
|
|
finish_wait(&tctx->wait, &wait);
|
2020-10-15 22:24:45 +00:00
|
|
|
} while (1);
|
2021-01-04 20:43:29 +00:00
|
|
|
|
2021-02-27 11:16:46 +00:00
|
|
|
io_uring_clean_tctx(tctx);
|
2021-05-16 21:58:04 +00:00
|
|
|
if (cancel_all) {
|
2022-01-09 00:53:22 +00:00
|
|
|
/*
|
|
|
|
* We shouldn't run task_works after cancel, so just leave
|
2023-02-17 15:27:23 +00:00
|
|
|
* ->in_cancel set for normal exit.
|
2022-01-09 00:53:22 +00:00
|
|
|
*/
|
2023-02-17 15:27:23 +00:00
|
|
|
atomic_dec(&tctx->in_cancel);
|
2021-04-11 00:46:27 +00:00
|
|
|
/* for exec all current's requests should be gone, kill tctx */
|
|
|
|
__io_uring_free(current);
|
|
|
|
}
|
2020-06-15 07:24:04 +00:00
|
|
|
}
|
|
|
|
|
2021-08-12 04:14:35 +00:00
|
|
|
void __io_uring_cancel(bool cancel_all)
|
2021-06-14 01:36:23 +00:00
|
|
|
{
|
2021-08-12 04:14:35 +00:00
|
|
|
io_uring_cancel_generic(cancel_all, NULL);
|
2021-06-14 01:36:23 +00:00
|
|
|
}
|
|
|
|
|
2022-03-22 14:07:56 +00:00
|
|
|
static int io_validate_ext_arg(unsigned flags, const void __user *argp, size_t argsz)
|
|
|
|
{
|
|
|
|
if (flags & IORING_ENTER_EXT_ARG) {
|
|
|
|
struct io_uring_getevents_arg arg;
|
|
|
|
|
|
|
|
if (argsz != sizeof(arg))
|
|
|
|
return -EINVAL;
|
|
|
|
if (copy_from_user(&arg, argp, sizeof(arg)))
|
|
|
|
return -EFAULT;
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2024-08-15 22:13:32 +00:00
|
|
|
static int io_get_ext_arg(unsigned flags, const void __user *argp,
|
|
|
|
struct ext_arg *ext_arg)
|
2020-11-03 02:54:37 +00:00
|
|
|
{
|
|
|
|
struct io_uring_getevents_arg arg;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If EXT_ARG isn't set, then we have no timespec and the argp pointer
|
|
|
|
* is just a pointer to the sigset_t.
|
|
|
|
*/
|
|
|
|
if (!(flags & IORING_ENTER_EXT_ARG)) {
|
2024-08-15 22:13:32 +00:00
|
|
|
ext_arg->sig = (const sigset_t __user *) argp;
|
|
|
|
ext_arg->ts = NULL;
|
2020-11-03 02:54:37 +00:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* EXT_ARG is set - ensure we agree on the size of it and copy in our
|
|
|
|
* timespec and sigset_t pointers if good.
|
|
|
|
*/
|
2024-08-15 22:13:32 +00:00
|
|
|
if (ext_arg->argsz != sizeof(arg))
|
2020-11-03 02:54:37 +00:00
|
|
|
return -EINVAL;
|
|
|
|
if (copy_from_user(&arg, argp, sizeof(arg)))
|
|
|
|
return -EFAULT;
|
2024-01-04 17:46:30 +00:00
|
|
|
ext_arg->min_time = arg.min_wait_usec * NSEC_PER_USEC;
|
2024-08-15 22:13:32 +00:00
|
|
|
ext_arg->sig = u64_to_user_ptr(arg.sigmask);
|
|
|
|
ext_arg->argsz = arg.sigmask_sz;
|
|
|
|
ext_arg->ts = u64_to_user_ptr(arg.ts);
|
2020-11-03 02:54:37 +00:00
|
|
|
return 0;
|
|
|
|
}
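For reference, this is roughly how an application drives the path above
from user space: with IORING_ENTER_EXT_ARG set, argp points at an
io_uring_getevents_arg carrying the sigmask pointer, its size, and a
timespec pointer, and argsz must be sizeof() of that struct (min_wait_usec
is left at zero here). A minimal sketch, assuming the uapi definitions of
io_uring_getevents_arg and __kernel_timespec and using the raw syscall
since libc provides no wrapper:

/* Wait for CQEs with both a timeout and a signal mask, using the
 * extended-argument form of io_uring_enter() parsed by io_get_ext_arg()
 * above. Field names mirror what that function reads from user space. */
#include <linux/io_uring.h>
#include <linux/time_types.h>
#include <signal.h>
#include <stdint.h>
#include <sys/syscall.h>
#include <unistd.h>

static int wait_cqes_ext(int ring_fd, unsigned int min_complete,
			 const sigset_t *mask, long long timeout_ns)
{
	struct __kernel_timespec ts = {
		.tv_sec = timeout_ns / 1000000000LL,
		.tv_nsec = timeout_ns % 1000000000LL,
	};
	struct io_uring_getevents_arg arg = {
		.sigmask = (uint64_t)(uintptr_t)mask,
		.sigmask_sz = _NSIG / 8,	/* kernel-side sigset size */
		.ts = (uint64_t)(uintptr_t)&ts,
	};

	/* argsz must be sizeof(arg), or io_get_ext_arg() returns -EINVAL */
	return syscall(__NR_io_uring_enter, ring_fd, 0, min_complete,
		       IORING_ENTER_GETEVENTS | IORING_ENTER_EXT_ARG,
		       &arg, sizeof(arg));
}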
|
|
|
|
|
2019-01-07 17:46:33 +00:00
|
|
|
SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
|
2020-11-03 02:54:37 +00:00
|
|
|
u32, min_complete, u32, flags, const void __user *, argp,
|
|
|
|
size_t, argsz)
|
2019-01-07 17:46:33 +00:00
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx;
|
2023-11-28 17:29:58 +00:00
|
|
|
struct file *file;
|
2021-03-19 17:22:30 +00:00
|
|
|
long ret;
|
2019-01-07 17:46:33 +00:00
|
|
|
|
2021-03-19 17:22:30 +00:00
|
|
|
if (unlikely(flags & ~(IORING_ENTER_GETEVENTS | IORING_ENTER_SQ_WAKEUP |
|
io_uring: add support for registering ring file descriptors
Lots of workloads use multiple threads, in which case the file table is
shared between them. This makes getting and putting the ring file
descriptor for each io_uring_enter(2) system call more expensive, as it
involves an atomic get and put for each call.
Similarly to how we allow registering normal file descriptors to avoid
this overhead, add support for an io_uring_register(2) API that allows
registering the ring fds themselves:
1) IORING_REGISTER_RING_FDS - takes an array of io_uring_rsrc_update
   structs, and registers them with the task.
2) IORING_UNREGISTER_RING_FDS - takes an array of io_uring_rsrc_update
   structs, and unregisters them.
When a ring fd is registered, it is internally represented by an offset.
This offset is returned to the application, and the application then
uses this offset and sets IORING_ENTER_REGISTERED_RING for the
io_uring_enter(2) system call. This works just like using a registered
file descriptor, rather than a real one, in an SQE, where
IOSQE_FIXED_FILE gets set to tell io_uring that we're using an internal
offset/descriptor rather than a real file descriptor.
In initial testing, this provides a nice bump in performance for
threaded applications in real-world cases where the batch count (e.g.
the number of requests submitted per io_uring_enter(2) invocation) is
low. In a microbenchmark, submitting NOP requests, we see the following
increases in performance:
    Requests per syscall    Baseline    Registered    Increase
    -----------------------------------------------------------
    1                        ~7030K      ~8080K        +15%
    2                       ~13120K     ~14800K        +13%
    4                       ~22740K     ~25300K        +11%
A user-space sketch of registering a ring fd and entering through the
registered offset follows the fd-lookup code below.
Co-developed-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-03-04 15:22:22 +00:00
|
|
|
IORING_ENTER_SQ_WAIT | IORING_ENTER_EXT_ARG |
|
2024-08-07 14:18:13 +00:00
|
|
|
IORING_ENTER_REGISTERED_RING |
|
|
|
|
IORING_ENTER_ABS_TIMER)))
|
2019-01-07 17:46:33 +00:00
|
|
|
return -EINVAL;
|
|
|
|
|
2022-03-04 15:22:22 +00:00
|
|
|
/*
|
|
|
|
* Ring fd has been registered via IORING_REGISTER_RING_FDS, we
|
|
|
|
* need only dereference our task private array to find it.
|
|
|
|
*/
|
|
|
|
if (flags & IORING_ENTER_REGISTERED_RING) {
|
|
|
|
struct io_uring_task *tctx = current->io_uring;
|
|
|
|
|
2022-06-25 10:53:01 +00:00
|
|
|
if (unlikely(!tctx || fd >= IO_RINGFD_REG_MAX))
|
2022-03-04 15:22:22 +00:00
|
|
|
return -EINVAL;
|
|
|
|
fd = array_index_nospec(fd, IO_RINGFD_REG_MAX);
|
2023-11-28 17:29:58 +00:00
|
|
|
file = tctx->registered_rings[fd];
|
|
|
|
if (unlikely(!file))
|
2022-06-25 10:53:01 +00:00
|
|
|
return -EBADF;
|
2022-03-04 15:22:22 +00:00
|
|
|
} else {
|
2023-11-28 17:29:58 +00:00
|
|
|
file = fget(fd);
|
|
|
|
if (unlikely(!file))
|
2022-06-25 10:53:01 +00:00
|
|
|
return -EBADF;
|
|
|
|
ret = -EOPNOTSUPP;
|
2023-11-28 17:29:58 +00:00
|
|
|
if (unlikely(!io_is_uring_fops(file)))
|
2022-06-25 10:53:02 +00:00
|
|
|
goto out;
|
2022-03-04 15:22:22 +00:00
|
|
|
}
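Tying the registration mechanism described in the commit message above to
the fd lookup just performed: a user-space sketch of registering the ring
fd and then entering through the registered offset. The use of .data for
the real fd and .offset = -1U to let the kernel pick a slot is an
assumption based on that message; raw syscalls are used since libc has no
io_uring wrappers.

/* Register the ring fd (IORING_REGISTER_RING_FDS), then enter the ring
 * via the returned offset with IORING_ENTER_REGISTERED_RING, avoiding
 * the per-call fget()/fput() in the else-branch above. */
#include <linux/io_uring.h>
#include <sys/syscall.h>
#include <unistd.h>

static long enter_via_registered_ring(int ring_fd, unsigned int to_submit)
{
	struct io_uring_rsrc_update reg = {
		.offset = -1U,				/* assumed: let the kernel choose the slot */
		.data = (unsigned long long)ring_fd,	/* assumed: real ring fd goes in .data */
	};
	long ret;

	ret = syscall(__NR_io_uring_register, ring_fd,
		      IORING_REGISTER_RING_FDS, &reg, 1);
	if (ret < 0)
		return ret;

	/* From now on, pass the registered offset instead of the real fd. */
	return syscall(__NR_io_uring_enter, reg.offset, to_submit, 0,
		       IORING_ENTER_REGISTERED_RING, NULL, 0);
}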
|
2019-01-07 17:46:33 +00:00
|
|
|
|
2023-11-28 17:29:58 +00:00
|
|
|
ctx = file->private_data;
|
2020-08-27 14:58:31 +00:00
|
|
|
ret = -EBADFD;
|
2021-03-19 17:22:30 +00:00
|
|
|
if (unlikely(ctx->flags & IORING_SETUP_R_DISABLED))
|
2020-08-27 14:58:31 +00:00
|
|
|
goto out;
|
|
|
|
|
io_uring: add submission polling
This enables an application to do IO without ever entering the kernel.
By using the SQ ring to fill in new sqes and watching for completions
on the CQ ring, we can submit and reap IOs without doing a single system
call. The kernel side thread will poll for new submissions, and in case
of HIPRI/polled IO, it'll also poll for completions.
By default, we allow 1 second of active spinning. This can be changed
by passing in a different grace period at io_uring_register(2) time.
If the thread exceeds this idle time without having any work to do, it
will set:
sq_ring->flags |= IORING_SQ_NEED_WAKEUP.
The application will have to call io_uring_enter() to start things back
up again. If IO is kept busy, that will never be needed. Basically, an
application that has this feature enabled will guard its
io_uring_enter(2) call with:
read_barrier();
if (*sq_ring->flags & IORING_SQ_NEED_WAKEUP)
	io_uring_enter(fd, 0, 0, IORING_ENTER_SQ_WAKEUP);
instead of calling it unconditionally.
It's mandatory to use fixed files with this feature. Failure to do so
will result in the application getting an -EBADF CQ entry when
submitting IO.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-10 18:22:30 +00:00
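A slightly fleshed-out version of that guard, as it might look in an
application: a minimal sketch assuming the SQ ring flags word has been
mmap'ed per the io_uring_setup() layout, with a GCC/Clang acquire load
standing in for the read_barrier() above and the raw syscall standing in
for the io_uring_enter() wrapper.

/* Only enter the kernel when the SQPOLL thread has gone idle and set
 * IORING_SQ_NEED_WAKEUP in the mmap'ed SQ ring flags word. */
#include <linux/io_uring.h>
#include <sys/syscall.h>
#include <unistd.h>

static void wake_sqpoll_if_needed(int ring_fd, const unsigned int *sq_flags)
{
	/* Acquire load pairs with the kernel's store of the flags word. */
	unsigned int flags = __atomic_load_n(sq_flags, __ATOMIC_ACQUIRE);

	if (flags & IORING_SQ_NEED_WAKEUP)
		syscall(__NR_io_uring_enter, ring_fd, 0, 0,
			IORING_ENTER_SQ_WAKEUP, NULL, 0);
}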
|
|
|
/*
|
|
|
|
* For SQ polling, the thread will do all submissions and completions.
|
|
|
|
* Just return the requested submit count, and wake the thread if
|
|
|
|
* we were asked to.
|
|
|
|
*/
|
2019-09-12 20:19:16 +00:00
|
|
|
ret = 0;
|
2019-01-10 18:22:30 +00:00
|
|
|
if (ctx->flags & IORING_SETUP_SQPOLL) {
|
2021-08-14 15:04:40 +00:00
|
|
|
if (unlikely(ctx->sq_data->thread == NULL)) {
|
|
|
|
ret = -EOWNERDEAD;
|
2021-03-07 10:54:29 +00:00
|
|
|
goto out;
|
2021-08-14 15:04:40 +00:00
|
|
|
}
|
2019-01-10 18:22:30 +00:00
|
|
|
if (flags & IORING_ENTER_SQ_WAKEUP)
|
2020-09-02 19:52:19 +00:00
|
|
|
wake_up(&ctx->sq_data->wait);
|
2023-01-15 07:15:19 +00:00
|
|
|
if (flags & IORING_ENTER_SQ_WAIT)
|
|
|
|
io_sqpoll_wait_sq(ctx);
|
|
|
|
|
2022-04-21 09:13:42 +00:00
|
|
|
ret = to_submit;
|
2019-09-12 20:19:16 +00:00
|
|
|
} else if (to_submit) {
|
2021-06-14 01:36:15 +00:00
|
|
|
ret = io_uring_add_tctx_node(ctx);
|
2020-09-13 19:09:39 +00:00
|
|
|
if (unlikely(ret))
|
|
|
|
goto out;
|
2019-12-18 16:53:45 +00:00
|
|
|
|
2019-01-07 17:46:33 +00:00

		mutex_lock(&ctx->uring_lock);
		ret = io_submit_sqes(ctx, to_submit);
		if (ret != to_submit) {
			mutex_unlock(&ctx->uring_lock);
			goto out;
		}

		if (flags & IORING_ENTER_GETEVENTS) {
			if (ctx->syscall_iopoll)
				goto iopoll_locked;
			/*
			 * Ignore errors, we'll soon call io_cqring_wait() and
			 * it should handle ownership problems if any.
			 */
			if (ctx->flags & IORING_SETUP_DEFER_TASKRUN)
				(void)io_run_local_work_locked(ctx, min_complete);
		}
		mutex_unlock(&ctx->uring_lock);
	}

	if (flags & IORING_ENTER_GETEVENTS) {
		int ret2;

		if (ctx->syscall_iopoll) {
			/*
			 * We disallow the app entering submit/complete with
			 * polling, but we still need to lock the ring to
			 * prevent racing with polled issue that got punted to
			 * a workqueue.
			 */
			mutex_lock(&ctx->uring_lock);
iopoll_locked:
			ret2 = io_validate_ext_arg(flags, argp, argsz);
			if (likely(!ret2)) {
				min_complete = min(min_complete,
						   ctx->cq_entries);
				ret2 = io_iopoll_check(ctx, min_complete);
			}
			mutex_unlock(&ctx->uring_lock);
		} else {
			struct ext_arg ext_arg = { .argsz = argsz };

			ret2 = io_get_ext_arg(flags, argp, &ext_arg);
			if (likely(!ret2)) {
				min_complete = min(min_complete,
						   ctx->cq_entries);
				ret2 = io_cqring_wait(ctx, min_complete, flags,
						      &ext_arg);
			}
		}

		if (!ret) {
			ret = ret2;

			/*
			 * EBADR indicates that one or more CQE were dropped.
			 * Once the user has been informed we can clear the bit
			 * as they are obviously ok with those drops.
			 */
			if (unlikely(ret2 == -EBADR))
				clear_bit(IO_CHECK_CQ_DROPPED_BIT,
					  &ctx->check_cq);
		}
	}
out:
	if (!(flags & IORING_ENTER_REGISTERED_RING))
		fput(file);
	return ret;
}
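For reference, a minimal userspace sketch of exercising the GETEVENTS path reconstructed above (the kernel clamps min_complete to ctx->cq_entries and ends up in io_cqring_wait() or io_iopoll_check()). It assumes a ring fd from io_uring_setup() with SQEs already written into the mapped SQ ring; there is no libc wrapper, so syscall(2) is used directly, and the helper name is illustrative rather than part of this file:

#include <linux/io_uring.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Submit 'to_submit' queued SQEs and block until at least one CQE arrives. */
static int ring_submit_and_wait(int ring_fd, unsigned int to_submit)
{
	return (int)syscall(__NR_io_uring_enter, ring_fd, to_submit,
			    1 /* min_complete */, IORING_ENTER_GETEVENTS,
			    NULL, 0);
}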

static const struct file_operations io_uring_fops = {
	.release	= io_uring_release,
	.mmap		= io_uring_mmap,
	.get_unmapped_area = io_uring_get_unmapped_area,
#ifndef CONFIG_MMU
	.mmap_capabilities = io_uring_nommu_mmap_capabilities,
#endif
	.poll		= io_uring_poll,
#ifdef CONFIG_PROC_FS
	.show_fdinfo	= io_uring_show_fdinfo,
#endif
};

bool io_is_uring_fops(struct file *file)
{
	return file->f_op == &io_uring_fops;
}
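The .mmap handler installed above (io_uring_mmap) is what hands the ring regions to userspace. Below is a minimal sketch of the corresponding userspace mappings, assuming a ring set up without IORING_SETUP_NO_MMAP and a struct io_uring_params filled in by setup; the helper and struct names are illustrative, and the size computations follow the usual UAPI layout (SQ index array after the SQ ring header, CQEs after the CQ ring header):

#include <linux/io_uring.h>
#include <sys/mman.h>

struct ring_maps {
	void *sq_ring;
	void *cq_ring;
	void *sqes;
};

static int map_ring_regions(int ring_fd, const struct io_uring_params *p,
			    struct ring_maps *m)
{
	size_t sq_sz = p->sq_off.array + p->sq_entries * sizeof(unsigned int);
	size_t cq_sz = p->cq_off.cqes + p->cq_entries * sizeof(struct io_uring_cqe);
	size_t sqe_sz = p->sq_entries * sizeof(struct io_uring_sqe);

	/* The fixed mmap offsets select which region the kernel returns. */
	m->sq_ring = mmap(NULL, sq_sz, PROT_READ | PROT_WRITE, MAP_SHARED,
			  ring_fd, IORING_OFF_SQ_RING);
	m->cq_ring = mmap(NULL, cq_sz, PROT_READ | PROT_WRITE, MAP_SHARED,
			  ring_fd, IORING_OFF_CQ_RING);
	m->sqes = mmap(NULL, sqe_sz, PROT_READ | PROT_WRITE, MAP_SHARED,
		       ring_fd, IORING_OFF_SQES);

	if (m->sq_ring == MAP_FAILED || m->cq_ring == MAP_FAILED ||
	    m->sqes == MAP_FAILED)
		return -1;
	return 0;
}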

static __cold int io_allocate_scq_urings(struct io_ring_ctx *ctx,
					 struct io_uring_params *p)
{
	struct io_rings *rings;
	size_t size, sq_array_offset;
	void *ptr;

	/* make sure these are sane, as we already accounted them */
	ctx->sq_entries = p->sq_entries;
	ctx->cq_entries = p->cq_entries;

	size = rings_size(ctx, p->sq_entries, p->cq_entries, &sq_array_offset);
	if (size == SIZE_MAX)
		return -EOVERFLOW;
io_uring: support for user allocated memory for rings/sqes
Currently io_uring applications must call mmap(2) twice to map the rings
themselves, and the sqes array. This works fine, but it does not support
using huge pages to back the rings/sqes.
Provide a way for the application to pass in pre-allocated memory for
the rings/sqes, which can then suitably be allocated from shmfs or
via mmap to get huge page support.
Particularly for larger rings, this reduces the number of TLB entries needed.
If an application wishes to take advantage of that, it must pre-allocate
the memory needed for the sq/cq ring, and the sqes. The former must
be passed in via the io_uring_params->cq_off.user_addr field, while the
latter is passed in via the io_uring_params->sq_off.user_addr field. Then
it must set IORING_SETUP_NO_MMAP in the io_uring_params->flags field,
and io_uring will then map the existing memory into the kernel for shared
use. The application must not call mmap(2) to map rings as it otherwise
would have; that will now fail with -EINVAL if this setup flag was used.
The pages used for the rings and sqes must be contiguous. The intent here
is clearly that huge pages should be used, otherwise the normal setup
procedure works fine as-is. The application may use one huge page for
both the rings and sqes.
Outside of those initialization changes, everything works like it did
before.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-11-05 23:20:54 +00:00
	if (!(ctx->flags & IORING_SETUP_NO_MMAP))
		rings = io_pages_map(&ctx->ring_pages, &ctx->n_ring_pages, size);
	else
		rings = io_rings_map(ctx, p->cq_off.user_addr, size);

	if (IS_ERR(rings))
		return PTR_ERR(rings);

	ctx->rings = rings;
	if (!(ctx->flags & IORING_SETUP_NO_SQARRAY))
		ctx->sq_array = (u32 *)((char *)rings + sq_array_offset);
	rings->sq_ring_mask = p->sq_entries - 1;
	rings->cq_ring_mask = p->cq_entries - 1;
	rings->sq_ring_entries = p->sq_entries;
	rings->cq_ring_entries = p->cq_entries;

	if (p->flags & IORING_SETUP_SQE128)
		size = array_size(2 * sizeof(struct io_uring_sqe), p->sq_entries);
	else
		size = array_size(sizeof(struct io_uring_sqe), p->sq_entries);
	if (size == SIZE_MAX) {
		io_rings_free(ctx);
		return -EOVERFLOW;
	}

	if (!(ctx->flags & IORING_SETUP_NO_MMAP))
		ptr = io_pages_map(&ctx->sqe_pages, &ctx->n_sqe_pages, size);
	else
		ptr = io_sqes_map(ctx, p->sq_off.user_addr, size);

	if (IS_ERR(ptr)) {
		io_rings_free(ctx);
		return PTR_ERR(ptr);
	}

	ctx->sq_sqes = ptr;
	return 0;
}
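As the changelog quoted earlier in this function's annotations describes, the IORING_SETUP_NO_MMAP branches above take the rings and the SQE array from application-provided memory (p->cq_off.user_addr and p->sq_off.user_addr) instead of kernel-allocated pages. A rough userspace sketch follows; the helper name, the two-region split, and the 2MB sizes are illustrative assumptions, and a real application must size each region for the entries it actually requests:

#include <linux/io_uring.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Illustrative: set up a ring backed by caller-provided (huge page) memory. */
static int setup_user_backed_ring(unsigned int entries)
{
	struct io_uring_params p;
	size_t len = 2 * 1024 * 1024;	/* one 2MB huge page per region */
	void *ring_mem, *sqe_mem;

	ring_mem = mmap(NULL, len, PROT_READ | PROT_WRITE,
			MAP_SHARED | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	sqe_mem = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_SHARED | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (ring_mem == MAP_FAILED || sqe_mem == MAP_FAILED)
		return -1;

	memset(&p, 0, sizeof(p));
	p.flags = IORING_SETUP_NO_MMAP;
	/* Rings memory is passed via cq_off, the SQE array via sq_off. */
	p.cq_off.user_addr = (uint64_t)(uintptr_t)ring_mem;
	p.sq_off.user_addr = (uint64_t)(uintptr_t)sqe_mem;

	/* On success this returns the ring fd; no mmap(2) of it is needed. */
	return (int)syscall(__NR_io_uring_setup, entries, &p);
}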

static int io_uring_install_fd(struct file *file)
{
	int fd;

	fd = get_unused_fd_flags(O_RDWR | O_CLOEXEC);
	if (fd < 0)
		return fd;
	fd_install(fd, file);
	return fd;
}

/*
 * Allocate an anonymous fd; this is what constitutes the application
 * visible backing of an io_uring instance. The application mmaps this
 * fd to gain access to the SQ/CQ ring details.
 */
static struct file *io_uring_get_file(struct io_ring_ctx *ctx)
|
|
|
{
|
fs: Rename anon_inode_getfile_secure() and anon_inode_getfd_secure()
The call to the inode_init_security_anon() LSM hook is not the sole
reason to use anon_inode_getfile_secure() or anon_inode_getfd_secure().
For example, the functions also allow one to create a file with non-zero
size, without needing a full-blown filesystem. In this case, you don't
need a "secure" version, just unique inodes; the current name of the
functions is confusing and does not explain well the difference with
the more "standard" anon_inode_getfile() and anon_inode_getfd().
Of course, there is another side of the coin; neither io_uring nor
userfaultfd strictly speaking need distinct inodes, and it is not
that clear anymore that anon_inode_create_get{file,fd}() allow the LSM
to intercept and block the inode's creation. If one was so inclined,
anon_inode_getfile_secure() and anon_inode_getfd_secure() could be kept,
using the shared inode or a new one depending on CONFIG_SECURITY.
However, this is probably overkill, and potentially a cause of bugs in
different configurations. Therefore, just add a comment to io_uring
and userfaultfd explaining the choice of the function.
While at it, remove the export for what is now anon_inode_create_getfd().
There is no in-tree module that uses it, and the old name is gone anyway.
If anybody actually needs the symbol, they can ask or they can just use
anon_inode_create_getfile(), which will be exported very soon for use
in KVM.
Suggested-by: Christian Brauner <brauner@kernel.org>
Reviewed-by: Christian Brauner <brauner@kernel.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-11-03 10:47:51 +00:00
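For readability, the helper annotated line by line below assembles to roughly the following (a reconstruction of the surrounding fragments, not an independent implementation). Using the anon_inode_create_getfile() variant means each io_uring instance gets its own inode, so an LSM hooked into inode creation can veto it; plain anon_inode_getfile() would reuse the shared anon inode instead.

static struct file *io_uring_get_file(struct io_ring_ctx *ctx)
{
	/* Create a new inode so that the LSM can block the creation. */
	return anon_inode_create_getfile("[io_uring]", &io_uring_fops, ctx,
					  O_RDWR | O_CLOEXEC, NULL);
}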
|
|
|
/* Create a new inode so that the LSM can block the creation. */
|
There are two non-KVM patches buried in the middle of guest_memfd support:
fs: Rename anon_inode_getfile_secure() and anon_inode_getfd_secure()
mm: Add AS_UNMOVABLE to mark mapping as completely unmovable
The first is small and mostly suggested-by Christian Brauner; the second
a bit less so but it was written by an mm person (Vlastimil Babka).
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull kvm updates from Paolo Bonzini:
"Generic:
- Use memdup_array_user() to harden against overflow.
- Unconditionally advertise KVM_CAP_DEVICE_CTRL for all
architectures.
- Clean up Kconfigs that all KVM architectures were selecting
- New functionality around "guest_memfd", a new userspace API that
creates an anonymous file and returns a file descriptor that refers
to it. guest_memfd files are bound to their owning virtual machine,
cannot be mapped, read, or written by userspace, and cannot be
resized. guest_memfd files do however support PUNCH_HOLE, which can
be used to switch a memory area between guest_memfd and regular
anonymous memory.
- New ioctl KVM_SET_MEMORY_ATTRIBUTES allowing userspace to specify
per-page attributes for a given page of guest memory; right now the
only attribute is whether the guest expects to access memory via
     guest_memfd or not, which in confidential VMs backed by SEV-SNP,
TDX or ARM64 pKVM is checked by firmware or hypervisor that
guarantees confidentiality (AMD PSP, Intel TDX module, or EL2 in
the case of pKVM).
x86:
- Support for "software-protected VMs" that can use the new
guest_memfd and page attributes infrastructure. This is mostly
useful for testing, since there is no pKVM-like infrastructure to
provide a meaningfully reduced TCB.
- Fix a relatively benign off-by-one error when splitting huge pages
during CLEAR_DIRTY_LOG.
- Fix a bug where KVM could incorrectly test-and-clear dirty bits in
non-leaf TDP MMU SPTEs if a racing thread replaces a huge SPTE with
a non-huge SPTE.
- Use more generic lockdep assertions in paths that don't actually
care about whether the caller is a reader or a writer.
- let Xen guests opt out of having PV clock reported as "based on a
stable TSC", because some of them don't expect the "TSC stable" bit
(added to the pvclock ABI by KVM, but never set by Xen) to be set.
- Revert a bogus, made-up nested SVM consistency check for
TLB_CONTROL.
- Advertise flush-by-ASID support for nSVM unconditionally, as KVM
always flushes on nested transitions, i.e. always satisfies flush
requests. This allows running bleeding edge versions of VMware
Workstation on top of KVM.
- Sanity check that the CPU supports flush-by-ASID when enabling SEV
support.
- On AMD machines with vNMI, always rely on hardware instead of
intercepting IRET in some cases to detect unmasking of NMIs
- Support for virtualizing Linear Address Masking (LAM)
   - Fix a variety of vPMU bugs where KVM fails to stop/reset counters
and other state prior to refreshing the vPMU model.
- Fix a double-overflow PMU bug by tracking emulated counter events
using a dedicated field instead of snapshotting the "previous"
counter. If the hardware PMC count triggers overflow that is
recognized in the same VM-Exit that KVM manually bumps an event
count, KVM would pend PMIs for both the hardware-triggered overflow
and for KVM-triggered overflow.
- Turn off KVM_WERROR by default for all configs so that it's not
     inadvertently enabled by non-KVM developers, which can be
problematic for subsystems that require no regressions for W=1
builds.
- Advertise all of the host-supported CPUID bits that enumerate
IA32_SPEC_CTRL "features".
- Don't force a masterclock update when a vCPU synchronizes to the
current TSC generation, as updating the masterclock can cause
kvmclock's time to "jump" unexpectedly, e.g. when userspace
hotplugs a pre-created vCPU.
- Use RIP-relative address to read kvm_rebooting in the VM-Enter
fault paths, partly as a super minor optimization, but mostly to
make KVM play nice with position independent executable builds.
- Guard KVM-on-HyperV's range-based TLB flush hooks with an #ifdef on
CONFIG_HYPERV as a minor optimization, and to self-document the
code.
- Add CONFIG_KVM_HYPERV to allow disabling KVM support for HyperV
"emulation" at build time.
ARM64:
- LPA2 support, adding 52bit IPA/PA capability for 4kB and 16kB base
granule sizes. Branch shared with the arm64 tree.
- Large Fine-Grained Trap rework, bringing some sanity to the
feature, although there is more to come. This comes with a prefix
branch shared with the arm64 tree.
- Some additional Nested Virtualization groundwork, mostly
     introducing the NV2 VNCR support and retargeting the NV support to
that version of the architecture.
- A small set of vgic fixes and associated cleanups.
Loongarch:
- Optimization for memslot hugepage checking
- Cleanup and fix some HW/SW timer issues
- Add LSX/LASX (128bit/256bit SIMD) support
RISC-V:
- KVM_GET_REG_LIST improvement for vector registers
- Generate ISA extension reg_list using macros in get-reg-list
selftest
- Support for reporting steal time along with selftest
s390:
- Bugfixes
Selftests:
- Fix an annoying goof where the NX hugepage test prints out garbage
instead of the magic token needed to run the test.
   - Fix build errors when a header is deleted/moved due to a missing
flag in the Makefile.
- Detect if KVM bugged/killed a selftest's VM and print out a helpful
message instead of complaining that a random ioctl() failed.
- Annotate the guest printf/assert helpers with __printf(), and fix
the various bugs that were lurking due to lack of said annotation"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (185 commits)
x86/kvm: Do not try to disable kvmclock if it was not enabled
KVM: x86: add missing "depends on KVM"
KVM: fix direction of dependency on MMU notifiers
KVM: introduce CONFIG_KVM_COMMON
KVM: arm64: Add missing memory barriers when switching to pKVM's hyp pgd
KVM: arm64: vgic-its: Avoid potential UAF in LPI translation cache
RISC-V: KVM: selftests: Add get-reg-list test for STA registers
RISC-V: KVM: selftests: Add steal_time test support
RISC-V: KVM: selftests: Add guest_sbi_probe_extension
RISC-V: KVM: selftests: Move sbi_ecall to processor.c
RISC-V: KVM: Implement SBI STA extension
RISC-V: KVM: Add support for SBI STA registers
RISC-V: KVM: Add support for SBI extension registers
RISC-V: KVM: Add SBI STA info to vcpu_arch
RISC-V: KVM: Add steal-update vcpu request
RISC-V: KVM: Add SBI STA extension skeleton
RISC-V: paravirt: Implement steal-time support
RISC-V: Add SBI STA extension definitions
RISC-V: paravirt: Add skeleton for pv-time support
RISC-V: KVM: Fix indentation in kvm_riscv_vcpu_set_reg_csr()
...
2024-01-17 21:03:37 +00:00
|
|
|
return anon_inode_create_getfile("[io_uring]", &io_uring_fops, ctx,
|
2021-02-02 00:33:52 +00:00
|
|
|
O_RDWR | O_CLOEXEC, NULL);
|
2019-01-07 17:46:33 +00:00
|
|
|
}
|
|
|
|
|
2021-10-04 19:02:54 +00:00
|
|
|
static __cold int io_uring_create(unsigned entries, struct io_uring_params *p,
|
|
|
|
struct io_uring_params __user *params)
|
2019-01-07 17:46:33 +00:00
|
|
|
{
|
|
|
|
struct io_ring_ctx *ctx;
|
2023-04-28 16:40:30 +00:00
|
|
|
struct io_uring_task *tctx;
|
2020-12-21 18:34:05 +00:00
|
|
|
struct file *file;
|
2019-01-07 17:46:33 +00:00
|
|
|
int ret;
|
|
|
|
|
2019-12-28 22:39:54 +00:00
|
|
|
if (!entries)
|
2019-01-07 17:46:33 +00:00
|
|
|
return -EINVAL;
|
2019-12-28 22:39:54 +00:00
|
|
|
if (entries > IORING_MAX_ENTRIES) {
|
|
|
|
if (!(p->flags & IORING_SETUP_CLAMP))
|
|
|
|
return -EINVAL;
|
|
|
|
entries = IORING_MAX_ENTRIES;
|
|
|
|
}
|
2019-01-07 17:46:33 +00:00
|
|
|
|
2023-04-28 16:40:30 +00:00
|
|
|
if ((p->flags & IORING_SETUP_REGISTERED_FD_ONLY)
|
|
|
|
&& !(p->flags & IORING_SETUP_NO_MMAP))
|
|
|
|
return -EINVAL;
|
|
|
|
|
2019-01-07 17:46:33 +00:00
|
|
|
/*
|
|
|
|
* Use twice as many entries for the CQ ring. It's possible for the
|
|
|
|
* application to drive a higher depth than the size of the SQ ring,
|
|
|
|
* since the sqes are only used at submission time. This allows for
|
2019-10-04 18:10:03 +00:00
|
|
|
* some flexibility in overcommitting a bit. If the application has
|
|
|
|
* set IORING_SETUP_CQSIZE, it will have passed in the desired number
|
|
|
|
* of CQ ring entries manually.
|
2019-01-07 17:46:33 +00:00
|
|
|
*/
|
|
|
|
p->sq_entries = roundup_pow_of_two(entries);
|
2019-10-04 18:10:03 +00:00
|
|
|
if (p->flags & IORING_SETUP_CQSIZE) {
|
|
|
|
/*
|
|
|
|
* If IORING_SETUP_CQSIZE is set, we do the same roundup
|
|
|
|
* to a power-of-two, if it isn't already. We do NOT impose
|
|
|
|
* any cq vs sq ring sizing.
|
|
|
|
*/
|
2020-11-24 07:03:03 +00:00
|
|
|
if (!p->cq_entries)
|
2019-10-04 18:10:03 +00:00
|
|
|
return -EINVAL;
|
2019-12-28 22:39:54 +00:00
|
|
|
if (p->cq_entries > IORING_MAX_CQ_ENTRIES) {
|
|
|
|
if (!(p->flags & IORING_SETUP_CLAMP))
|
|
|
|
return -EINVAL;
|
|
|
|
p->cq_entries = IORING_MAX_CQ_ENTRIES;
|
|
|
|
}
|
2020-11-24 07:03:03 +00:00
|
|
|
p->cq_entries = roundup_pow_of_two(p->cq_entries);
|
|
|
|
if (p->cq_entries < p->sq_entries)
|
|
|
|
return -EINVAL;
|
2019-10-04 18:10:03 +00:00
|
|
|
} else {
|
|
|
|
p->cq_entries = 2 * p->sq_entries;
|
|
|
|
}
|
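A small userspace-side illustration of the sizing rules above (a hedged sketch, not kernel code; the kernel additionally clamps against IORING_MAX_ENTRIES and IORING_MAX_CQ_ENTRIES when IORING_SETUP_CLAMP is set):

/* Illustration of the ring sizing performed by io_uring_create(). */
#include <stdio.h>

static unsigned roundup_pow2(unsigned v)
{
	unsigned r = 1;

	while (r < v)
		r <<= 1;
	return r;
}

int main(void)
{
	/* io_uring_setup(100, &p) with no flags: */
	unsigned sq = roundup_pow2(100);	/* 128 */
	unsigned cq = 2 * sq;			/* 256 */

	printf("default: sq=%u cq=%u\n", sq, cq);

	/* With IORING_SETUP_CQSIZE and p.cq_entries = 1000: */
	cq = roundup_pow2(1000);		/* 1024; must stay >= sq_entries */
	printf("cqsize:  sq=%u cq=%u\n", sq, cq);
	return 0;
}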
2019-01-07 17:46:33 +00:00
|
|
|
|
|
|
|
ctx = io_ring_ctx_alloc(p);
|
2021-02-21 23:19:37 +00:00
|
|
|
if (!ctx)
|
2019-01-07 17:46:33 +00:00
|
|
|
return -ENOMEM;
|
2022-03-22 14:07:57 +00:00
|
|
|
|
2024-08-07 14:18:14 +00:00
|
|
|
ctx->clockid = CLOCK_MONOTONIC;
|
|
|
|
ctx->clock_offset = 0;
|
|
|
|
|
2022-12-07 03:53:30 +00:00
|
|
|
if ((ctx->flags & IORING_SETUP_DEFER_TASKRUN) &&
|
|
|
|
!(ctx->flags & IORING_SETUP_IOPOLL) &&
|
|
|
|
!(ctx->flags & IORING_SETUP_SQPOLL))
|
|
|
|
ctx->task_complete = true;
|
|
|
|
|
2023-08-24 22:53:29 +00:00
|
|
|
if (ctx->task_complete || (ctx->flags & IORING_SETUP_IOPOLL))
|
|
|
|
ctx->lockless_cq = true;
|
|
|
|
|
2023-01-09 14:46:09 +00:00
|
|
|
/*
|
|
|
|
* lazy poll_wq activation relies on ->task_complete for synchronisation
|
|
|
|
* purposes, see io_activate_pollwq()
|
|
|
|
*/
|
|
|
|
if (!ctx->task_complete)
|
|
|
|
ctx->poll_activated = true;
|
|
|
|
|
2022-03-22 14:07:57 +00:00
|
|
|
/*
|
|
|
|
* When SETUP_IOPOLL and SETUP_SQPOLL are both enabled, user
|
|
|
|
* space applications don't need to do io completion events
|
|
|
|
* polling again, they can rely on io_sq_thread to do polling
|
|
|
|
* work, which can reduce cpu usage and uring_lock contention.
|
|
|
|
*/
|
|
|
|
if (ctx->flags & IORING_SETUP_IOPOLL &&
|
|
|
|
!(ctx->flags & IORING_SETUP_SQPOLL))
|
|
|
|
ctx->syscall_iopoll = 1;
|
|
|
|
|
2019-01-07 17:46:33 +00:00
|
|
|
ctx->compat = in_compat_syscall();
|
2023-07-18 11:56:07 +00:00
|
|
|
if (!ns_capable_noaudit(&init_user_ns, CAP_IPC_LOCK))
|
2021-02-21 23:19:37 +00:00
|
|
|
ctx->user = get_uid(current_user());
|
2020-09-14 16:45:53 +00:00
|
|
|
|
2022-04-26 01:49:02 +00:00
|
|
|
/*
|
2022-04-26 01:49:03 +00:00
|
|
|
* For SQPOLL, we just need a wakeup, always. For !SQPOLL, if
|
|
|
|
* COOP_TASKRUN is set, then IPIs are never needed by the app.
|
2022-04-26 01:49:02 +00:00
|
|
|
*/
|
2022-04-26 01:49:03 +00:00
|
|
|
ret = -EINVAL;
|
|
|
|
if (ctx->flags & IORING_SETUP_SQPOLL) {
|
|
|
|
/* IPI related flags don't make sense with SQPOLL */
|
2022-04-26 01:49:04 +00:00
|
|
|
if (ctx->flags & (IORING_SETUP_COOP_TASKRUN |
|
2022-08-30 12:50:10 +00:00
|
|
|
IORING_SETUP_TASKRUN_FLAG |
|
|
|
|
IORING_SETUP_DEFER_TASKRUN))
|
2022-04-26 01:49:03 +00:00
|
|
|
goto err;
|
2022-04-26 01:49:02 +00:00
|
|
|
ctx->notify_method = TWA_SIGNAL_NO_IPI;
|
2022-04-26 01:49:03 +00:00
|
|
|
} else if (ctx->flags & IORING_SETUP_COOP_TASKRUN) {
|
|
|
|
ctx->notify_method = TWA_SIGNAL_NO_IPI;
|
|
|
|
} else {
|
2022-08-30 12:50:10 +00:00
|
|
|
if (ctx->flags & IORING_SETUP_TASKRUN_FLAG &&
|
|
|
|
!(ctx->flags & IORING_SETUP_DEFER_TASKRUN))
|
2022-04-26 01:49:04 +00:00
|
|
|
goto err;
|
2022-04-26 01:49:02 +00:00
|
|
|
ctx->notify_method = TWA_SIGNAL;
|
2022-04-26 01:49:03 +00:00
|
|
|
}
|
2022-04-26 01:49:02 +00:00
|
|
|
|
2022-08-30 12:50:10 +00:00
|
|
|
/*
|
|
|
|
* For DEFER_TASKRUN we require the completion task to be the same as the
|
|
|
|
* submission task. This implies that there is only one submitter, so enforce
|
|
|
|
* that.
|
|
|
|
*/
|
|
|
|
if (ctx->flags & IORING_SETUP_DEFER_TASKRUN &&
|
|
|
|
!(ctx->flags & IORING_SETUP_SINGLE_ISSUER)) {
|
|
|
|
goto err;
|
|
|
|
}
|
|
|
|
|
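Summed up, the checks above reject SQPOLL combined with any of the task-run flags, and require SINGLE_ISSUER whenever DEFER_TASKRUN is requested. A hedged userspace sketch of a setup call that passes these checks (raw syscall used for illustration):

/* Hedged sketch: a flag combination accepted by the validation above. */
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/io_uring.h>

int setup_defer_taskrun_ring(unsigned entries)
{
	struct io_uring_params p;

	memset(&p, 0, sizeof(p));
	p.flags = IORING_SETUP_SINGLE_ISSUER | IORING_SETUP_DEFER_TASKRUN;

	/*
	 * Rejected with -EINVAL by the checks above:
	 *   IORING_SETUP_SQPOLL | IORING_SETUP_DEFER_TASKRUN
	 *   IORING_SETUP_DEFER_TASKRUN without IORING_SETUP_SINGLE_ISSUER
	 */
	return (int)syscall(__NR_io_uring_setup, entries, &p);
}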
2020-09-14 16:45:53 +00:00
|
|
|
/*
|
|
|
|
* This is just grabbed for accounting purposes. When a process exits,
|
|
|
|
* the mm is exited and dropped before the files, hence we need to hang
|
|
|
|
* on to this mm purely for the purposes of being able to unaccount
|
|
|
|
* memory (locked/pinned vm). It's not used for anything else.
|
|
|
|
*/
|
2020-08-25 13:58:00 +00:00
|
|
|
mmgrab(current->mm);
|
2020-09-14 16:45:53 +00:00
|
|
|
ctx->mm_account = current->mm;
|
2020-08-25 13:58:00 +00:00
|
|
|
|
2019-01-07 17:46:33 +00:00
|
|
|
ret = io_allocate_scq_urings(ctx, p);
|
|
|
|
if (ret)
|
|
|
|
goto err;
|
|
|
|
|
2020-08-27 14:58:31 +00:00
|
|
|
ret = io_sq_offload_create(ctx, p);
|
2019-01-07 17:46:33 +00:00
|
|
|
if (ret)
|
|
|
|
goto err;
|
2023-04-11 11:06:07 +00:00
|
|
|
|
|
|
|
ret = io_rsrc_init(ctx);
|
2021-04-29 10:46:48 +00:00
|
|
|
if (ret)
|
|
|
|
goto err;
|
2019-01-07 17:46:33 +00:00
|
|
|
|
2019-08-26 17:23:46 +00:00
|
|
|
p->sq_off.head = offsetof(struct io_rings, sq.head);
|
|
|
|
p->sq_off.tail = offsetof(struct io_rings, sq.tail);
|
|
|
|
p->sq_off.ring_mask = offsetof(struct io_rings, sq_ring_mask);
|
|
|
|
p->sq_off.ring_entries = offsetof(struct io_rings, sq_ring_entries);
|
|
|
|
p->sq_off.flags = offsetof(struct io_rings, sq_flags);
|
|
|
|
p->sq_off.dropped = offsetof(struct io_rings, sq_dropped);
|
2023-08-24 22:53:32 +00:00
|
|
|
if (!(ctx->flags & IORING_SETUP_NO_SQARRAY))
|
|
|
|
p->sq_off.array = (char *)ctx->sq_array - (char *)ctx->rings;
|
2021-11-05 23:11:34 +00:00
|
|
|
p->sq_off.resv1 = 0;
|
io_uring: support for user allocated memory for rings/sqes
Currently io_uring applications must call mmap(2) twice to map the rings
themselves, and the sqes array. This works fine, but it does not support
using huge pages to back the rings/sqes.
Provide a way for the application to pass in pre-allocated memory for
the rings/sqes, which can then suitably be allocated from shmfs or
via mmap to get huge page support.
Particularly for larger rings, this reduces the TLBs needed.
If an application wishes to take advantage of that, it must pre-allocate
the memory needed for the sq/cq ring, and the sqes. The former must
be passed in via the io_uring_params->cq_off.user_addr field, while the
latter is passed in via the io_uring_params->sq_off.user_addr field. Then
it must set IORING_SETUP_NO_MMAP in the io_uring_params->flags field,
and io_uring will then map the existing memory into the kernel for shared
use. The application must not call mmap(2) to map rings as it otherwise
would have, that will now fail with -EINVAL if this setup flag was used.
The pages used for the rings and sqes must be contiguous. The intent here
is clearly that huge pages should be used, otherwise the normal setup
procedure works fine as-is. The application may use one huge page for
both the rings and sqes.
Outside of those initialization changes, everything works like it did
before.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-11-05 23:20:54 +00:00
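A hedged userspace sketch of the scheme this commit describes; the huge-page allocation and the split between the ring area and the SQE area are illustrative only, and a real application (or liburing on its behalf) must size the two regions for its actual ring dimensions:

/* Hedged sketch: hand pre-allocated, contiguous memory to io_uring_setup()
 * instead of mmap()ing the rings afterwards. */
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/io_uring.h>

#define HUGE_2MB	(2UL * 1024 * 1024)

int setup_user_memory_ring(unsigned entries)
{
	struct io_uring_params p;
	void *mem;

	/* One huge page backs both the SQ/CQ rings and the SQE array. */
	mem = mmap(NULL, HUGE_2MB, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (mem == MAP_FAILED)
		return -1;

	memset(&p, 0, sizeof(p));
	p.flags = IORING_SETUP_NO_MMAP;
	/* Ring memory at the start of the page, SQEs in the second half
	 * (illustrative split; the rings must fit in the first half). */
	p.cq_off.user_addr = (uintptr_t)mem;
	p.sq_off.user_addr = (uintptr_t)mem + HUGE_2MB / 2;

	return (int)syscall(__NR_io_uring_setup, entries, &p);
}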
|
|
|
if (!(ctx->flags & IORING_SETUP_NO_MMAP))
|
|
|
|
p->sq_off.user_addr = 0;
|
2019-01-07 17:46:33 +00:00
|
|
|
|
2019-08-26 17:23:46 +00:00
|
|
|
p->cq_off.head = offsetof(struct io_rings, cq.head);
|
|
|
|
p->cq_off.tail = offsetof(struct io_rings, cq.tail);
|
|
|
|
p->cq_off.ring_mask = offsetof(struct io_rings, cq_ring_mask);
|
|
|
|
p->cq_off.ring_entries = offsetof(struct io_rings, cq_ring_entries);
|
|
|
|
p->cq_off.overflow = offsetof(struct io_rings, cq_overflow);
|
|
|
|
p->cq_off.cqes = offsetof(struct io_rings, cqes);
|
2020-05-15 16:38:04 +00:00
|
|
|
p->cq_off.flags = offsetof(struct io_rings, cq_flags);
|
2021-11-05 23:11:34 +00:00
|
|
|
p->cq_off.resv1 = 0;
|
2021-11-05 23:20:54 +00:00
|
|
|
if (!(ctx->flags & IORING_SETUP_NO_MMAP))
|
|
|
|
p->cq_off.user_addr = 0;
|
2019-09-06 16:26:21 +00:00
|
|
|
|
2020-05-05 08:28:53 +00:00
|
|
|
p->features = IORING_FEAT_SINGLE_MMAP | IORING_FEAT_NODROP |
|
|
|
|
IORING_FEAT_SUBMIT_STABLE | IORING_FEAT_RW_CUR_POS |
|
2020-06-17 09:53:55 +00:00
|
|
|
IORING_FEAT_CUR_PERSONALITY | IORING_FEAT_FAST_POLL |
|
2020-11-03 02:54:37 +00:00
|
|
|
IORING_FEAT_POLL_32BITS | IORING_FEAT_SQPOLL_NONFIXED |
|
2021-06-10 15:37:38 +00:00
|
|
|
IORING_FEAT_EXT_ARG | IORING_FEAT_NATIVE_WORKERS |
|
2022-04-10 21:13:24 +00:00
|
|
|
IORING_FEAT_RSRC_TAGS | IORING_FEAT_CQE_SKIP |
|
2024-03-05 23:22:04 +00:00
|
|
|
IORING_FEAT_LINKED_FILE | IORING_FEAT_REG_REG_RING |
|
2024-01-04 17:46:30 +00:00
|
|
|
IORING_FEAT_RECVSEND_BUNDLE | IORING_FEAT_MIN_TIMEOUT;
|
2020-05-05 08:28:53 +00:00
|
|
|
|
|
|
|
if (copy_to_user(params, p, sizeof(*p))) {
|
|
|
|
ret = -EFAULT;
|
|
|
|
goto err;
|
|
|
|
}
|
2020-07-30 19:43:53 +00:00
|
|
|
|
2022-09-26 17:09:25 +00:00
|
|
|
if (ctx->flags & IORING_SETUP_SINGLE_ISSUER
|
|
|
|
&& !(ctx->flags & IORING_SETUP_R_DISABLED))
|
2023-01-20 16:38:06 +00:00
|
|
|
WRITE_ONCE(ctx->submitter_task, get_task_struct(current));
|
2022-09-26 17:09:25 +00:00
|
|
|
|
2020-12-21 18:34:05 +00:00
|
|
|
file = io_uring_get_file(ctx);
|
|
|
|
if (IS_ERR(file)) {
|
|
|
|
ret = PTR_ERR(file);
|
|
|
|
goto err;
|
|
|
|
}
|
|
|
|
|
2023-04-28 16:40:30 +00:00
|
|
|
ret = __io_uring_add_tctx_node(ctx);
|
|
|
|
if (ret)
|
|
|
|
goto err_fput;
|
|
|
|
tctx = current->io_uring;
|
|
|
|
|
2019-10-28 15:15:33 +00:00
|
|
|
/*
|
|
|
|
* Install ring fd as the very last thing, so we don't risk someone
|
|
|
|
* having closed it before we finish setup
|
|
|
|
*/
|
2023-04-28 16:40:30 +00:00
|
|
|
if (p->flags & IORING_SETUP_REGISTERED_FD_ONLY)
|
|
|
|
ret = io_ring_add_registered_file(tctx, file, 0, IO_RINGFD_REG_MAX);
|
|
|
|
else
|
|
|
|
ret = io_uring_install_fd(file);
|
|
|
|
if (ret < 0)
|
|
|
|
goto err_fput;
|
2019-10-28 15:15:33 +00:00
|
|
|
|
io_uring: add set of tracing events
To trace io_uring activity one can get information from workqueue and
io trace events, but some parts can be hard to identify via
this approach. Making what happens inside io_uring more transparent is
important to be able to reason about many aspects of it, hence introduce
the set of tracing events.
All such events could be roughly divided into two categories:
* those that help to understand correctness (from both the kernel
and the application point of view). E.g. a ring creation, file
registration, or waiting for an available CQE. The proposed approach is to
get a pointer to an original structure of interest (ring context, or
request), and then find relevant events. io_uring_queue_async_work
also exposes a pointer to work_struct, to be able to track down
corresponding workqueue events.
* those that provide performance-related information. Mostly it's about
events that change the flow of requests, e.g. whether an async work
was queued, or delayed due to some dependencies. Another important
case is how io_uring optimizations (e.g. registered files) are
utilized.
Signed-off-by: Dmitrii Dolgov <9erthalion6@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-10-15 17:02:01 +00:00
|
|
|
trace_io_uring_create(ret, ctx, p->sq_entries, p->cq_entries, p->flags);
|
2019-01-07 17:46:33 +00:00
|
|
|
return ret;
|
|
|
|
err:
|
|
|
|
io_ring_ctx_wait_and_kill(ctx);
|
|
|
|
return ret;
|
2023-04-28 16:40:30 +00:00
|
|
|
err_fput:
|
|
|
|
fput(file);
|
|
|
|
return ret;
|
2019-01-07 17:46:33 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Sets up an io_uring context, and returns the fd. Applications ask for a
|
|
|
|
* ring size; we return the actual sq/cq ring sizes (among other things) in the
|
|
|
|
* params structure passed in.
|
|
|
|
*/
|
|
|
|
static long io_uring_setup(u32 entries, struct io_uring_params __user *params)
|
|
|
|
{
|
|
|
|
struct io_uring_params p;
|
|
|
|
int i;
|
|
|
|
|
|
|
|
if (copy_from_user(&p, params, sizeof(p)))
|
|
|
|
return -EFAULT;
|
|
|
|
for (i = 0; i < ARRAY_SIZE(p.resv); i++) {
|
|
|
|
if (p.resv[i])
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
io_uring: add submission polling
This enables an application to do IO without ever entering the kernel.
By using the SQ ring to fill in new sqes and watching for completions
on the CQ ring, we can submit and reap IOs without doing a single system
call. The kernel side thread will poll for new submissions, and in case
of HIPRI/polled IO, it'll also poll for completions.
By default, we allow 1 second of active spinning. This can be changed
by passing in a different grace period at io_uring_register(2) time.
If the thread exceeds this idle time without having any work to do, it
will set:
sq_ring->flags |= IORING_SQ_NEED_WAKEUP.
The application will have to call io_uring_enter() to start things back
up again. If IO is kept busy, that will never be needed. Basically an
application that has this feature enabled will guard its
io_uring_enter(2) call with:
read_barrier();
if (*sq_ring->flags & IORING_SQ_NEED_WAKEUP)
io_uring_enter(fd, 0, 0, IORING_ENTER_SQ_WAKEUP);
instead of calling it unconditionally.
It's mandatory to use fixed files with this feature. Failure to do so
will result in the application getting an -EBADF CQ entry when
submitting IO.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-01-10 18:22:30 +00:00
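A hedged C sketch of that guard (illustrative, not from this patch): it uses the raw io_uring_enter(2) syscall and a GCC/Clang acquire load in place of the read_barrier() above, and it assumes 'sq_flags' points at the SQ ring flags word, i.e. the ring mapping plus the sq_off.flags offset returned by io_uring_setup(2).

#include <linux/io_uring.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Kick the SQPOLL thread only when it has gone to sleep. */
static void enter_if_needed(int ring_fd, const unsigned *sq_flags)
{
	/* acquire-load pairs with the kernel setting IORING_SQ_NEED_WAKEUP */
	unsigned flags = __atomic_load_n(sq_flags, __ATOMIC_ACQUIRE);

	if (flags & IORING_SQ_NEED_WAKEUP)
		syscall(__NR_io_uring_enter, ring_fd, 0, 0,
			IORING_ENTER_SQ_WAKEUP, NULL, 0);
}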
|
|
|
if (p.flags & ~(IORING_SETUP_IOPOLL | IORING_SETUP_SQPOLL |
|
2019-12-28 22:39:54 +00:00
|
|
|
IORING_SETUP_SQ_AFF | IORING_SETUP_CQSIZE |
|
2020-08-27 14:58:31 +00:00
|
|
|
IORING_SETUP_CLAMP | IORING_SETUP_ATTACH_WQ |
|
2022-04-26 01:49:03 +00:00
|
|
|
IORING_SETUP_R_DISABLED | IORING_SETUP_SUBMIT_ALL |
|
2022-04-01 01:27:52 +00:00
|
|
|
IORING_SETUP_COOP_TASKRUN | IORING_SETUP_TASKRUN_FLAG |
|
2022-06-16 09:22:08 +00:00
|
|
|
IORING_SETUP_SQE128 | IORING_SETUP_CQE32 |
|
2021-11-05 23:20:54 +00:00
|
|
|
IORING_SETUP_SINGLE_ISSUER | IORING_SETUP_DEFER_TASKRUN |
|
2023-08-24 22:53:32 +00:00
|
|
|
IORING_SETUP_NO_MMAP | IORING_SETUP_REGISTERED_FD_ONLY |
|
|
|
|
IORING_SETUP_NO_SQARRAY))
|
2019-01-07 17:46:33 +00:00
|
|
|
return -EINVAL;
|
|
|
|
|
2022-04-26 01:49:04 +00:00
|
|
|
return io_uring_create(entries, &p, params);
|
2019-01-07 17:46:33 +00:00
|
|
|
}
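For context, a hedged userspace sketch of how a caller consumes the offsets returned here (illustrative only; it assumes the kernel reports IORING_FEAT_SINGLE_MMAP so one mapping covers both rings, and it skips unmapping on error):

#include <linux/io_uring.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

static int map_rings(unsigned entries)
{
	struct io_uring_params p;
	size_t ring_sz, cq_sz, sqes_sz;
	unsigned *sq_tail, *cq_head;
	void *ring_ptr, *sqes;
	int fd;

	memset(&p, 0, sizeof(p));
	fd = syscall(__NR_io_uring_setup, entries, &p);
	if (fd < 0)
		return -1;
	if (!(p.features & IORING_FEAT_SINGLE_MMAP))
		return -1;	/* keep the sketch simple */

	/* one mapping covers both rings; take the larger of the two sizes */
	ring_sz = p.sq_off.array + p.sq_entries * sizeof(unsigned);
	cq_sz = p.cq_off.cqes + p.cq_entries * sizeof(struct io_uring_cqe);
	if (cq_sz > ring_sz)
		ring_sz = cq_sz;
	ring_ptr = mmap(NULL, ring_sz, PROT_READ | PROT_WRITE,
			MAP_SHARED | MAP_POPULATE, fd, IORING_OFF_SQ_RING);

	/* the sqe array is a separate mapping at IORING_OFF_SQES */
	sqes_sz = p.sq_entries * sizeof(struct io_uring_sqe);
	sqes = mmap(NULL, sqes_sz, PROT_READ | PROT_WRITE,
		    MAP_SHARED | MAP_POPULATE, fd, IORING_OFF_SQES);
	if (ring_ptr == MAP_FAILED || sqes == MAP_FAILED)
		return -1;

	/* ring pointers are located via the byte offsets filled in above */
	sq_tail = (unsigned *)((char *)ring_ptr + p.sq_off.tail);
	cq_head = (unsigned *)((char *)ring_ptr + p.cq_off.head);
	(void)sq_tail;
	(void)cq_head;
	return fd;
}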
|
|
|
|
|
2023-08-21 21:15:52 +00:00
|
|
|
static inline bool io_uring_allowed(void)
|
|
|
|
{
|
|
|
|
int disabled = READ_ONCE(sysctl_io_uring_disabled);
|
|
|
|
kgid_t io_uring_group;
|
|
|
|
|
|
|
|
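/* sysctl kernel.io_uring_disabled == 2: io_uring creation is off for everyone */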
if (disabled == 2)
|
|
|
|
return false;
|
|
|
|
|
|
|
|
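/* 0 means enabled for all; CAP_SYS_ADMIN also bypasses the group restriction */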
if (disabled == 0 || capable(CAP_SYS_ADMIN))
|
|
|
|
return true;
|
|
|
|
|
|
|
|
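/* 1: only members of the group named by the kernel.io_uring_group sysctl may proceed */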
io_uring_group = make_kgid(&init_user_ns, sysctl_io_uring_group);
|
|
|
|
if (!gid_valid(io_uring_group))
|
|
|
|
return false;
|
|
|
|
|
|
|
|
return in_group_p(io_uring_group);
|
|
|
|
}
|
|
|
|
|
2019-01-07 17:46:33 +00:00
|
|
|
SYSCALL_DEFINE2(io_uring_setup, u32, entries,
|
|
|
|
struct io_uring_params __user *, params)
|
|
|
|
{
|
2023-08-21 21:15:52 +00:00
|
|
|
if (!io_uring_allowed())
|
|
|
|
return -EPERM;
|
|
|
|
|
2019-01-07 17:46:33 +00:00
|
|
|
return io_uring_setup(entries, params);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int __init io_uring_init(void)
|
|
|
|
{
|
2022-08-11 07:11:16 +00:00
|
|
|
#define __BUILD_BUG_VERIFY_OFFSET_SIZE(stype, eoffset, esize, ename) do { \
|
2020-01-29 13:39:41 +00:00
|
|
|
BUILD_BUG_ON(offsetof(stype, ename) != eoffset); \
|
2022-08-11 07:11:16 +00:00
|
|
|
BUILD_BUG_ON(sizeof_field(stype, ename) != esize); \
|
2020-01-29 13:39:41 +00:00
|
|
|
} while (0)
|
|
|
|
|
|
|
|
#define BUILD_BUG_SQE_ELEM(eoffset, etype, ename) \
|
2022-08-11 07:11:16 +00:00
|
|
|
__BUILD_BUG_VERIFY_OFFSET_SIZE(struct io_uring_sqe, eoffset, sizeof(etype), ename)
|
|
|
|
#define BUILD_BUG_SQE_ELEM_SIZE(eoffset, esize, ename) \
|
|
|
|
__BUILD_BUG_VERIFY_OFFSET_SIZE(struct io_uring_sqe, eoffset, esize, ename)
|
2020-01-29 13:39:41 +00:00
|
|
|
BUILD_BUG_ON(sizeof(struct io_uring_sqe) != 64);
|
|
|
|
BUILD_BUG_SQE_ELEM(0, __u8, opcode);
|
|
|
|
BUILD_BUG_SQE_ELEM(1, __u8, flags);
|
|
|
|
BUILD_BUG_SQE_ELEM(2, __u16, ioprio);
|
|
|
|
BUILD_BUG_SQE_ELEM(4, __s32, fd);
|
|
|
|
BUILD_BUG_SQE_ELEM(8, __u64, off);
|
|
|
|
BUILD_BUG_SQE_ELEM(8, __u64, addr2);
|
2022-08-11 07:11:16 +00:00
|
|
|
BUILD_BUG_SQE_ELEM(8, __u32, cmd_op);
|
|
|
|
BUILD_BUG_SQE_ELEM(12, __u32, __pad1);
|
2020-01-29 13:39:41 +00:00
|
|
|
BUILD_BUG_SQE_ELEM(16, __u64, addr);
|
2020-02-24 08:32:45 +00:00
|
|
|
BUILD_BUG_SQE_ELEM(16, __u64, splice_off_in);
|
2020-01-29 13:39:41 +00:00
|
|
|
BUILD_BUG_SQE_ELEM(24, __u32, len);
|
|
|
|
BUILD_BUG_SQE_ELEM(28, __kernel_rwf_t, rw_flags);
|
|
|
|
BUILD_BUG_SQE_ELEM(28, /* compat */ int, rw_flags);
|
|
|
|
BUILD_BUG_SQE_ELEM(28, /* compat */ __u32, rw_flags);
|
|
|
|
BUILD_BUG_SQE_ELEM(28, __u32, fsync_flags);
|
2020-06-17 09:53:55 +00:00
|
|
|
BUILD_BUG_SQE_ELEM(28, /* compat */ __u16, poll_events);
|
|
|
|
BUILD_BUG_SQE_ELEM(28, __u32, poll32_events);
|
2020-01-29 13:39:41 +00:00
|
|
|
BUILD_BUG_SQE_ELEM(28, __u32, sync_range_flags);
|
|
|
|
BUILD_BUG_SQE_ELEM(28, __u32, msg_flags);
|
|
|
|
BUILD_BUG_SQE_ELEM(28, __u32, timeout_flags);
|
|
|
|
BUILD_BUG_SQE_ELEM(28, __u32, accept_flags);
|
|
|
|
BUILD_BUG_SQE_ELEM(28, __u32, cancel_flags);
|
|
|
|
BUILD_BUG_SQE_ELEM(28, __u32, open_flags);
|
|
|
|
BUILD_BUG_SQE_ELEM(28, __u32, statx_flags);
|
|
|
|
BUILD_BUG_SQE_ELEM(28, __u32, fadvise_advice);
|
2020-02-24 08:32:45 +00:00
|
|
|
BUILD_BUG_SQE_ELEM(28, __u32, splice_flags);
|
2022-08-11 07:11:16 +00:00
|
|
|
BUILD_BUG_SQE_ELEM(28, __u32, rename_flags);
|
|
|
|
BUILD_BUG_SQE_ELEM(28, __u32, unlink_flags);
|
|
|
|
BUILD_BUG_SQE_ELEM(28, __u32, hardlink_flags);
|
|
|
|
BUILD_BUG_SQE_ELEM(28, __u32, xattr_flags);
|
|
|
|
BUILD_BUG_SQE_ELEM(28, __u32, msg_ring_flags);
|
2020-01-29 13:39:41 +00:00
|
|
|
BUILD_BUG_SQE_ELEM(32, __u64, user_data);
|
|
|
|
BUILD_BUG_SQE_ELEM(40, __u16, buf_index);
|
2021-06-24 14:09:58 +00:00
|
|
|
BUILD_BUG_SQE_ELEM(40, __u16, buf_group);
|
2020-01-29 13:39:41 +00:00
|
|
|
BUILD_BUG_SQE_ELEM(42, __u16, personality);
|
2020-02-24 08:32:45 +00:00
|
|
|
BUILD_BUG_SQE_ELEM(44, __s32, splice_fd_in);
|
2021-08-25 11:25:45 +00:00
|
|
|
BUILD_BUG_SQE_ELEM(44, __u32, file_index);
|
2022-09-01 10:54:04 +00:00
|
|
|
BUILD_BUG_SQE_ELEM(44, __u16, addr_len);
|
|
|
|
BUILD_BUG_SQE_ELEM(46, __u16, __pad3[0]);
|
2022-03-23 15:44:19 +00:00
|
|
|
BUILD_BUG_SQE_ELEM(48, __u64, addr3);
|
2022-08-11 07:11:16 +00:00
|
|
|
BUILD_BUG_SQE_ELEM_SIZE(48, 0, cmd);
|
|
|
|
BUILD_BUG_SQE_ELEM(56, __u64, __pad2);
|
2020-01-29 13:39:41 +00:00
|
|
|
|
2021-04-27 15:13:53 +00:00
|
|
|
BUILD_BUG_ON(sizeof(struct io_uring_files_update) !=
|
|
|
|
sizeof(struct io_uring_rsrc_update));
|
|
|
|
BUILD_BUG_ON(sizeof(struct io_uring_rsrc_update) >
|
|
|
|
sizeof(struct io_uring_rsrc_update2));
|
2021-08-25 19:51:40 +00:00
|
|
|
|
|
|
|
/* ->buf_index is u16 */
|
io_uring: add support for ring mapped supplied buffers
Provided buffers allow an application to supply io_uring with buffers
that can then be grabbed for a read/receive request, when the data
source is ready to deliver data. The existing scheme relies on using
IORING_OP_PROVIDE_BUFFERS to do that, but it can be difficult to use
in real world applications. It's pretty efficient if the application
is able to supply back batches of provided buffers when they have been
consumed and the application is ready to recycle them, but if
fragmentation occurs in the buffer space, it can become difficult to
supply enough buffers at a time. This hurts efficiency.
Add a register op, IORING_REGISTER_PBUF_RING, which allows an application
to set up a shared queue for each buffer group of provided buffers. The
application can then supply buffers simply by adding them to this ring,
and the kernel can consume them just as easily. The ring shares the tail
with the application; the head remains private to the kernel.
Provided buffers set up with IORING_REGISTER_PBUF_RING cannot use
IORING_OP_{PROVIDE,REMOVE}_BUFFERS for adding or removing entries to the
ring; they must use the mapped ring. Mapped provided buffer rings can
co-exist with normal provided buffers, just not within the same group ID.
To gauge overhead of the existing scheme and evaluate the mapped ring
approach, a simple NOP benchmark was written. It uses a ring of 128
entries, and submits/completes 32 at a time. 'Replenish' is how
many buffers are provided back at a time after they have been
consumed:
Test                    Replenish       NOPs/sec
================================================================
No provided buffers     NA              ~30M
Provided buffers        32              ~16M
Provided buffers        1               ~10M
Ring buffers            32              ~27M
Ring buffers            1               ~27M
The ring mapped buffers perform almost as well as not using provided
buffers at all, and they don't care whether you provide 1 or more back at
the same time. This means the application can just replenish as it goes,
rather than needing to batch and compact, further reducing overhead in the
application. The NOP benchmark above doesn't need to do any compaction,
so that overhead isn't even reflected in the above test.
Co-developed-by: Dylan Yudaken <dylany@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-04-30 20:38:53 +00:00
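A hedged userspace sketch of the registration and publish steps described above (illustrative, not from this patch: the entry count, buffer group id, and page alignment are assumptions, and the struct io_uring_buf_reg/io_uring_buf_ring layout is taken from the linux/io_uring.h uapi header):

#include <linux/io_uring.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

#define BR_ENTRIES	128	/* must be a power of two */
#define BR_BGID		7	/* arbitrary buffer group id */

static struct io_uring_buf_ring *register_buf_ring(int ring_fd)
{
	struct io_uring_buf_ring *br;
	struct io_uring_buf_reg reg;

	/* the ring is shared with the kernel; give it its own page(s) */
	if (posix_memalign((void **)&br, 4096,
			   BR_ENTRIES * sizeof(struct io_uring_buf)))
		return NULL;
	memset(br, 0, BR_ENTRIES * sizeof(struct io_uring_buf));

	memset(&reg, 0, sizeof(reg));
	reg.ring_addr = (uint64_t)(uintptr_t)br;
	reg.ring_entries = BR_ENTRIES;
	reg.bgid = BR_BGID;

	if (syscall(__NR_io_uring_register, ring_fd,
		    IORING_REGISTER_PBUF_RING, &reg, 1) < 0) {
		free(br);
		return NULL;
	}
	return br;
}

/* Publish one buffer to the group: fill the next slot, then bump the tail. */
static void provide_buffer(struct io_uring_buf_ring *br, void *addr,
			   unsigned len, unsigned short bid)
{
	unsigned short tail = br->tail;
	struct io_uring_buf *buf = &br->bufs[tail & (BR_ENTRIES - 1)];

	buf->addr = (uint64_t)(uintptr_t)addr;
	buf->len = len;
	buf->bid = bid;
	/* store-release so the kernel sees the entry before the new tail */
	__atomic_store_n(&br->tail, (unsigned short)(tail + 1), __ATOMIC_RELEASE);
}

liburing provides io_uring_buf_ring_add() and io_uring_buf_ring_advance() helpers for the same two steps.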
|
|
|
BUILD_BUG_ON(offsetof(struct io_uring_buf_ring, bufs) != 0);
|
|
|
|
BUILD_BUG_ON(offsetof(struct io_uring_buf, resv) !=
|
|
|
|
offsetof(struct io_uring_buf_ring, tail));
|
2021-08-25 19:51:40 +00:00
|
|
|
|
2021-04-27 15:13:53 +00:00
|
|
|
/* should fit into one byte */
|
|
|
|
BUILD_BUG_ON(SQE_VALID_FLAGS >= (1 << 8));
|
2021-09-15 11:03:38 +00:00
|
|
|
BUILD_BUG_ON(SQE_COMMON_FLAGS >= (1 << 8));
|
|
|
|
BUILD_BUG_ON((SQE_VALID_FLAGS | SQE_COMMON_FLAGS) != SQE_VALID_FLAGS);
|
2021-04-27 15:13:53 +00:00
|
|
|
|
2024-01-29 03:05:47 +00:00
|
|
|
BUILD_BUG_ON(__REQ_F_LAST_BIT > 8 * sizeof_field(struct io_kiocb, flags));
|
2021-06-24 14:09:58 +00:00
|
|
|
|
2022-04-26 01:49:00 +00:00
|
|
|
BUILD_BUG_ON(sizeof(atomic_t) != sizeof(u32));
|
|
|
|
|
2023-09-28 12:43:24 +00:00
|
|
|
/* top 8bits are for internal use */
|
|
|
|
BUILD_BUG_ON((IORING_URING_CMD_MASK & 0xff000000) != 0);
|
|
|
|
|
2022-06-15 22:27:42 +00:00
|
|
|
io_uring_optable_init();
|
2022-05-23 22:56:21 +00:00
|
|
|
|
2023-08-02 20:38:01 +00:00
|
|
|
/*
|
|
|
|
* Allow user copy in the per-command field, which starts after the
|
|
|
|
* file in io_kiocb and until the opcode field. The openat2 handling
|
|
|
|
* requires copying user memory into the io_kiocb object in that
|
|
|
|
* range, and HARDENED_USERCOPY will complain if we haven't
|
|
|
|
* correctly annotated this range.
|
|
|
|
*/
|
|
|
|
req_cachep = kmem_cache_create_usercopy("io_kiocb",
|
|
|
|
sizeof(struct io_kiocb), 0,
|
|
|
|
SLAB_HWCACHE_ALIGN | SLAB_PANIC |
|
|
|
|
SLAB_ACCOUNT | SLAB_TYPESAFE_BY_RCU,
|
|
|
|
offsetof(struct io_kiocb, cmd.data),
|
|
|
|
sizeof_field(struct io_kiocb, cmd.data), NULL);
|
2024-01-30 10:02:47 +00:00
|
|
|
io_buf_cachep = KMEM_CACHE(io_buffer,
|
|
|
|
SLAB_HWCACHE_ALIGN | SLAB_PANIC | SLAB_ACCOUNT);
|
2023-08-02 20:38:01 +00:00
|
|
|
|
2024-04-01 21:16:19 +00:00
|
|
|
iou_wq = alloc_workqueue("iou_exit", WQ_UNBOUND, 64);
|
|
|
|
|
2023-08-21 21:15:52 +00:00
|
|
|
#ifdef CONFIG_SYSCTL
|
|
|
|
register_sysctl_init("kernel", kernel_io_uring_disabled_table);
|
|
|
|
#endif
|
|
|
|
|
2019-01-07 17:46:33 +00:00
|
|
|
return 0;
|
|
|
|
};
|
|
|
|
__initcall(io_uring_init);
|