Commit aa00f67adc (mirror of https://github.com/torvalds/linux.git)

Generally applications have one or a few ways of waiting, yet they pass
in a struct io_uring_getevents_arg every time. This needs to get copied
and, in turn, the timeout value needs to get copied.

Rather than do this for every invocation, allow the application to
register a fixed set of wait regions that can simply be indexed when
asking the kernel to wait on events.

At ring setup time, the application can register a number of these wait
regions and initialize region/index 0 upfront:

	struct io_uring_reg_wait *reg;

	reg = io_uring_setup_reg_wait(ring, nr_regions, &ret);

	/* set timeout and mark as set, sigmask/sigmask_sz as needed */
	reg->ts.tv_sec = 0;
	reg->ts.tv_nsec = 100000;
	reg->flags = IORING_REG_WAIT_TS;

where nr_regions >= 1 && nr_regions <= PAGE_SIZE / sizeof(*reg). The
above initializes index 0, but 63 other regions can be initialized, if
needed. Now, instead of doing:

	struct __kernel_timespec timeout = { .tv_nsec = 100000, };

	io_uring_submit_and_wait_timeout(ring, &cqe, nr, &timeout, NULL);

to wait for events for each submit_and_wait, or just wait, operation,
it can just reference the above region at offset 0 and do:

	io_uring_submit_and_wait_reg(ring, &cqe, nr, 0);

to achieve the same goal of waiting 100usec without needing to copy
both struct io_uring_getevents_arg (24b) and struct __kernel_timespec
(16b) for each invocation. Struct io_uring_reg_wait looks as follows:

	struct io_uring_reg_wait {
		struct __kernel_timespec	ts;
		__u32				min_wait_usec;
		__u32				flags;
		__u64				sigmask;
		__u32				sigmask_sz;
		__u32				pad[3];
		__u64				pad2[2];
	};

embedding the timeout itself in the region, rather than passing it as a
pointer as well. Note that the signal mask is still passed as a pointer,
both for compatibility reasons, but also because there don't seem to be
many high-frequency wait scenarios that involve setting and resetting
the signal mask for each wait.

The application is free to modify any region before a wait call, or it
can keep multiple regions with different settings to avoid needing to
modify the same one between wait calls. Up to a page worth of regions
is mapped by default, allowing PAGE_SIZE / 64 regions to be used.

The registered region must fit within a page. On a 4KB page size
system, that allows for 64 wait regions if a full page is used, as
struct io_uring_reg_wait is 64b in size. The size of the registered
region must be a multiple of sizeof(struct io_uring_reg_wait), and it
is valid to register fewer than 64 entries.

In network performance testing with zero-copy, this reduced the time
spent waiting on the TX side from 3.12% to 0.3% and the RX side from
4.4% to 0.3%.

Wait regions are fixed for the lifetime of the ring - once registered,
they are persistent until the ring is torn down. The regions support
minimum wait timeout as well as the regular waits.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
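
Below is a rough end-to-end usage sketch built around the helpers named in the
message above (io_uring_setup_reg_wait() and io_uring_submit_and_wait_reg()).
The helper names and signatures are assumed from the commit message and may
differ in a released liburing; it registers a single region at index 0 with the
same 100usec timeout and uses it for one submit-and-wait cycle:

```c
/*
 * Sketch only: register one wait region with a 100usec timeout and use
 * it when submitting and waiting. io_uring_setup_reg_wait() and
 * io_uring_submit_and_wait_reg() are assumed to match the signatures
 * shown in the commit message.
 */
#include <liburing.h>
#include <stdio.h>

int main(void)
{
	struct io_uring ring;
	struct io_uring_reg_wait *reg;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	int ret;

	if (io_uring_queue_init(8, &ring, 0) < 0)
		return 1;

	/* map and register wait regions; region 0 gets a 100usec timeout */
	reg = io_uring_setup_reg_wait(&ring, 1, &ret);
	if (!reg) {
		fprintf(stderr, "setup_reg_wait: %d\n", ret);
		io_uring_queue_exit(&ring);
		return 1;
	}
	reg->ts.tv_sec = 0;
	reg->ts.tv_nsec = 100000;
	reg->flags = IORING_REG_WAIT_TS;

	/* queue a no-op request so there is something to wait for */
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_nop(sqe);

	/* submit and wait for one CQE, referencing wait region index 0 */
	ret = io_uring_submit_and_wait_reg(&ring, &cqe, 1, 0);
	if (ret < 0) {
		fprintf(stderr, "submit_and_wait_reg: %d\n", ret);
	} else {
		printf("nop completed, res=%d\n", cqe->res);
		io_uring_cqe_seen(&ring, cqe);
	}

	io_uring_queue_exit(&ring);
	return 0;
}
```

Since registered regions persist for the lifetime of the ring, the timeout in
reg can be adjusted in place between waits without any further registration
calls.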