block: Add core atomic write support (commit 9da3d1e912, John Garry)

Add atomic write support, as follows:
- add helper functions to get request_queue atomic write limits
- report request_queue atomic write support limits to sysfs and update Doc
- support to safely merge atomic writes
- deal with splitting atomic writes
- misc helper functions
- add a per-request atomic write flag

New request_queue limits are added, as follows (a rough sketch of the new
fields follows this list):
- atomic_write_hw_max is set by the block driver and is the maximum length
  of an atomic write which the device may support. It is not
  necessarily a power-of-2.
- atomic_write_max_sectors is derived from atomic_write_hw_max and
  max_hw_sectors. It is always a power-of-2. Atomic writes may be merged,
  and atomic_write_max_sectors is the limit on the size of a merged atomic
  write request. This value is not capped at max_sectors: max_sectors can
  be lowered from userspace, and capping at it would let userspace
  indirectly limit atomic_write_unit_max_bytes and the other atomic write
  limits, which would only cause trouble.
- atomic_write_hw_unit_{min,max} are set by the block driver and are the
  min/max length of an atomic write unit which the device may support. They
  both must be a power-of-2. Typically atomic_write_hw_unit_max will hold
  the same value as atomic_write_hw_max.
- atomic_write_unit_{min,max} are derived from
  atomic_write_hw_unit_{min,max}, max_hw_sectors, and block core limits.
  Both min and max values must be a power-of-2.
- atomic_write_hw_boundary is set by the block driver. If non-zero, it
  indicates a boundary in LBA space; an atomic write which straddles this
  boundary is no longer executed atomically by the disk. The value must be
  a power-of-2. Note that it would be acceptable to enforce a rule that
  atomic_write_hw_boundary_sectors is a multiple of
  atomic_write_hw_unit_max, but the resultant code would be more
  complicated.
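
To make the relationships above concrete, here is a rough sketch of the
new limits as they might sit alongside struct queue_limits. This is
illustrative only: the exact field names, types and units in the patch may
differ. The *_hw_* values are what the driver sets; the rest are derived
by block core.

    /*
     * Illustrative sketch only (not the exact struct queue_limits layout):
     * the *_hw_* values are provided by the block driver; the remaining
     * values are derived by block core from them and the other queue limits.
     */
    struct example_atomic_write_limits {
            unsigned int atomic_write_hw_max;       /* bytes; need not be a power-of-2 */
            unsigned int atomic_write_hw_unit_min;  /* bytes; power-of-2 */
            unsigned int atomic_write_hw_unit_max;  /* bytes; power-of-2 */
            unsigned int atomic_write_hw_boundary;  /* bytes; power-of-2, 0 = no boundary */

            unsigned int atomic_write_max_sectors;  /* derived; cap on a merged request */
            unsigned int atomic_write_unit_min;     /* derived from hw_unit_min */
            unsigned int atomic_write_unit_max;     /* derived from hw_unit_max */
    };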

All atomic write limits default to 0, indicating no atomic write support.
Even though Linux assumes that a logical block can always be written
atomically, we ignore this here as it is not of particular interest.
Stacked devices are also not supported for now.

An atomic write must always be submitted to the block driver as part of a
single request. As such, only a single BIO must be submitted to the block
layer for an atomic write, and once submitted that atomic write BIO cannot
be split. Consequently, atomic_write_unit_{min,max}_bytes are limited by
the maximum guaranteed BIO size which will never need to be split. This
max size is calculated from the request_queue max segments limit and the
number of bvecs a BIO can hold, BIO_MAX_VECS. Currently we rely on
userspace issuing a pwritev2() with iovcnt=1, so we can rely on each
segment containing PAGE_SIZE of data, apart from the first and last
segments, which may each hold as little as a logical block size of data.
The first and last segments will be LBS length/aligned, since we also rely
on the direct I/O alignment rules.
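
The calculation of that maximum guaranteed BIO size might look roughly
like the sketch below. The helper name is made up, and it assumes the
first and last segments hold only a logical block of data while every
other segment holds a full page, as described above.

    /* Illustrative sketch of the maximum guaranteed non-split BIO size. */
    static unsigned int example_max_guaranteed_bio_bytes(struct queue_limits *lim)
    {
            unsigned int max_segments = min_t(unsigned int, BIO_MAX_VECS,
                                              lim->max_segments);
            unsigned int len;

            /* the first and last segments may hold only a logical block */
            len = min(max_segments, 2U) * lim->logical_block_size;
            if (max_segments > 2)
                    len += (max_segments - 2) * PAGE_SIZE;

            return len;
    }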

New sysfs files are added to report the following atomic write limits (an
illustrative show helper follows the list):
- atomic_write_unit_max_bytes - same as atomic_write_unit_max_sectors in
				bytes
- atomic_write_unit_min_bytes - same as atomic_write_unit_min_sectors in
				bytes
- atomic_write_boundary_bytes - same as atomic_write_hw_boundary_sectors in
				bytes
- atomic_write_max_bytes      - same as atomic_write_max_sectors in bytes
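
For illustration, a show helper for one of these attributes could follow
the existing blk-sysfs.c pattern along these lines; the function name and
exact conversion are assumptions, not necessarily the code added by the
patch.

    /* Illustrative sketch: report atomic_write_max_sectors in bytes via sysfs. */
    static ssize_t queue_atomic_write_max_bytes_show(struct request_queue *q,
                                                     char *page)
    {
            return sysfs_emit(page, "%llu\n",
                              (unsigned long long)q->limits.atomic_write_max_sectors <<
                              SECTOR_SHIFT);
    }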

Atomic writes may only be merged with other atomic writes, and only under
the following conditions (a sketch of the boundary check follows the list):
- total resultant request length <= atomic_write_max_bytes
- the merged write does not straddle a boundary
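
Because atomic_write_hw_boundary is a power-of-2, the straddle check
reduces to masking the first and last byte of the merged range and
comparing the resulting windows. A sketch, with made-up names and
byte-based arguments:

    /* Illustrative sketch: does [start, start + len) cross a power-of-2 boundary? */
    static bool example_straddles_boundary(u64 start, u64 len, u64 boundary)
    {
            if (!boundary)
                    return false;

            /* crossing occurs if the first and last bytes sit in different windows */
            return (start & ~(boundary - 1)) !=
                   ((start + len - 1) & ~(boundary - 1));
    }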

Helper function bdev_can_atomic_write() is added to indicate whether
atomic writes may be issued to a bdev. If a bdev is a partition, the
partition start must be aligned with both atomic_write_unit_min_sectors
and atomic_write_hw_boundary_sectors.
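
A sketch of what that check might look like, assuming the byte-based
unit/boundary limits from the earlier sketch; names and field units are
illustrative, not the exact helper:

    /* Illustrative sketch of bdev_can_atomic_write(). */
    static bool example_bdev_can_atomic_write(struct block_device *bdev)
    {
            struct queue_limits *lim = &bdev_get_queue(bdev)->limits;

            /* no atomic write support advertised at all */
            if (!lim->atomic_write_unit_min)
                    return false;

            if (bdev_is_partition(bdev)) {
                    sector_t align = lim->atomic_write_unit_min >> SECTOR_SHIFT;

                    if (lim->atomic_write_hw_boundary)
                            align = max_t(sector_t, align,
                                          lim->atomic_write_hw_boundary >> SECTOR_SHIFT);
                    /* partition start must be aligned to unit_min and boundary */
                    if (!IS_ALIGNED(bdev->bd_start_sect, align))
                            return false;
            }

            return true;
    }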

FSes will rely on the block layer to validate that a submitted atomic
write BIO is of a valid size, so add blk_validate_atomic_write_op_size()
for this purpose. Userspace expects an atomic write of invalid size to be
rejected with -EINVAL, so add BLK_STS_INVAL for this. Also use
BLK_STS_INVAL when a BIO needs to be split, since an atomic write BIO
which needs splitting must have an invalid size.
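
A sketch of what the validation might look like, again with illustrative
names and assuming the byte-based unit limits above; the real rules may be
stricter (e.g. power-of-2 size and natural alignment):

    /* Illustrative sketch: reject an atomic write BIO of invalid size. */
    static blk_status_t example_validate_atomic_write_size(struct request_queue *q,
                                                           struct bio *bio)
    {
            unsigned int size = bio->bi_iter.bi_size;

            if (size < q->limits.atomic_write_unit_min ||
                size > q->limits.atomic_write_unit_max)
                    return BLK_STS_INVAL;

            return BLK_STS_OK;
    }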

The REQ_ATOMIC flag is used to indicate an atomic write.
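
For example, the merge and completion paths (or a low-level driver) can
distinguish atomic writes with a check along these lines (illustrative
only):

    /* Illustrative only: detect an atomic write at request level. */
    static bool example_req_is_atomic(struct request *rq)
    {
            return rq->cmd_flags & REQ_ATOMIC;
    }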

Co-developed-by: Himanshu Madhani <himanshu.madhani@oracle.com>
Signed-off-by: Himanshu Madhani <himanshu.madhani@oracle.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: John Garry <john.g.garry@oracle.com>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Link: https://lore.kernel.org/r/20240620125359.2684798-6-john.g.garry@oracle.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>