linux/kernel/dma
GuoRui.Yu 7c3940bf81 swiotlb: fix the deadlock in swiotlb_do_find_slots
In general, if swiotlb has enough free slots, the logic of index =
wrap_area_index(mem, index + 1) is fine: the loop quickly takes a slot and
releases area->lock. But if swiotlb is running low and the device has
min_align_mask requirements, such as NVMe, index may never satisfy
index == wrap, so the loop never exits. In that case, other kernel
threads cannot acquire area->lock to release their slots, resulting in a
deadlock.
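
For illustration, the search loop has roughly the following shape (a
simplified sketch based on the description above; alignment_ok() and
slots_fit() are illustrative stand-ins for the real checks in
swiotlb_do_find_slots, while stride, nslots, wrap and the locking follow
the text):

    spin_lock_irqsave(&area->lock, flags);
    index = wrap = wrap_area_index(mem, ALIGN(area->index, stride));
    do {
            if (!alignment_ok(index)) {
                    /* min_align_mask mismatch: advance by one slot */
                    index = wrap_area_index(mem, index + 1);
                    continue;
            }
            if (slots_fit(index, nslots))
                    goto found;     /* slot taken, lock released later */
            /* no contiguous fit here: skip ahead by the whole stride */
            index = wrap_area_index(mem, index + stride);
    } while (index != wrap);
    spin_unlock_irqrestore(&area->lock, flags);
    return -1;

Because the two branches advance index by different amounts, index can
step past wrap without ever becoming equal to it, so the loop spins
forever while holding area->lock.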

The current implementation of wrap_area_index() does not involve a modulo
operation, so adjusting wrap to guarantee that the loop terminates is not
trivial. Instead, introduce a new counter that records how many slots have
been examined and exit the loop once the whole area has been traversed.
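
A minimal sketch of that counter-based exit, assuming the per-area slot
count is mem->area_nslabs (helper names are the same illustrative ones
used above; the actual patch may differ in detail):

    for (slots_checked = 0; slots_checked < mem->area_nslabs; ) {
            if (!alignment_ok(index)) {
                    index = wrap_area_index(mem, index + 1);
                    slots_checked++;
                    continue;
            }
            if (slots_fit(index, nslots))
                    goto found;
            index = wrap_area_index(mem, index + stride);
            slots_checked += stride;
    }
    /* the whole area was traversed without a fit: give up and unlock */
    spin_unlock_irqrestore(&area->lock, flags);
    return -1;

Counting slots instead of comparing index against a wrap point guarantees
termination no matter how the two branches advance index.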

Backtraces:
Other CPUs are waiting for this core to exit the swiotlb_do_find_slots
loop.
[10199.924391] RIP: 0010:swiotlb_do_find_slots+0x1fe/0x3e0
[10199.924403] Call Trace:
[10199.924404]  <TASK>
[10199.924405]  swiotlb_tbl_map_single+0xec/0x1f0
[10199.924407]  swiotlb_map+0x5c/0x260
[10199.924409]  ? nvme_pci_setup_prps+0x1ed/0x340
[10199.924411]  dma_direct_map_page+0x12e/0x1c0
[10199.924413]  nvme_map_data+0x304/0x370
[10199.924415]  nvme_prep_rq.part.0+0x31/0x120
[10199.924417]  nvme_queue_rq+0x77/0x1f0

...
[ 9639.596311] NMI backtrace for cpu 48
[ 9639.596336] Call Trace:
[ 9639.596337]  <TASK>
[ 9639.596338] _raw_spin_lock_irqsave+0x37/0x40
[ 9639.596341] swiotlb_do_find_slots+0xef/0x3e0
[ 9639.596344] swiotlb_tbl_map_single+0xec/0x1f0
[ 9639.596347] swiotlb_map+0x5c/0x260
[ 9639.596349] dma_direct_map_sg+0x7a/0x280
[ 9639.596352] __dma_map_sg_attrs+0x30/0x70
[ 9639.596355] dma_map_sgtable+0x1d/0x30
[ 9639.596356] nvme_map_data+0xce/0x370

...
[ 9639.595665] NMI backtrace for cpu 50
[ 9639.595682] Call Trace:
[ 9639.595682]  <TASK>
[ 9639.595683] _raw_spin_lock_irqsave+0x37/0x40
[ 9639.595686] swiotlb_release_slots.isra.0+0x86/0x180
[ 9639.595688] dma_direct_unmap_sg+0xcf/0x1a0
[ 9639.595690] nvme_unmap_data.part.0+0x43/0xc0

Fixes: 1f221a0d0d ("swiotlb: respect min_align_mask")
Signed-off-by: GuoRui.Yu <GuoRui.Yu@linux.alibaba.com>
Signed-off-by: Xiaokang Hu <xiaokang.hxk@alibaba-inc.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2023-03-15 15:47:40 +01:00
coherent.c dma-mapping: Add dma_release_coherent_memory to DMA API 2022-06-24 09:30:54 -06:00
contiguous.c cma: factor out minimum alignment requirement 2022-03-22 15:57:05 -07:00
debug.c dma-debug: improve search for partial syncs 2022-09-07 10:38:16 +02:00
debug.h dma-debug: teach add_dma_entry() about DMA_ATTR_SKIP_CPU_SYNC 2021-10-18 12:46:45 +02:00
direct.c dma-mapping updates 2022-08-06 10:56:45 -07:00
direct.h dma-direct: support PCI P2PDMA pages in dma-direct map_sg 2022-07-26 07:27:47 -04:00
dummy.c dma-mapping: return error code from dma_dummy_map_sg() 2021-08-09 17:13:06 +02:00
Kconfig dma-mapping: remove CONFIG_DMA_REMAP 2022-03-03 14:00:57 +03:00
Makefile dma-mapping: remove CONFIG_DMA_REMAP 2022-03-03 14:00:57 +03:00
map_benchmark.c dma-mapping: benchmark: extract a common header file for map_benchmark definition 2022-03-10 07:41:14 +01:00
mapping.c dma-mapping: reject GFP_COMP for noncoherent allocations 2022-12-21 08:45:38 +01:00
ops_helpers.c dma-mapping: handle vmalloc addresses in dma_common_{mmap,get_sgtable} 2021-07-16 11:30:26 +02:00
pool.c dma/pool: create dma atomic pool only if dma zone has managed pages 2022-01-15 16:30:29 +02:00
remap.c kernel/dma: remove unnecessary unmap_kernel_range 2021-04-30 11:20:40 -07:00
swiotlb.c swiotlb: fix the deadlock in swiotlb_do_find_slots 2023-03-15 15:47:40 +01:00