ntb: Force physically contiguous allocation of rx ring buffers

Before the patch that removed CONFIG_DMA_REMAP, physical addresses
allocated for DMA on x86 platforms were mapped contiguously under the
IOVA as a side effect. The NTB rx buffer ring is a single-chunk DMA
buffer allocated against the NTB PCI device. If the receive side is
using a DMA device, the buffers are remapped against the DMA device
before being submitted via the dmaengine API. This scheme becomes a
problem when the physical memory is discontiguous: when dma_map_page()
is called on the kernel virtual address returned by
dma_alloc_coherent(), the new IOVA mapping no longer covers all of the
allocated physical memory. Change dma_alloc_coherent() to
dma_alloc_attrs() in order to pass the DMA_ATTR_FORCE_CONTIGUOUS
attribute. This is the best fix available for the circumstance; a
potential future solution would be for the DMA mapping API to provide
a way to alias an existing IOVA mapping to a new device.
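
To make the failure concrete, here is a minimal sketch of the remap
that breaks (the helper name and parameters are illustrative, not the
driver's exact code). dma_map_page() assumes the region it maps is
physically contiguous starting from the given page, so with scattered
backing pages only the first chunk of the ring lands in the new
mapping:

	#include <linux/dma-mapping.h>
	#include <linux/mm.h>

	/*
	 * Hypothetical remap of a coherent allocation against a second
	 * (DMA engine) device. If "buf" is backed by physically
	 * discontiguous pages, this maps only the range contiguous
	 * with the first page; the rest of the ring is silently left
	 * out of the new IOVA mapping.
	 */
	static dma_addr_t remap_ring(struct device *dma_dev, void *buf,
				     size_t ring_size)
	{
		return dma_map_page(dma_dev, virt_to_page(buf),
				    offset_in_page(buf), ring_size,
				    DMA_TO_DEVICE);
	}

DMA_ATTR_FORCE_CONTIGUOUS restores the invariant this remap relies on.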

This change does not fix a bug in the patch referenced by the Fixes
tag; it fixes the issue that arose in the ntb_transport driver on x86
platforms once that patch was applied.

Reported-by: Jerry Dai <jerry.dai@intel.com>
Fixes: f5ff79fddf ("dma-mapping: remove CONFIG_DMA_REMAP")
Tested-by: Jerry Dai <jerry.dai@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>

@@ -809,16 +809,29 @@ static void ntb_free_mw(struct ntb_transport_ctx *nt, int num_mw)
 }
 
 static int ntb_alloc_mw_buffer(struct ntb_transport_mw *mw,
-			       struct device *dma_dev, size_t align)
+			       struct device *ntb_dev, size_t align)
 {
 	dma_addr_t dma_addr;
 	void *alloc_addr, *virt_addr;
 	int rc;
 
-	alloc_addr = dma_alloc_coherent(dma_dev, mw->alloc_size,
-					&dma_addr, GFP_KERNEL);
+	/*
+	 * The buffer here is allocated against the NTB device. The reason to
+	 * use dma_alloc_*() call is to allocate a large IOVA contiguous buffer
+	 * backing the NTB BAR for the remote host to write to. During receive
+	 * processing, the data is being copied out of the receive buffer to
+	 * the kernel skbuff. When a DMA device is being used, dma_map_page()
+	 * is called on the kvaddr of the receive buffer (from dma_alloc_*())
+	 * and remapped against the DMA device. It appears to be a double
+	 * DMA mapping of buffers, but first is mapped to the NTB device and
+	 * second is to the DMA device. DMA_ATTR_FORCE_CONTIGUOUS is necessary
+	 * in order for the later dma_map_page() to not fail.
+	 */
+	alloc_addr = dma_alloc_attrs(ntb_dev, mw->alloc_size,
+				     &dma_addr, GFP_KERNEL,
+				     DMA_ATTR_FORCE_CONTIGUOUS);
 	if (!alloc_addr) {
-		dev_err(dma_dev, "Unable to alloc MW buff of size %zu\n",
+		dev_err(ntb_dev, "Unable to alloc MW buff of size %zu\n",
 			mw->alloc_size);
 		return -ENOMEM;
 	}
@@ -847,7 +860,7 @@ static int ntb_alloc_mw_buffer(struct ntb_transport_mw *mw,
 	return 0;
 
 err:
-	dma_free_coherent(dma_dev, mw->alloc_size, alloc_addr, dma_addr);
+	dma_free_coherent(ntb_dev, mw->alloc_size, alloc_addr, dma_addr);
 
 	return rc;
 }
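
For context, here is a simplified sketch of how the two mappings stack
after this change (illustrative names; the driver's real receive path
performs the second mapping in its dmaengine submit code, and
DMA_TO_DEVICE is assumed here because the rx ring is the source of the
buffer-to-skbuff copy):

	#include <linux/dma-mapping.h>
	#include <linux/mm.h>

	static int map_ring_twice(struct device *ntb_dev,
				  struct device *dma_dev, size_t size)
	{
		dma_addr_t ntb_addr, dma_addr;
		void *buf;

		/* First mapping: physically contiguous buffer backing
		 * the NTB memory window, owned by the NTB PCI device.
		 */
		buf = dma_alloc_attrs(ntb_dev, size, &ntb_addr,
				      GFP_KERNEL,
				      DMA_ATTR_FORCE_CONTIGUOUS);
		if (!buf)
			return -ENOMEM;

		/* Second mapping: the same pages, remapped against the
		 * DMA engine. Only valid because the backing pages are
		 * guaranteed to be physically contiguous.
		 */
		dma_addr = dma_map_page(dma_dev, virt_to_page(buf),
					offset_in_page(buf), size,
					DMA_TO_DEVICE);
		if (dma_mapping_error(dma_dev, dma_addr)) {
			dma_free_attrs(ntb_dev, size, buf, ntb_addr,
				       DMA_ATTR_FORCE_CONTIGUOUS);
			return -ENOMEM;
		}
		return 0;
	}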