commit 9087c37584

If a device doesn't support DMA to a physical address that includes the
encryption bit (currently bit 47, so 48-bit DMA), then the DMA must occur
to unencrypted memory. SWIOTLB is used to satisfy that requirement if an
IOMMU is not active (enabled or configured in passthrough mode).

However, commit fafadcd165 ("swiotlb: don't dip into swiotlb pool for
coherent allocations") modified the coherent allocation support in SWIOTLB
to use the DMA direct coherent allocation support. When an IOMMU is not
active, this resulted in dma_alloc_coherent() failing for devices that
didn't support DMA addresses that included the encryption bit.

Addressing this requires changes to the force_dma_unencrypted() function in
kernel/dma/direct.c. Since the function is now non-trivial and SME/SEV
specific, update the DMA direct support to add an arch override for the
force_dma_unencrypted() function. The arch override is selected when
CONFIG_AMD_MEM_ENCRYPT is set. The arch override function resides in the
arch/x86/mm/mem_encrypt.c file and forces unencrypted DMA when either SEV
is active or SME is active and the device does not support DMA to physical
addresses that include the encryption bit.

Fixes: fafadcd165 ("swiotlb: don't dip into swiotlb pool for coherent allocations")
Suggested-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
[hch: moved the force_dma_unencrypted declaration to dma-mapping.h,
 fold the s390 fix from Halil Pasic]
Signed-off-by: Christoph Hellwig <hch@lst.de>
# SPDX-License-Identifier: GPL-2.0-only

config HAS_DMA
	bool
	depends on !NO_DMA
	default y

config NEED_SG_DMA_LENGTH
	bool

config NEED_DMA_MAP_STATE
	bool

config ARCH_DMA_ADDR_T_64BIT
	def_bool 64BIT || PHYS_ADDR_T_64BIT

config ARCH_HAS_DMA_COHERENCE_H
	bool

config ARCH_HAS_DMA_SET_MASK
	bool

config DMA_DECLARE_COHERENT
	bool

config ARCH_HAS_SETUP_DMA_OPS
	bool

config ARCH_HAS_TEARDOWN_DMA_OPS
	bool

config ARCH_HAS_SYNC_DMA_FOR_DEVICE
	bool

config ARCH_HAS_SYNC_DMA_FOR_CPU
	bool
	select NEED_DMA_MAP_STATE

config ARCH_HAS_SYNC_DMA_FOR_CPU_ALL
	bool

config ARCH_HAS_DMA_PREP_COHERENT
	bool

config ARCH_HAS_DMA_COHERENT_TO_PFN
	bool

config ARCH_HAS_DMA_MMAP_PGPROT
	bool

config ARCH_HAS_FORCE_DMA_UNENCRYPTED
	bool

config DMA_NONCOHERENT_CACHE_SYNC
	bool

config DMA_VIRT_OPS
	bool
	depends on HAS_DMA

config SWIOTLB
	bool
	select NEED_DMA_MAP_STATE

config DMA_REMAP
	depends on MMU
	select GENERIC_ALLOCATOR
	bool

config DMA_DIRECT_REMAP
	bool
	select DMA_REMAP

config DMA_CMA
	bool "DMA Contiguous Memory Allocator"
	depends on HAVE_DMA_CONTIGUOUS && CMA
	help
	  This enables the Contiguous Memory Allocator which allows drivers
	  to allocate big physically-contiguous blocks of memory for use with
	  hardware components that do not support I/O map nor scatter-gather.

	  You can disable CMA by specifying "cma=0" on the kernel's command
	  line.

	  For more information see <include/linux/dma-contiguous.h>.
	  If unsure, say "n".
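
# Illustrative only: the kind of request DMA_CMA serves is a driver asking
# for a large physically-contiguous coherent buffer through the standard
# DMA API (the 4 MiB size below is an arbitrary example):
#
#	/* large contiguous buffer for a device without scatter-gather */
#	void *buf = dma_alloc_coherent(dev, 4 * 1024 * 1024,
#				       &dma_handle, GFP_KERNEL);
#	...
#	dma_free_coherent(dev, 4 * 1024 * 1024, buf, dma_handle);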

if DMA_CMA
comment "Default contiguous memory area size:"

config CMA_SIZE_MBYTES
	int "Size in Mega Bytes"
	depends on !CMA_SIZE_SEL_PERCENTAGE
	default 0 if X86
	default 16
	help
	  Defines the size (in MiB) of the default memory area for Contiguous
	  Memory Allocator. If the size of 0 is selected, CMA is disabled by
	  default, but it can be enabled by passing cma=size[MG] to the kernel.

config CMA_SIZE_PERCENTAGE
	int "Percentage of total memory"
	depends on !CMA_SIZE_SEL_MBYTES
	default 0 if X86
	default 10
	help
	  Defines the size of the default memory area for Contiguous Memory
	  Allocator as a percentage of the total memory in the system.
	  If 0 percent is selected, CMA is disabled by default, but it can be
	  enabled by passing cma=size[MG] to the kernel.
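
# Illustrative command-line usage of the defaults above: booting with
# "cma=0" disables the default area entirely, while e.g. "cma=64M" or
# "cma=1G" overrides the size chosen by CMA_SIZE_MBYTES or
# CMA_SIZE_PERCENTAGE with an explicit value.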

choice
	prompt "Selected region size"
	default CMA_SIZE_SEL_MBYTES

config CMA_SIZE_SEL_MBYTES
	bool "Use mega bytes value only"

config CMA_SIZE_SEL_PERCENTAGE
	bool "Use percentage value only"

config CMA_SIZE_SEL_MIN
	bool "Use lower value (minimum)"

config CMA_SIZE_SEL_MAX
	bool "Use higher value (maximum)"

endchoice

config CMA_ALIGNMENT
	int "Maximum PAGE_SIZE order of alignment for contiguous buffers"
	range 4 12
	default 8
	help
	  The DMA mapping framework by default aligns all buffers to the
	  smallest PAGE_SIZE order which is greater than or equal to the
	  requested buffer size. This works well for buffers up to a few
	  hundred kilobytes, but for larger buffers it is just a waste of
	  memory. With this parameter you can specify the maximum PAGE_SIZE
	  order for contiguous buffers. Larger buffers will be aligned only
	  to this specified order. The order is expressed as a power of two
	  multiplied by the PAGE_SIZE.

	  For example, if your system defaults to 4KiB pages, the order value
	  of 8 means that the buffers will be aligned up to 1MiB only.

	  If unsure, leave the default value "8".
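
# Worked example for the default order of 8 on a 4 KiB-page system: the
# maximum alignment is PAGE_SIZE << 8 = 4096 * 256 bytes = 1 MiB, so e.g.
# a 4 MiB buffer is aligned to 1 MiB instead of its natural 4 MiB boundary.
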
endif

config DMA_API_DEBUG
	bool "Enable debugging of DMA-API usage"
	select NEED_DMA_MAP_STATE
	help
	  Enable this option to debug the use of the DMA API by device drivers.
	  With this option you will be able to detect common bugs in device
	  drivers like double-freeing of DMA mappings or freeing mappings that
	  were never allocated.

	  This also attempts to catch cases where a page owned by DMA is
	  accessed by the CPU in a way that could cause data corruption. For
	  example, this enables cow_user_page() to check that the source page
	  is not undergoing DMA.

	  This option causes a performance degradation. Use only if you want
	  to debug device drivers and DMA interactions.

	  If unsure, say N.
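
# Example (C, illustrative) of one bug class this option reports, a DMA
# mapping freed twice:
#
#	dma = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
#	...
#	dma_unmap_single(dev, dma, len, DMA_TO_DEVICE);
#	dma_unmap_single(dev, dma, len, DMA_TO_DEVICE);	/* flagged */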

config DMA_API_DEBUG_SG
	bool "Debug DMA scatter-gather usage"
	default y
	depends on DMA_API_DEBUG
	help
	  Perform extra checking that callers of dma_map_sg() have respected the
	  appropriate segment length/boundary limits for the given device when
	  preparing DMA scatterlists.

	  This is particularly likely to have been overlooked in cases where the
	  dma_map_sg() API is used for general bulk mapping of pages rather than
	  preparing literal scatter-gather descriptors, where there is a risk of
	  unexpected behaviour from DMA API implementations if the scatterlist
	  is technically out-of-spec.

	  If unsure, say N.
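
# Illustrative (C): the per-device limits being checked are the ones a
# driver declares before mapping a scatterlist, e.g.:
#
#	dma_set_max_seg_size(dev, 65536);	/* no segment longer than 64 KiB */
#	dma_set_seg_boundary(dev, 0xffffffff);	/* no segment crosses a 4 GiB boundary */
#	nents = dma_map_sg(dev, sgl, nents, DMA_FROM_DEVICE);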