commit 4a37f3dd9a
The original x86 sev_alloc() only called set_memory_decrypted() on
memory returned by alloc_pages_node(), so the page order calculation
fell out of that logic. However, the common dma-direct code has several
potential allocators, not all of which are guaranteed to round up the
underlying allocation to a power-of-two size, so carrying over that
calculation for the encryption/decryption size was a mistake. Fix it by
rounding to a *number* of pages, rather than an order.
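For illustration, here is a minimal sketch of the two rounding strategies (the helper names and shapes are assumptions for the example, not the literal patch; the arithmetic in the comment is just a worked case). With a 20 KiB buffer, get_order() rounds up to order 3, i.e. 8 pages (32 KiB), so 12 KiB beyond the buffer gets decrypted, whereas rounding to a page count touches exactly the 5 pages backing the allocation:

```c
#include <linux/mm.h>
#include <linux/set_memory.h>

/*
 * Old-style behaviour: only safe if the underlying allocation was
 * itself rounded up to a power-of-two order (as alloc_pages_node()
 * guarantees, but other dma-direct allocators do not).
 */
static int decrypt_by_order(void *vaddr, size_t size)
{
	return set_memory_decrypted((unsigned long)vaddr,
				    1 << get_order(size));
}

/*
 * Fixed behaviour: decrypt only the number of pages actually
 * backing the buffer, regardless of how it was allocated.
 */
static int decrypt_by_pages(void *vaddr, size_t size)
{
	return set_memory_decrypted((unsigned long)vaddr, PFN_UP(size));
}
```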
Until recently there was an even worse interaction with DMA_DIRECT_REMAP
where we could have ended up decrypting part of the next adjacent
vmalloc area, only averted by no architecture actually supporting both
configs at once. Don't ask how I found that one out...
Fixes: