path: root/arch/x86/kernel/Makefile
author		Robin Murphy <robin.murphy@arm.com>	2022-05-20 18:10:13 +0100
committer	Christoph Hellwig <hch@lst.de>	2022-05-23 15:25:40 +0200
commit		4a37f3dd9a83186cb88d44808ab35b78375082c9 (patch)
tree		ba5dc0010beb736ab375eecc72890bb71a4cb131 /arch/x86/kernel/Makefile
parent		swiotlb: max mapping size takes min align mask into account (diff)
download	linux-4a37f3dd9a83186cb88d44808ab35b78375082c9.tar.gz
		linux-4a37f3dd9a83186cb88d44808ab35b78375082c9.tar.bz2
		linux-4a37f3dd9a83186cb88d44808ab35b78375082c9.zip
dma-direct: don't over-decrypt memory
The original x86 sev_alloc() only called set_memory_decrypted() on
memory returned by alloc_pages_node(), so the page order calculation
fell out of that logic. However, the common dma-direct code has several
potential allocators, not all of which are guaranteed to round up the
underlying allocation to a power-of-two size, so carrying over that
calculation for the encryption/decryption size was a mistake. Fix it by
rounding to a *number* of pages, rather than an order.

Until recently there was an even worse interaction with DMA_DIRECT_REMAP
where we could have ended up decrypting part of the next adjacent
vmalloc area, only averted by no architecture actually supporting both
configs at once. Don't ask how I found that one out...

Fixes: c10f07aa27da ("dma/direct: Handle force decryption for DMA coherent buffers in common code")
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: David Rientjes <rientjes@google.com>
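
A minimal before/after sketch of the rounding change described above,
for illustration only (not the verbatim hunk from this commit; error
handling and surrounding context omitted):

	/* Before: size was rounded up to a power-of-two page *order*, so a
	 * non-power-of-two-sized allocation (e.g. under DMA_DIRECT_REMAP)
	 * could have pages beyond the buffer decrypted as well. */
	ret = set_memory_decrypted((unsigned long)vaddr, 1 << get_order(size));

	/* After: round to a *number* of pages instead; PFN_UP() is
	 * equivalent to DIV_ROUND_UP(size, PAGE_SIZE), so only the pages
	 * the allocation actually covers are decrypted. */
	ret = set_memory_decrypted((unsigned long)vaddr, PFN_UP(size));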
Diffstat (limited to 'arch/x86/kernel/Makefile')
0 files changed, 0 insertions, 0 deletions