path: root/arch/riscv/include
author    Jisheng Zhang <jszhang@kernel.org> 2024-03-25 19:00:36 +0800
committer Palmer Dabbelt <palmer@rivosinc.com> 2024-04-30 10:35:45 -0700
commit    dcb2743d1e701fc1a986c187adc11f6148316d21 (patch)
tree      33263065e01dc2f7af6fda2665cabb2cf903399c /arch/riscv/include
parent    riscv: Annotate pgtable_l{4,5}_enabled with __ro_after_init (diff)
riscv: mm: still create swiotlb buffer for kmalloc() bouncing if required
After commit f51f7a0fc2f4 ("riscv: enable DMA_BOUNCE_UNALIGNED_KMALLOC for !dma_coherent"), non-coherent platforms with less than 4GB of memory rely on users passing the "swiotlb=mmnn,force" kernel parameter to enable DMA bouncing for unaligned kmalloc() buffers.

Now let's go further: if no bouncing is needed for ZONE_DMA, let the kernel automatically allocate 1MB of swiotlb buffer per 1GB of RAM for kmalloc() bouncing on non-coherent platforms, so that passing "swiotlb=mmnn,force" is no longer necessary. The "1MB swiotlb buffer per 1GB of RAM" ratio is taken from arm64. Users can still force a smaller swiotlb buffer by passing "swiotlb=mmnn".

Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Link: https://lore.kernel.org/r/20240325110036.1564-1-jszhang@kernel.org
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
Diffstat (limited to 'arch/riscv/include')
-rw-r--r--  arch/riscv/include/asm/cache.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/riscv/include/asm/cache.h b/arch/riscv/include/asm/cache.h
index 2174fe7bac9a..570e9d8acad1 100644
--- a/arch/riscv/include/asm/cache.h
+++ b/arch/riscv/include/asm/cache.h
@@ -26,8 +26,8 @@
 #ifndef __ASSEMBLY__
-#ifdef CONFIG_RISCV_DMA_NONCOHERENT
 extern int dma_cache_alignment;
+#ifdef CONFIG_RISCV_DMA_NONCOHERENT
 #define dma_get_cache_alignment dma_get_cache_alignment
 static inline int dma_get_cache_alignment(void)
 {