From 37ec8eb851c1876580a963f283fe7496592b9f72 Mon Sep 17 00:00:00 2001
From: Tom Murphy
Date: Sun, 8 Sep 2019 09:56:37 -0700
Subject: iommu/amd: Remove unnecessary locking from AMD iommu driver

With or without locking it doesn't make sense for two writers to be
writing to the same IOVA range at the same time. Even with locking we
still have a race condition: whoever takes the lock first wins, so we
still can't be sure what the result will be. With locking the result is
more sane, since it reflects whichever writer took the lock last, but
that is still useless because we can't know which writer that will be.
It is a fundamentally broken design to have two writers writing to the
same IOVA range at the same time.

So we can remove the locking and work on the assumption that no two
writers will be writing to the same IOVA range at the same time.

The only exception is when we have to allocate a middle page in the page
tables, because a middle page can cover more than just the IOVA range a
writer has been allocated. However, this isn't an issue in the AMD driver
because it can atomically allocate middle pages using cmpxchg64().

Signed-off-by: Tom Murphy
Signed-off-by: Joerg Roedel
---
 drivers/iommu/amd_iommu_types.h | 1 -
 1 file changed, 1 deletion(-)

(limited to 'drivers/iommu/amd_iommu_types.h')

diff --git a/drivers/iommu/amd_iommu_types.h b/drivers/iommu/amd_iommu_types.h
index c9c1612d52e0..becbd3bd9180 100644
--- a/drivers/iommu/amd_iommu_types.h
+++ b/drivers/iommu/amd_iommu_types.h
@@ -468,7 +468,6 @@ struct protection_domain {
 	struct iommu_domain domain; /* generic domain handle used by
 				       iommu core code */
 	spinlock_t lock;	/* mostly used to lock the page table*/
-	struct mutex api_lock;	/* protect page tables in the iommu-api path */
 	u16 id;			/* the domain id written to the device table */
 	int mode;		/* paging mode (0-6 levels) */
 	u64 *pt_root;		/* page table root pointer */
--
cgit v1.2.3
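To make the lock-free argument above concrete, here is a minimal sketch of
the cmpxchg64() pattern the commit message refers to. It is an illustration
only, not the driver's actual page-table code: the function name, the
pde_flags parameter and SKETCH_PTE_ADDR_MASK are assumptions, and the
driver's SME-aware address helpers are simplified to plain
virt_to_phys()/phys_to_virt().

#include <linux/atomic.h>	/* cmpxchg64() */
#include <linux/gfp.h>		/* get_zeroed_page(), free_page() */
#include <asm/io.h>		/* virt_to_phys(), phys_to_virt() */

/* Assumed mask for the physical-address bits of a page-directory entry. */
#define SKETCH_PTE_ADDR_MASK	0x000ffffffffff000ULL

/*
 * Install a missing middle page-table page without taking a lock.  Racing
 * writers may each allocate a page, but only the writer whose cmpxchg64()
 * succeeds installs it; the loser frees its copy and descends through the
 * entry the winner installed.
 */
static u64 *install_middle_page(u64 *pte, u64 pde_flags, gfp_t gfp)
{
	u64 *page = (u64 *)get_zeroed_page(gfp);
	u64 new_pte, old;

	if (!page)
		return NULL;

	new_pte = virt_to_phys(page) | pde_flags;

	/* Atomically publish the new level only if the slot is still empty. */
	old = cmpxchg64(pte, 0ULL, new_pte);
	if (old != 0ULL) {
		/* Lost the race: reuse the page another writer installed. */
		free_page((unsigned long)page);
		return phys_to_virt(old & SKETCH_PTE_ADDR_MASK);
	}

	return page;
}

The point of the pattern is that installing a new level becomes idempotent
across racing writers, so the page-table walk itself needs no lock as long
as no two writers ever map the same IOVA range.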
From 3332364e4ebc0581d133a334645a20fd13b580f1 Mon Sep 17 00:00:00 2001
From: Logan Gunthorpe
Date: Tue, 22 Oct 2019 16:01:20 -0600
Subject: iommu/amd: Support multiple PCI DMA aliases in device table

Non-Transparent Bridge (NTB) devices (among others) may have many DMA
aliases, because the hardware will send requests with different device
IDs depending on their origin across the bridged hardware. See commit
ad281ecf1c7d ("PCI: Add DMA alias quirk for Microsemi Switchtec NTB")
for more information on this.

The AMD IOMMU ignores all the PCI aliases except the last one, so DMA
transfers from these aliases will be blocked on AMD hardware with the
IOMMU enabled.

To fix this, ensure the DTEs are cloned for every PCI alias. This is
done by copying the DTE data for each alias, as well as the IVRS alias,
every time it is changed.

Signed-off-by: Logan Gunthorpe
Signed-off-by: Joerg Roedel
---
 drivers/iommu/amd_iommu_types.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

(limited to 'drivers/iommu/amd_iommu_types.h')

diff --git a/drivers/iommu/amd_iommu_types.h b/drivers/iommu/amd_iommu_types.h
index becbd3bd9180..b611459f369a 100644
--- a/drivers/iommu/amd_iommu_types.h
+++ b/drivers/iommu/amd_iommu_types.h
@@ -638,8 +638,8 @@ struct iommu_dev_data {
 	struct list_head list;		  /* For domain->dev_list */
 	struct llist_node dev_data_list;  /* For global dev_data_list */
 	struct protection_domain *domain; /* Domain the device is bound to */
+	struct pci_dev *pdev;
 	u16 devid;			  /* PCI Device ID */
-	u16 alias;			  /* Alias Device ID */
 	bool iommu_v2;			  /* Device can make use of IOMMUv2 */
 	bool passthrough;		  /* Device is identity mapped */
 	struct {
--
cgit v1.2.3
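As a rough illustration of the cloning described above, the sketch below
walks every DMA alias of a device with pci_for_each_dma_alias() and copies
the raw device table entry (DTE) to each alias. It is a simplified sketch,
not the actual patch: clone_alias()/clone_aliases(), the local
dev_table_entry definition and the direct use of amd_iommu_dev_table[] here
stand in for the driver's real update path (IVRS alias handling and device
table flushing are omitted).

#include <linux/pci.h>		/* pci_for_each_dma_alias() */
#include <linux/string.h>	/* memcpy() */

/* Simplified stand-in for the driver's device table entry (4 x 64 bit). */
struct dev_table_entry {
	u64 data[4];
};

extern struct dev_table_entry *amd_iommu_dev_table;

/* Callback invoked once per requester ID the device may use on the bus. */
static int clone_alias(struct pci_dev *pdev, u16 alias, void *data)
{
	u16 devid = *(u16 *)data;

	if (alias == devid)
		return 0;	/* The real device ID was already written. */

	/* Copy the DTE so the alias gets the same translation settings. */
	memcpy(amd_iommu_dev_table[alias].data,
	       amd_iommu_dev_table[devid].data,
	       sizeof(amd_iommu_dev_table[alias].data));

	return 0;
}

static void clone_aliases(struct pci_dev *pdev, u16 devid)
{
	if (!pdev)
		return;

	pci_for_each_dma_alias(pdev, clone_alias, &devid);
}

Walking the aliases through the PCI core, rather than storing a single
alias ID per device, is what lets hardware such as Switchtec NTBs, which
can present many requester IDs, always hit a valid DTE.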