From: Linus Torvalds
Date: Sat, 6 Aug 2022 20:24:56 +0000 (-0700)
Subject: Revert "iommu/dma: Add config for PCI SAC address trick"
X-Git-Tag: v6.6.17~6862
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=af3e9579ecfbe1796334bb25a2f0a6437983673a;p=platform%2Fkernel%2Flinux-rpi.git

Revert "iommu/dma: Add config for PCI SAC address trick"

This reverts commit 4bf7fda4dce22214c70c49960b1b6438e6260b67.

It turns out that it was hopelessly naive to think that this would
work, considering that we've always done this.  The first machine I
actually tested this on broke at bootup, getting to

    Reached target cryptsetup.target - Local Encrypted Volumes.

and then hanging.

It's unclear what actually fails, since there's a lot else going on
around that time (e.g. amdgpu probing also happens around that same
time, but it could be some other random init thing that didn't
complete earlier and just caused the boot to hang at that point).

The expectation that we should default to some unsafe and untested
mode seems entirely unfounded, and the belief that this wouldn't
affect modern systems is clearly entirely false.  The machine in
question is about two years old, so it's not exactly shiny, but it's
also not some dusty old museum-piece PDP-11 in a closet.

Cc: Robin Murphy
Cc: Christoph Hellwig
Cc: John Garry
Cc: Joerg Roedel
Signed-off-by: Linus Torvalds
---

diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index 50bc696..5c5cb5b 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -144,32 +144,6 @@ config IOMMU_DMA
 	select IRQ_MSI_IOMMU
 	select NEED_SG_DMA_LENGTH
 
-config IOMMU_DMA_PCI_SAC
-	bool "Enable 64-bit legacy PCI optimisation by default"
-	depends on IOMMU_DMA
-	help
-	  Enable by default an IOMMU optimisation for 64-bit legacy PCI devices,
-	  wherein the DMA API layer will always first try to allocate a 32-bit
-	  DMA address suitable for a single address cycle, before falling back
-	  to allocating from the device's full usable address range. If your
-	  system has 64-bit legacy PCI devices in 32-bit slots where using dual
-	  address cycles reduces DMA throughput significantly, this may be
-	  beneficial to overall performance.
-
-	  If you have a modern PCI Express based system, this feature mostly just
-	  represents extra overhead in the allocation path for no practical
-	  benefit, and it should usually be preferable to say "n" here.
-
-	  However, beware that this feature has also historically papered over
-	  bugs where the IOMMU address width and/or device DMA mask is not set
-	  correctly. If device DMA problems and IOMMU faults start occurring
-	  after disabling this option, it is almost certainly indicative of a
-	  latent driver or firmware/BIOS bug, which would previously have only
-	  manifested with several gigabytes worth of concurrent DMA mappings.
-
-	  If this option is not set, the feature can still be re-enabled at
-	  boot time with the "iommu.forcedac=0" command-line argument.
-
 # Shared Virtual Addressing
 config IOMMU_SVA
 	bool

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 376c4e3a..17dd683 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -69,7 +69,7 @@ struct iommu_dma_cookie {
 };
 
 static DEFINE_STATIC_KEY_FALSE(iommu_deferred_attach_enabled);
-bool iommu_dma_forcedac __read_mostly = !IS_ENABLED(CONFIG_IOMMU_DMA_PCI_SAC);
+bool iommu_dma_forcedac __read_mostly;
 
 static int __init iommu_dma_forcedac_setup(char *str)
 {