iommu/arm-smmu: Mask TLBI address correctly
Author:     Robin Murphy <robin.murphy@arm.com>
AuthorDate: Thu, 15 Aug 2019 18:37:21 +0000 (19:37 +0100)
Commit:     Will Deacon <will@kernel.org>
CommitDate: Mon, 19 Aug 2019 15:52:47 +0000 (16:52 +0100)
The less said about "~12UL" the better. Oh dear.

The mask was meant to clear the low 12 bits (the 4K page offset), but
"~12UL" only clears bits 2 and 3. We get away with it due to calling
constraints that mean IOVAs are implicitly at least page-aligned to
begin with, but still; oh dear.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
index 64977c1..d60ee29 100644
--- a/drivers/iommu/arm-smmu.c
+++ b/drivers/iommu/arm-smmu.c
@@ -504,7 +504,7 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
                reg += leaf ? ARM_SMMU_CB_S1_TLBIVAL : ARM_SMMU_CB_S1_TLBIVA;
 
                if (cfg->fmt != ARM_SMMU_CTX_FMT_AARCH64) {
-                       iova &= ~12UL;
+                       iova = (iova >> 12) << 12;
                        iova |= cfg->asid;
                        do {
                                writel_relaxed(iova, reg);