arm64: mm: Make flush_tlb_fix_spurious_fault() a no-op
author		Will Deacon <will@kernel.org>	Wed, 30 Sep 2020 12:20:40 +0000 (13:20 +0100)
committer	Will Deacon <will@kernel.org>	Thu, 1 Oct 2020 08:45:32 +0000 (09:45 +0100)
Our use of broadcast TLB maintenance means that spurious page-faults
that have been handled already by another CPU do not require additional
TLB maintenance.

Make flush_tlb_fix_spurious_fault() a no-op and rely on the existing TLB
invalidation instead. Add an explicit flush_tlb_page() when making a page
dirty, as the TLB is permitted to cache the old read-only entry.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200728092220.GA21800@willie-the-truck
Signed-off-by: Will Deacon <will@kernel.org>
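
For context (not part of the patch itself): without an architecture-specific definition, arm64 picks up the generic fallback, which flushes the faulting page unconditionally. A rough sketch of that fallback, as it appears in include/linux/pgtable.h around this kernel version (paraphrased, so check the tree for the exact form):

	/* Generic fallback: flush the page on a spurious fault (approximate) */
	#ifndef flush_tlb_fix_spurious_fault
	#define flush_tlb_fix_spurious_fault(vma, address) flush_tlb_page(vma, address)
	#endif

Defining the macro as an empty statement in the arm64 headers therefore drops a broadcast TLB invalidation from the spurious-fault path without touching any generic code.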
arch/arm64/include/asm/pgtable.h
arch/arm64/mm/fault.c

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index bc68da9..02ad310 100644
@@ -51,6 +51,14 @@ extern void __pgd_error(const char *file, int line, unsigned long val);
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
 /*
+ * Outside of a few very special situations (e.g. hibernation), we always
+ * use broadcast TLB invalidation instructions, therefore a spurious page
+ * fault on one CPU which has been handled concurrently by another CPU
+ * does not need to perform additional invalidation.
+ */
+#define flush_tlb_fix_spurious_fault(vma, address) do { } while (0)
+
+/*
  * ZERO_PAGE is a global shared page that is always zero: used
  * for zero-mapped memory areas etc..
  */
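
The main user of the now-empty macro is the spurious-fault path in mm/memory.c. A simplified, paraphrased sketch of that caller (condensed from handle_pte_fault() around this release; not part of this diff):

	/* mm/memory.c: handle_pte_fault(), simplified */
	entry = pte_mkyoung(vmf->orig_pte);
	if (ptep_set_access_flags(vmf->vma, vmf->address, vmf->pte, entry,
				  vmf->flags & FAULT_FLAG_WRITE)) {
		/* We updated the PTE ourselves; let the arch update its caches. */
		update_mmu_cache(vmf->vma, vmf->address, vmf->pte);
	} else if (vmf->flags & FAULT_FLAG_WRITE) {
		/*
		 * Another CPU already fixed up the PTE. Its broadcast TLBI has
		 * done all the invalidation arm64 needs, so this is now a no-op.
		 */
		flush_tlb_fix_spurious_fault(vmf->vma, vmf->address);
	}

With broadcast TLB invalidation, the CPU that actually changed the PTE has already removed the stale entry from every CPU's TLB, so the else branch has nothing left to do on arm64.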
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index f07333e..a696a79 100644
@@ -218,7 +218,9 @@ int ptep_set_access_flags(struct vm_area_struct *vma,
                pteval = cmpxchg_relaxed(&pte_val(*ptep), old_pteval, pteval);
        } while (pteval != old_pteval);
 
-       flush_tlb_fix_spurious_fault(vma, address);
+       /* Invalidate a stale read-only entry */
+       if (dirty)
+               flush_tlb_page(vma, address);
        return 1;
 }
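
The explicit flush in the dirty case matters because the architecture allows the old read-only translation to remain cached after the PTE has been rewritten, so without it the faulting store could keep trapping until the stale entry happens to be evicted. Roughly, flush_tlb_page() on arm64 is a broadcast invalidate-by-VA bracketed by barriers; the sketch below is paraphrased from arch/arm64/include/asm/tlbflush.h, is not part of this patch, and may not match the exact definition in the tree:

	/* arch/arm64/include/asm/tlbflush.h, approximate shape */
	static inline void flush_tlb_page(struct vm_area_struct *vma,
					  unsigned long uaddr)
	{
		unsigned long addr = __TLBI_VADDR(uaddr, ASID(vma->vm_mm));

		dsb(ishst);			/* make the PTE update visible to the walker */
		__tlbi(vale1is, addr);		/* broadcast invalidate by VA, last level */
		__tlbi_user(vale1is, addr);
		dsb(ish);			/* wait for the invalidation to complete */
	}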