arm64: tlb: Ensure we execute an ISB following walk cache invalidation
author		Will Deacon <will@kernel.org>
		Thu, 22 Aug 2019 14:03:45 +0000 (15:03 +0100)
committer	Will Deacon <will@kernel.org>
		Tue, 27 Aug 2019 16:38:26 +0000 (17:38 +0100)
Commit 05f2d2f83b5a ("arm64: tlbflush: Introduce __flush_tlb_kernel_pgtable")
added a new TLB invalidation helper, used when freeing intermediate levels
of page table for kernel mappings, but it is missing the ISB instruction
required after completion of the TLBI. Without the ISB there is no context
synchronization event, so subsequent instructions on the local CPU are not
guaranteed to observe the walk cache invalidation.

Add the missing barrier.
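For context, a sketch of how the helper reads with the barrier added. The
function body above the quoted hunk is reconstructed from the surrounding
arch/arm64/include/asm/tlbflush.h and is not part of this patch:

	static inline void __flush_tlb_kernel_pgtable(unsigned long kaddr)
	{
		unsigned long addr = __TLBI_VADDR(kaddr, 0);

		dsb(ishst);		/* prior page-table update visible before TLBI */
		__tlbi(vaae1is, addr);	/* invalidate by VA, all ASIDs, Inner Shareable */
		dsb(ish);		/* wait for the invalidation to complete */
		isb();			/* context synchronization: later instructions
					 * cannot use stale walk cache entries */
	}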

Cc: <stable@vger.kernel.org>
Fixes: 05f2d2f83b5a ("arm64: tlbflush: Introduce __flush_tlb_kernel_pgtable")
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
arch/arm64/include/asm/tlbflush.h

index 8af7a85..bc39490 100644
@@ -251,6 +251,7 @@ static inline void __flush_tlb_kernel_pgtable(unsigned long kaddr)
        dsb(ishst);
        __tlbi(vaae1is, addr);
        dsb(ish);
+       isb();
 }
 
 #endif
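
For illustration only, a simplified sketch of the kind of caller this helper
serves, loosely modelled on the pmd_free_pte_page() path in
arch/arm64/mm/mmu.c (abbreviated, not verbatim kernel source): the table
entry is cleared, the walk cache is invalidated, and only then is the
page-table page freed.

	/* Illustrative sketch, not verbatim kernel source. */
	int pmd_free_pte_page(pmd_t *pmdp, unsigned long addr)
	{
		pte_t *table = pte_offset_kernel(pmdp, addr);

		pmd_clear(pmdp);			/* unhook the PTE table */
		__flush_tlb_kernel_pgtable(addr);	/* DSB; TLBI; DSB; now also ISB */
		pte_free_kernel(NULL, table);		/* page can no longer be walked */
		return 1;
	}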