arm64: tlbflush: add some comments for TLB batched flushing
author Yicong Yang <yangyicong@hisilicon.com>
Tue, 1 Aug 2023 12:42:03 +0000 (20:42 +0800)
committer Andrew Morton <akpm@linux-foundation.org>
Mon, 21 Aug 2023 20:37:32 +0000 (13:37 -0700)
Add comments for arch_flush_tlb_batched_pending() and
arch_tlbbatch_flush() to illustrate why only a DSB is needed.

Link: https://lkml.kernel.org/r/20230801124203.62164-1-yangyicong@huawei.com
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
Reviewed-by: Alistair Popple <apopple@nvidia.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Barry Song <21cnbao@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
arch/arm64/include/asm/tlbflush.h

index 84a05a0..55b50e1 100644
@@ -304,11 +304,26 @@ static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *b
        __flush_tlb_page_nosync(mm, uaddr);
 }
 
+/*
+ * If mprotect/munmap/etc occurs during TLB batched flushing, we need to
+ * synchronise all the TLBIs issued with a DSB to avoid the race mentioned
+ * in flush_tlb_batched_pending().
+ */
 static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
 {
        dsb(ish);
 }
 
+/*
+ * To support TLB batched flushing when unmapping multiple pages, we only
+ * issue a TLBI for each page in arch_tlbbatch_add_pending() and wait for
+ * the completion at the end in arch_tlbbatch_flush(). Since a TLBI has
+ * already been issued for each page, only a DSB is needed to synchronise
+ * their effects on the other CPUs.
+ *
+ * This saves the time otherwise spent waiting on a DSB for every page,
+ * compared with issuing a TLBI;DSB sequence for each page.
+ */
 static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 {
        dsb(ish);
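
For readers coming to this from the generic mm side, the following is a
minimal, hypothetical C sketch of how these hooks are intended to be driven:
one TLBI per page via arch_tlbbatch_add_pending(), a single DSB at the end via
arch_tlbbatch_flush(), and again only a DSB in arch_flush_tlb_batched_pending()
for a racing mprotect()/munmap(). The demo_* functions, the page loop and the
local batch variable are illustrative assumptions, not kernel code; in the
kernel the batch is managed from mm/rmap.c rather than open-coded like this.

/*
 * Illustrative sketch only -- not kernel code.  Shows the intended call
 * pattern for the batched-flush hooks documented above.
 */
#include <linux/mm.h>
#include <asm/tlbflush.h>

/*
 * Hypothetical helper: unmap [start, end) one page at a time, batching
 * the TLB invalidations instead of flushing per page.
 */
static void demo_batched_unmap(struct mm_struct *mm,
			       unsigned long start, unsigned long end)
{
	struct arch_tlbflush_unmap_batch batch = {};
	unsigned long uaddr;

	for (uaddr = start; uaddr < end; uaddr += PAGE_SIZE) {
		/* ... clear the PTE mapping uaddr here ... */

		/* Issues the TLBI for this page; no DSB yet. */
		arch_tlbbatch_add_pending(&batch, mm, uaddr);
	}

	/*
	 * Every TLBI is already in flight, so completing the batch is a
	 * single DSB rather than one DSB per page.
	 */
	arch_tlbbatch_flush(&batch);
}

/*
 * Hypothetical stand-in for a racing mprotect()/munmap() path: the generic
 * code calls flush_tlb_batched_pending(mm), which on arm64 reaches
 * arch_flush_tlb_batched_pending().  Only a DSB is needed there too,
 * because the per-page TLBIs have already been issued.
 */
static void demo_concurrent_unmapper(struct mm_struct *mm)
{
	arch_flush_tlb_batched_pending(mm);	/* dsb(ish) */
}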