mm: mmu_gather: do not expose delayed_rmap flag
author	Alexander Gordeev <agordeev@linux.ibm.com>
	Wed, 16 Nov 2022 07:49:30 +0000 (08:49 +0100)
committer	Andrew Morton <akpm@linux-foundation.org>
	Wed, 30 Nov 2022 23:58:50 +0000 (15:58 -0800)
The delayed_rmap flag of 'struct mmu_gather' is effectively a private
member, yet it is still accessed directly by callers.  Instead, let the
TLB gather code itself check the flag.

Link: https://lkml.kernel.org/r/Y3SWCu6NRaMQ5dbD@li-4a3a4a4c-28e5-11b2-a85c-a8d192c6f089.ibm.com
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/memory.c
mm/mmu_gather.c

index 6c85cba..086cb3d 100644 (file)
@@ -1465,8 +1465,7 @@ again:
        /* Do the actual TLB flush before dropping ptl */
        if (force_flush) {
                tlb_flush_mmu_tlbonly(tlb);
-               if (tlb->delayed_rmap)
-                       tlb_flush_rmaps(tlb, vma);
+               tlb_flush_rmaps(tlb, vma);
        }
        pte_unmap_unlock(start_pte, ptl);
 
index 1de1cf9..dd1f8ca 100644 (file)
@@ -61,6 +61,9 @@ void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma)
 {
        struct mmu_gather_batch *batch;
 
+       if (!tlb->delayed_rmap)
+               return;
+
        batch = tlb->active;
        for (int i = 0; i < batch->nr; i++) {
                struct encoded_page *enc = batch->encoded_pages[i];