mm: use update_mmu_tlb() on the second thread
author Qi Zheng <zhengqi.arch@bytedance.com>
Thu, 29 Sep 2022 11:23:17 +0000 (19:23 +0800)
committer Andrew Morton <akpm@linux-foundation.org>
Thu, 13 Oct 2022 01:51:50 +0000 (18:51 -0700)
As the message of commit 7df676974359 ("mm/memory.c: Update local TLB if PTE
entry exists") said, we should only update the local TLB on the second
thread.  So in do_anonymous_page(), we should use update_mmu_tlb() instead
of update_mmu_cache() on the second thread.

As David pointed out, this is a performance improvement, not a
correctness fix.

Link: https://lkml.kernel.org/r/20220929112318.32393-2-zhengqi.arch@bytedance.com
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: David Hildenbrand <david@redhat.com>
Cc: Bibo Mao <maobibo@loongson.cn>
Cc: Chris Zankel <chris@zankel.net>
Cc: Huacai Chen <chenhuacai@loongson.cn>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/memory.c

index 4ad6077..f88c351 100644
@@ -4134,7 +4134,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
        vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
                        &vmf->ptl);
        if (!pte_none(*vmf->pte)) {
-               update_mmu_cache(vma, vmf->address, vmf->pte);
+               update_mmu_tlb(vma, vmf->address, vmf->pte);
                goto release;
        }
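
For illustration, a minimal sketch of the generic update_mmu_tlb() fallback,
paraphrased from include/linux/pgtable.h (as introduced by commit
7df676974359); exact guards and formatting may differ by kernel version.
Architectures with software-managed TLBs (e.g. MIPS) provide their own
update_mmu_tlb(); on all other architectures the stub below compiles to
nothing, which is why the switch away from update_mmu_cache() is a
performance tweak rather than a correctness fix.

/*
 * Sketch of the generic fallback: if an architecture does not define its
 * own update_mmu_tlb(), the call on the "PTE already populated" path is a
 * no-op, so it only drops unnecessary work compared to update_mmu_cache().
 */
#ifndef __HAVE_ARCH_UPDATE_MMU_TLB
static inline void update_mmu_tlb(struct vm_area_struct *vma,
				  unsigned long address, pte_t *ptep)
{
}
#define __HAVE_ARCH_UPDATE_MMU_TLB
#endif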