From: Matthew Wilcox (Oracle)
Date: Wed, 20 Sep 2023 03:53:35 +0000 (+0100)
Subject: mm: report success more often from filemap_map_folio_range()
X-Git-Tag: v6.6.7~1801^2~12
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=a501a0703044f00180d7697b32cacd7ff46d02d8;p=platform%2Fkernel%2Flinux-starfive.git

mm: report success more often from filemap_map_folio_range()

Even though we had successfully mapped the relevant page, we would rarely
return success from filemap_map_folio_range().  That leads to falling back
from the VMA lock path to the mmap_lock path, which is a speed &
scalability issue.  Found by inspection.

Link: https://lkml.kernel.org/r/20230920035336.854212-1-willy@infradead.org
Fixes: 617c28ecab22 ("filemap: batch PTE mappings")
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Yin Fengwei
Cc: Dave Hansen
Cc: David Hildenbrand
Cc: Thomas Gleixner
Signed-off-by: Andrew Morton
---

diff --git a/mm/filemap.c b/mm/filemap.c
index 4ea4387..f0a15ce 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3503,7 +3503,7 @@ skip:
 	if (count) {
 		set_pte_range(vmf, folio, page, count, addr);
 		folio_ref_add(folio, count);
-		if (in_range(vmf->address, addr, count))
+		if (in_range(vmf->address, addr, count * PAGE_SIZE))
 			ret = VM_FAULT_NOPAGE;
 	}
@@ -3517,7 +3517,7 @@ skip:
 	if (count) {
 		set_pte_range(vmf, folio, page, count, addr);
 		folio_ref_add(folio, count);
-		if (in_range(vmf->address, addr, count))
+		if (in_range(vmf->address, addr, count * PAGE_SIZE))
 			ret = VM_FAULT_NOPAGE;
 	}