mm/gup: cleanup next_page handling
author: Peter Xu <peterx@redhat.com>
Wed, 28 Jun 2023 21:53:06 +0000 (17:53 -0400)
committer: Andrew Morton <akpm@linux-foundation.org>
Fri, 18 Aug 2023 17:12:03 +0000 (10:12 -0700)
The only path that doesn't use the generic "**pages" handling is the gate vma.
Make it use the same path, and meanwhile move the next_page label up so that
it covers the "**pages" handling.  This prepares for THP handling of "**pages".

Link: https://lkml.kernel.org/r/20230628215310.73782-5-peterx@redhat.com
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Lorenzo Stoakes <lstoakes@gmail.com>
Acked-by: David Hildenbrand <david@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: James Houghton <jthoughton@google.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Kirill A. Shutemov <kirill@shutemov.name>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Mike Rapoport (IBM) <rppt@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Yang Shi <shy828301@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/gup.c

index 818d98b..d70f8f0 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1207,7 +1207,7 @@ static long __get_user_pages(struct mm_struct *mm,
                        if (!vma && in_gate_area(mm, start)) {
                                ret = get_gate_page(mm, start & PAGE_MASK,
                                                gup_flags, &vma,
-                                               pages ? &pages[i] : NULL);
+                                               pages ? &page : NULL);
                                if (ret)
                                        goto out;
                                ctx.page_mask = 0;
@@ -1277,19 +1277,18 @@ retry:
                                ret = PTR_ERR(page);
                                goto out;
                        }
-
-                       goto next_page;
                } else if (IS_ERR(page)) {
                        ret = PTR_ERR(page);
                        goto out;
                }
+next_page:
                if (pages) {
                        pages[i] = page;
                        flush_anon_page(vma, page, start);
                        flush_dcache_page(page);
                        ctx.page_mask = 0;
                }
-next_page:
+
                page_increm = 1 + (~(start >> PAGE_SHIFT) & ctx.page_mask);
                if (page_increm > nr_pages)
                        page_increm = nr_pages;