mm/hugetlb: prepare hugetlb_follow_page_mask() for FOLL_PIN
author		Peter Xu <peterx@redhat.com>
		Wed, 28 Jun 2023 21:53:04 +0000 (17:53 -0400)
committer	Andrew Morton <akpm@linux-foundation.org>
		Fri, 18 Aug 2023 17:12:03 +0000 (10:12 -0700)
follow_page() doesn't use FOLL_PIN, and hugetlb does not currently seem to
be a target of FOLL_WRITE either.  Still, add the checks.
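
For reference, follow_page() itself already rejects FOLL_PIN at its entry
point, so the hugetlb-side WARN removed below never triggered through that
path.  A minimal sketch, simplified from mm/gup.c of this era (the real
function also handles secretmem and the follow_page_context/pgmap):

	struct page *follow_page(struct vm_area_struct *vma,
				 unsigned long address, unsigned int foll_flags)
	{
		struct follow_page_context ctx = { NULL };

		/* Pins are not supported via this entry point at all. */
		if (WARN_ON_ONCE(foll_flags & FOLL_PIN))
			return NULL;

		return follow_page_mask(vma, address, foll_flags, &ctx);
	}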

Namely, reject the follow-page attempt either when CoW is needed due to a
missing write bit, or when proper unsharing is required for a !AnonExclusive
page under a R/O pin.  That brings this function closer to
follow_hugetlb_page().
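
The unsharing decision itself comes from gup_must_unshare().  A minimal
sketch of the idea, assuming the mm/internal.h helper of this era (the real
one also handles gup-fast memory ordering and uses the vma for the shared
zeropage case):

	static bool gup_must_unshare_sketch(struct vm_area_struct *vma,
					    unsigned int flags,
					    struct page *page)
	{
		if (!(flags & FOLL_PIN))	/* only pins are affected */
			return false;
		if (flags & FOLL_WRITE)		/* writes CoW/unshare anyway */
			return false;
		if (!PageAnon(page))		/* CoW sharing is anon-only */
			return false;
		/* R/O pins may only proceed on exclusively-owned pages. */
		return !PageAnonExclusive(page);
	}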

So we didn't care before, and we still don't for now.  But we will care once
slow-gup is switched over to use hugetlb_follow_page_mask().  We will then
also care about returning -EMLINK properly, as that is the gup-internal API
for "we should unshare".  It is not really needed for the follow-page path,
though.

While at it, switch the try_grab_page() check to WARN_ON_ONCE(), to make
clear that it should simply never fail.  When an error does happen, capture
the errno rather than setting page==NULL.
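
That gives the function three distinct outcomes.  A caller would tell them
apart roughly as below (a sketch against the signature at this point in the
series, decoded with the usual IS_ERR()/PTR_ERR() helpers):

	page = hugetlb_follow_page_mask(vma, address, flags);
	if (IS_ERR(page))
		return PTR_ERR(page);	/* e.g. -EMLINK: unshare first */
	if (!page)
		return -EFAULT;		/* nothing mapped, or CoW needed */
	/* success: try_grab_page() already took a reference */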

Link: https://lkml.kernel.org/r/20230628215310.73782-3-peterx@redhat.com
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: James Houghton <jthoughton@google.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Kirill A. Shutemov <kirill@shutemov.name>
Cc: Lorenzo Stoakes <lstoakes@gmail.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mike Rapoport (IBM) <rppt@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Yang Shi <shy828301@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 4fb396d..cc87a51 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6462,13 +6462,7 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
        struct page *page = NULL;
        spinlock_t *ptl;
        pte_t *pte, entry;
-
-       /*
-        * FOLL_PIN is not supported for follow_page(). Ordinary GUP goes via
-        * follow_hugetlb_page().
-        */
-       if (WARN_ON_ONCE(flags & FOLL_PIN))
-               return NULL;
+       int ret;
 
        hugetlb_vma_lock_read(vma);
        pte = hugetlb_walk(vma, haddr, huge_page_size(h));
@@ -6478,8 +6472,23 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
        ptl = huge_pte_lock(h, mm, pte);
        entry = huge_ptep_get(pte);
        if (pte_present(entry)) {
-               page = pte_page(entry) +
-                               ((address & ~huge_page_mask(h)) >> PAGE_SHIFT);
+               page = pte_page(entry);
+
+               if (!huge_pte_write(entry)) {
+                       if (flags & FOLL_WRITE) {
+                               page = NULL;
+                               goto out;
+                       }
+
+                       if (gup_must_unshare(vma, flags, page)) {
+                               /* Tell the caller to do unsharing */
+                               page = ERR_PTR(-EMLINK);
+                               goto out;
+                       }
+               }
+
+               page += ((address & ~huge_page_mask(h)) >> PAGE_SHIFT);
+
                /*
                 * Note that page may be a sub-page, and with vmemmap
                 * optimizations the page struct may be read only.
@@ -6489,8 +6498,10 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
                 * try_grab_page() should always be able to get the page here,
                 * because we hold the ptl lock and have verified pte_present().
                 */
-               if (try_grab_page(page, flags)) {
-                       page = NULL;
+               ret = try_grab_page(page, flags);
+
+               if (WARN_ON_ONCE(ret)) {
+                       page = ERR_PTR(ret);
                        goto out;
                }
        }