mm: fix failure to unmap pte on highmem systems
author Ryan Roberts <ryan.roberts@arm.com>
Fri, 2 Jun 2023 09:29:49 +0000 (10:29 +0100)
committer Andrew Morton <akpm@linux-foundation.org>
Fri, 9 Jun 2023 23:25:55 +0000 (16:25 -0700)
The loser of a race to service a pte for a device private entry in the
swap path previously unlocked the ptl, but failed to unmap the pte.  This
only affects highmem systems since unmapping a pte is a noop on
non-highmem systems.
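
For context: with CONFIG_HIGHPTE (highmem), pte_offset_map_lock() kmaps the
page-table page in addition to taking the pte lock, so every exit path must
tear both down with pte_unmap_unlock() rather than spin_unlock() alone,
otherwise the temporary kernel mapping is leaked.  A minimal illustrative
sketch of the required pairing (the local names are illustrative; this is not
the patched function itself):

	pte = pte_offset_map_lock(mm, pmd, addr, &ptl);	/* kmap (highmem) + lock */
	if (unlikely(!pte_same(*pte, orig_pte))) {
		pte_unmap_unlock(pte, ptl);		/* kunmap + unlock */
		goto out;
	}
	/* ... use *pte while it is mapped and the ptl is held ... */
	pte_unmap_unlock(pte, ptl);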

Link: https://lkml.kernel.org/r/20230602092949.545577-5-ryan.roberts@arm.com
Fixes: 16ce101db85d ("mm/memory.c: fix race when faulting a device private page")
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Mike Rapoport (IBM) <rppt@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Lorenzo Stoakes <lstoakes@gmail.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: SeongJae Park <sj@kernel.org>
Cc: Uladzislau Rezki (Sony) <urezki@gmail.com>
Cc: Yu Zhao <yuzhao@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
diff --git a/mm/memory.c b/mm/memory.c
index 4dd09f9..36082fd 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3728,10 +3728,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
                        vmf->page = pfn_swap_entry_to_page(entry);
                        vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
                                        vmf->address, &vmf->ptl);
-                       if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte))) {
-                               spin_unlock(vmf->ptl);
-                               goto out;
-                       }
+                       if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte)))
+                               goto unlock;
 
                        /*
                         * Get a page reference while we know the page can't be
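
The new goto target matters because the unlock label at the bottom of
do_swap_page() pairs the unmap with the unlock before falling through to the
common exit, so the highmem kmap taken by pte_offset_map_lock() is dropped on
this error path as well.  A rough sketch of that tail, assuming the label
layout matches mainline at the time of this commit:

unlock:
	pte_unmap_unlock(vmf->pte, vmf->ptl);	/* kunmap pte page (highmem), then drop ptl */
out:
	/* ... common cleanup, return ret ... */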