mm: pagewalk: Take the pagetable lock in walk_pte_range()
Author:     Thomas Hellstrom <thellstrom@vmware.com>
AuthorDate: Fri, 4 Oct 2019 09:04:43 +0000 (11:04 +0200)
Commit:     Thomas Hellstrom <thellstrom@vmware.com>
CommitDate: Wed, 6 Nov 2019 12:02:25 +0000 (13:02 +0100)
Without the page table lock held, a callback that modifies a pte from
within this function can race with a concurrent modification of the
same pte by another thread.
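
For context, a minimal sketch of the kind of pte_entry callback this
change protects (the callback name and logic are illustrative, not part
of this patch): it can modify ptes without taking the lock itself,
because walk_pte_range() now holds the pte spinlock around each
ops->pte_entry() invocation.

#include <linux/mm.h>
#include <linux/pagewalk.h>

/*
 * Hypothetical example: write-protect each present, writable pte.
 * Relies on walk_pte_range() holding the pte spinlock; TLB flushing
 * is omitted for brevity and would be done after the walk.
 */
static int wp_pte_entry(pte_t *pte, unsigned long addr,
			unsigned long next, struct mm_walk *walk)
{
	pte_t ptent = *pte;

	if (pte_present(ptent) && pte_write(ptent))
		set_pte_at(walk->mm, addr, pte, pte_wrprotect(ptent));
	return 0;
}

static const struct mm_walk_ops wp_walk_ops = {
	.pte_entry = wp_pte_entry,
};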

Cc: Matthew Wilcox <willy@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Jérôme Glisse <jglisse@redhat.com>
Cc: Kirill A. Shutemov <kirill@shutemov.name>
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index d48c2a9..c5fa42c 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -10,8 +10,9 @@ static int walk_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
        pte_t *pte;
        int err = 0;
        const struct mm_walk_ops *ops = walk->ops;
+       spinlock_t *ptl;
 
-       pte = pte_offset_map(pmd, addr);
+       pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
        for (;;) {
                err = ops->pte_entry(pte, addr, addr + PAGE_SIZE, walk);
                if (err)
@@ -22,7 +23,7 @@ static int walk_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
                pte++;
        }
 
-       pte_unmap(pte);
+       pte_unmap_unlock(pte, ptl);
        return err;
 }
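
Usage note (illustrative, not part of the patch): a caller passes such
ops to walk_page_range() with mmap_sem held for read; the pte spinlock
is then taken and dropped internally, once per pte table:

	/* Hypothetical caller, assuming the wp_walk_ops sketched above. */
	down_read(&mm->mmap_sem);
	err = walk_page_range(mm, start, end, &wp_walk_ops, NULL);
	up_read(&mm->mmap_sem);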