mm: numa: do not clear PMD during PTE update scan
author Mel Gorman <mgorman@suse.de>
Tue, 7 Jan 2014 14:00:39 +0000 (14:00 +0000)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thu, 9 Jan 2014 20:25:13 +0000 (12:25 -0800)
commit 5a6dac3ec5f583cc8ee7bc53b5500a207c4ca433 upstream.

If the PMD is cleared (and flushed) during the PTE update scan, a
parallel fault in handle_mm_fault() sees pmd_none and enters the
do_huge_pmd_anonymous_page() path, where it will attempt to insert a
huge zero page.  This is wasteful, so the patch avoids clearing the
PMD when setting pmd_numa.
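
For illustration, the interleaving being avoided looks roughly like
this (a simplified sketch; the fault-side lines are paraphrased from
the 3.12-era handle_mm_fault() path, not quoted verbatim):

    NUMA hinting scan (CPU 0)            parallel fault (CPU 1)
    -------------------------            ----------------------
    change_huge_pmd(..., prot_numa=1)
      entry = pmdp_get_and_clear(mm,
                                 addr, pmd);
                                         handle_mm_fault()
                                           /* lockless check sees the
                                            * transiently cleared PMD */
                                           if (pmd_none(*pmd) && ...)
                                             do_huge_pmd_anonymous_page()
                                               /* sets up a huge zero
                                                * page that is discarded
                                                * once the PMD reappears */
      set_pmd_at(mm, addr, pmd,
                 pmd_mknuma(entry));

The fault side's zero-page setup and lock round-trip are pure
overhead, since the scan restores the PMD immediately afterwards; by
never clearing the PMD while setting pmd_numa, the window disappears.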

Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Alex Thorlton <athorlton@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 90cd2c3..5003349 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1474,20 +1474,22 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 
        if (__pmd_trans_huge_lock(pmd, vma) == 1) {
                pmd_t entry;
-               entry = pmdp_get_and_clear(mm, addr, pmd);
                if (!prot_numa) {
+                       entry = pmdp_get_and_clear(mm, addr, pmd);
                        entry = pmd_modify(entry, newprot);
                        BUG_ON(pmd_write(entry));
+                       set_pmd_at(mm, addr, pmd, entry);
                } else {
                        struct page *page = pmd_page(*pmd);
+                       entry = *pmd;
 
                        /* only check non-shared pages */
                        if (page_mapcount(page) == 1 &&
                            !pmd_numa(*pmd)) {
                                entry = pmd_mknuma(entry);
+                               set_pmd_at(mm, addr, pmd, entry);
                        }
                }
-               set_pmd_at(mm, addr, pmd, entry);
                spin_unlock(&vma->vm_mm->page_table_lock);
                ret = 1;
        }
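
For reference, this is how the locked section reads once the hunk is
applied (reconstructed from the context lines above; code outside the
hunk is elided):

        if (__pmd_trans_huge_lock(pmd, vma) == 1) {
                pmd_t entry;
                if (!prot_numa) {
                        /* mprotect() path: the PMD is really being
                         * rewritten, so clearing it first is needed */
                        entry = pmdp_get_and_clear(mm, addr, pmd);
                        entry = pmd_modify(entry, newprot);
                        BUG_ON(pmd_write(entry));
                        set_pmd_at(mm, addr, pmd, entry);
                } else {
                        struct page *page = pmd_page(*pmd);
                        entry = *pmd;   /* read in place; never cleared */

                        /* only check non-shared pages */
                        if (page_mapcount(page) == 1 &&
                            !pmd_numa(*pmd)) {
                                entry = pmd_mknuma(entry);
                                set_pmd_at(mm, addr, pmd, entry);
                        }
                }
                spin_unlock(&vma->vm_mm->page_table_lock);
                ret = 1;
        }

Note the asymmetry this introduces: only the mprotect() path
(!prot_numa) still uses pmdp_get_and_clear(), since it genuinely
changes protections; the prot_numa path only needs to set the NUMA
bit, so set_pmd_at() overwrites the live entry without the PMD ever
becoming pmd_none.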