thp: close race between split and zap huge pages
author    Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
          Fri, 18 Apr 2014 22:07:25 +0000 (15:07 -0700)
committer Jiri Slaby <jslaby@suse.cz>
          Tue, 13 Jan 2015 07:47:49 +0000 (08:47 +0100)
commit b5a8cad376eebbd8598642697e92a27983aee802 upstream.

[stable 3.12 note]
This commit was supposed to fix a completely different issue. But in 3.12,
with commit f72e7dcdd25229446b102e587ef2f826f76bff28 (mm: let
mm_find_pmd fix buggy race with THP fault), we need this commit as
well (it fixes the issue as a by-product). Hugh Dickins writes:
<== citation starts here>
Fine for this to go in, but there is one catch, which I discovered
when backporting to v3.11: it needed one more hunk.  I haven't checked
your base tree, but if this applies then I believe you need it - most
of the time no problem, but it can cause page migration to fail to find
a migration entry it inserted earlier, then BUG_ON(!PageLocked(p)) in
migration_entry_to_page() soon after.  Here's what I wrote back then:

Note on rebase to v3.11: added a hunk to replace the use of
mm_find_pmd() in page_check_address_pmd().  This call had been
similarly replaced by the time of my v3.16 commit, in Kirill
Shutemov's v3.15 b5a8cad376ee ("thp: close race between split and zap
huge pages"): which we do not need as such, since it's fixing v3.13
117b0791ac42 ("mm, thp: move ptl taking inside
page_check_address_pmd()"), from a split page-table-lock series we are
not backporting.  But without this additional hunk, rmap sometimes
broke when the new semantic for mm_find_pmd() was used here.
<== end of citation>

But instead of appending hunks to commits, I am taking a full,
backported version of commit b5a8cad376ee with this note prepended.

So the changelog of b5a8cad376ee is left below, but does not apply to
3.12 yet.
[=== stable 3.12 note ends here]

Sasha Levin has reported two THP BUGs[1][2].  I believe both of them
have the same root cause.  Let's look at them one by one.

The first bug[1] is "kernel BUG at mm/huge_memory.c:1829!".  It's
BUG_ON(mapcount != page_mapcount(page)) in __split_huge_page().  From my
testing I see that page_mapcount() is higher than mapcount here.

I think it happens due to a race between zap_huge_pmd() and
page_check_address_pmd().  page_check_address_pmd() misses the PMD which
is under zap:

CPU0                                    CPU1
                                        zap_huge_pmd()
                                          pmdp_get_and_clear()
__split_huge_page()
  anon_vma_interval_tree_foreach()
    __split_huge_page_splitting()
      page_check_address_pmd()
        mm_find_pmd()
          /*
           * We check if PMD present without taking ptl: no
           * serialization against zap_huge_pmd(). We miss this PMD,
           * it's not accounted to 'mapcount' in __split_huge_page().
           */
          pmd_present(pmd) == 0

  BUG_ON(mapcount != page_mapcount(page)) // CRASH!!!

                                          page_remove_rmap(page)
                                            atomic_add_negative(-1, &page->_mapcount)

The second bug[2] is "kernel BUG at mm/huge_memory.c:1371!".
It's VM_BUG_ON_PAGE(!PageHead(page), page) in zap_huge_pmd().

This happens in a similar way:

CPU0                                    CPU1
                                        zap_huge_pmd()
                                          pmdp_get_and_clear()
                                          page_remove_rmap(page)
                                            atomic_add_negative(-1, &page->_mapcount)
__split_huge_page()
  anon_vma_interval_tree_foreach()
    __split_huge_page_splitting()
      page_check_address_pmd()
        mm_find_pmd()
          pmd_present(pmd) == 0 /* The same comment as above */

  /*
   * No crash this time since we already decremented page->_mapcount in
   * zap_huge_pmd().
   */
  BUG_ON(mapcount != page_mapcount(page))

  /*
   * We split the compound page here into small pages without
   * serialization against zap_huge_pmd()
   */
  __split_huge_page_refcount()

                                        VM_BUG_ON_PAGE(!PageHead(page), page); // CRASH!!!

So my understanding is that the problem is the pmd_present() check in
mm_find_pmd() done without taking the page table lock.
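
To make the window concrete, here is a minimal user-space analogy of the
race. This is a sketch only: none of it is kernel code, and the mutex
standing in for the page table lock plus the two counters are made up
for the illustration. One thread plays zap_huge_pmd(), clearing the
entry and dropping the mapcount under the lock; the other plays the
split path and checks presence without taking the lock, so it can
observe the entry already cleared while the mapcount is still 1:

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static pthread_mutex_t ptl = PTHREAD_MUTEX_INITIALIZER; /* "page table lock" */
    static atomic_int entry_present;        /* stands in for pmd_present(*pmd) */
    static atomic_int mapcount;             /* stands in for page->_mapcount */

    static void *zap(void *arg)             /* plays zap_huge_pmd() */
    {
            (void)arg;
            pthread_mutex_lock(&ptl);
            entry_present = 0;              /* pmdp_get_and_clear() */
            mapcount--;                     /* page_remove_rmap() */
            pthread_mutex_unlock(&ptl);
            return NULL;
    }

    static void *split(void *arg)           /* plays __split_huge_page() */
    {
            int counted = 0, mc;

            (void)arg;
            /* buggy: presence checked without taking the lock */
            if (entry_present)
                    counted++;
            mc = mapcount;
            /* analogue of BUG_ON(mapcount != page_mapcount(page)) */
            if (counted != mc)
                    fprintf(stderr, "mismatch: counted=%d mapcount=%d\n",
                            counted, mc);
            return NULL;
    }

    int main(void)
    {
            for (int i = 0; i < 100000; i++) {
                    pthread_t a, b;

                    entry_present = 1;
                    mapcount = 1;
                    pthread_create(&a, NULL, zap, NULL);
                    pthread_create(&b, NULL, split, NULL);
                    pthread_join(a, NULL);
                    pthread_join(b, NULL);
            }
            return 0;
    }

The window is only a couple of stores wide, so many iterations may be
needed before the mismatch prints.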

The bug was introduced by my commit 117b0791ac42. Sorry for that. :(

Let's open-code mm_find_pmd() in page_check_address_pmd() and do the
check under the page table lock.
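
In the analogy above, the fix corresponds to doing the presence check
while holding the same lock the zapping thread takes (again a sketch
only, reusing the made-up names from the previous snippet, not the
kernel code):

    static void *split_fixed(void *arg)     /* split side after the fix */
    {
            int counted = 0;

            (void)arg;
            pthread_mutex_lock(&ptl);       /* now serialized against zap() */
            if (entry_present)
                    counted++;
            /* can no longer fire: zap()'s two updates are atomic w.r.t. us */
            if (counted != mapcount)
                    fprintf(stderr, "impossible\n");
            pthread_mutex_unlock(&ptl);
            return NULL;
    }

With the lock held, the checker sees either the state before zap()
(entry present, mapcount 1) or after it (entry gone, mapcount 0), never
the half-done state in between.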

Note that __page_check_address() does the same for PTE entries
if sync != 0.

I've stress-tested the split and zap code paths for 36+ hours by now and
don't see crashes with the patch applied. Before, it took <20 minutes to
trigger the first bug and a few hours for the second one (if we ignore
the first).

[1] https://lkml.kernel.org/g/<53440991.9090001@oracle.com>
[2] https://lkml.kernel.org/g/<5310C56C.60709@oracle.com>

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reported-by: Sasha Levin <sasha.levin@oracle.com>
Tested-by: Sasha Levin <sasha.levin@oracle.com>
Cc: Bob Liu <lliubbo@gmail.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Michel Lespinasse <walken@google.com>
Cc: Dave Jones <davej@redhat.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 04d17ba008933f72093c32579e377dc1873689d2..04535b64119c2d331b9a029bc1076d711667cd13 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1541,15 +1541,22 @@ pmd_t *page_check_address_pmd(struct page *page,
                              unsigned long address,
                              enum page_check_address_pmd_flag flag)
 {
+       pgd_t *pgd;
+       pud_t *pud;
        pmd_t *pmd, *ret = NULL;
 
        if (address & ~HPAGE_PMD_MASK)
                goto out;
 
-       pmd = mm_find_pmd(mm, address);
-       if (!pmd)
+       pgd = pgd_offset(mm, address);
+       if (!pgd_present(*pgd))
                goto out;
-       if (pmd_none(*pmd))
+       pud = pud_offset(pgd, address);
+       if (!pud_present(*pud))
+               goto out;
+       pmd = pmd_offset(pud, address);
+
+       if (!pmd_present(*pmd))
                goto out;
        if (pmd_page(*pmd) != page)
                goto out;