mm/memremap: fix memunmap_pages() race with get_dev_pagemap()
Author:     Miaohe Lin <linmiaohe@huawei.com>
AuthorDate: Thu, 9 Jun 2022 12:13:05 +0000 (20:13 +0800)
Commit:     Greg Kroah-Hartman <gregkh@linuxfoundation.org>
CommitDate: Wed, 17 Aug 2022 12:23:44 +0000 (14:23 +0200)
[ Upstream commit 1e57ffb6e3fd9583268c6462c4e3853575b21701 ]

Consider the following race:

 CPU1                                      CPU2
 memunmap_pages
   percpu_ref_exit
     __percpu_ref_exit
       free_percpu(percpu_count);
         /* percpu_count is freed here! */
                                           get_dev_pagemap
                                             xa_load(&pgmap_array, PHYS_PFN(phys))
                                               /* pgmap still in the pgmap_array */
                                             percpu_ref_tryget_live(&pgmap->ref)
                                               if __ref_is_percpu
                                                 /* __PERCPU_REF_ATOMIC_DEAD not set yet */
                                                 this_cpu_inc(*percpu_count)
                                                   /* access freed percpu_count here! */
       ref->percpu_count_ptr = __PERCPU_REF_ATOMIC_DEAD;
         /* too late... */
   pageunmap_range
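
For context, the racing reader on CPU2 is the slow-path lookup in
get_dev_pagemap().  A minimal sketch of that path (the cached-pgmap fast
path is omitted and details may differ between kernel versions):

	/* mm/memremap.c (simplified sketch): the lookup that races with teardown */
	struct dev_pagemap *get_dev_pagemap(unsigned long pfn,
					    struct dev_pagemap *pgmap)
	{
		resource_size_t phys = PFN_PHYS(pfn);

		/* slow path: look the pfn up in the global pgmap_array */
		rcu_read_lock();
		pgmap = xa_load(&pgmap_array, PHYS_PFN(phys));
		/*
		 * Nothing has erased the entry from pgmap_array yet, so the
		 * lookup succeeds even though memunmap_pages() may already
		 * have called percpu_ref_exit() and freed the percpu
		 * counters that percpu_ref_tryget_live() touches.
		 */
		if (pgmap && !percpu_ref_tryget_live(&pgmap->ref))
			pgmap = NULL;
		rcu_read_unlock();

		return pgmap;
	}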

To fix the issue, call percpu_ref_exit() only after pgmap_array has been
emptied, so percpu_ref_tryget_live() can no longer run against a
percpu_ref that is being freed.
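
The window exists because __percpu_ref_exit() frees the percpu counters
before it marks the ref as atomic/dead.  Roughly (a sketch of the
lib/percpu-refcount.c teardown path; exact code differs by version):

	/* lib/percpu-refcount.c (sketch): why the free/DEAD ordering leaves a window */
	static void __percpu_ref_exit(struct percpu_ref *ref)
	{
		unsigned long __percpu *percpu_count = percpu_count_ptr(ref);

		if (percpu_count) {
			/* non-NULL confirms that percpu_count is not being used */
			WARN_ON_ONCE(ref->data && ref->data->confirm_switch);
			free_percpu(percpu_count);
			/*
			 * The ref is marked dead only after the free.  A
			 * concurrent percpu_ref_tryget_live() that passed
			 * __ref_is_percpu() before this store still does
			 * this_cpu_inc() on the freed counters.
			 */
			ref->percpu_count_ptr = __PERCPU_REF_ATOMIC_DEAD;
		}
	}

With percpu_ref_exit() moved after the pageunmap_range() loop, the pgmap
has already been erased from pgmap_array by the time its counters are
freed, so the lookup above can no longer find it.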

Link: https://lkml.kernel.org/r/20220609121305.2508-1-linmiaohe@huawei.com
Fixes: b7b3c01b1915 ("mm/memremap_pages: support multiple ranges per invocation")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
diff --git a/mm/memremap.c b/mm/memremap.c
index a638a27d89f5e881c5886519a1a6627d4feb5804..8d743cbc2964237d2cd2489706a043da26f3b355 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -148,10 +148,10 @@ void memunmap_pages(struct dev_pagemap *pgmap)
                for_each_device_pfn(pfn, pgmap, i)
                        put_page(pfn_to_page(pfn));
        wait_for_completion(&pgmap->done);
-       percpu_ref_exit(&pgmap->ref);
 
        for (i = 0; i < pgmap->nr_range; i++)
                pageunmap_range(pgmap, i);
+       percpu_ref_exit(&pgmap->ref);
 
        WARN_ONCE(pgmap->altmap.alloc, "failed to free all reserved pages\n");
        devmap_managed_enable_put(pgmap);