mm/zsmalloc.c: close race window between zs_pool_dec_isolated() and zs_unregister_migration()
author Miaohe Lin <linmiaohe@huawei.com>
Fri, 5 Nov 2021 20:45:03 +0000 (13:45 -0700)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thu, 18 Nov 2021 13:04:26 +0000 (14:04 +0100)
[ Upstream commit afe8605ca45424629fdddfd85984b442c763dc47 ]

There is one possible race window between zs_pool_dec_isolated() and
zs_unregister_migration() because wait_for_isolated_drain() checks the
isolated count without holding class->lock and there is no order inside
zs_pool_dec_isolated().  Thus the below race window could be possible:

  zs_pool_dec_isolated                  zs_unregister_migration
    check pool->destroying != 0
                                        pool->destroying = true;
                                        smp_mb();
                                        wait_for_isolated_drain()
                                          wait for pool->isolated_pages == 0
    atomic_long_dec(&pool->isolated_pages);
    atomic_long_read(&pool->isolated_pages) == 0

Since pool->destroying is observed as false before the atomic_long_dec() on
pool->isolated_pages, the wakeup on pool->migration_wait is missed.

Fix this by ensuring that the check of pool->destroying happens after the
atomic_long_dec(&pool->isolated_pages).

Link: https://lkml.kernel.org/r/20210708115027.7557-1-linmiaohe@huawei.com
Fixes: 701d678599d0 ("mm/zsmalloc.c: fix race condition in zs_destroy_pool")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: Henry Burns <henryburns@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
mm/zsmalloc.c

index 7a0b79b0a689967102663cdd9759b970e22bc24e..73cd50735df29a341bb62316ec3bc9d6f8af064b 100644 (file)
@@ -1835,10 +1835,11 @@ static inline void zs_pool_dec_isolated(struct zs_pool *pool)
        VM_BUG_ON(atomic_long_read(&pool->isolated_pages) <= 0);
        atomic_long_dec(&pool->isolated_pages);
        /*
-        * There's no possibility of racing, since wait_for_isolated_drain()
-        * checks the isolated count under &class->lock after enqueuing
-        * on migration_wait.
+        * Checking pool->destroying must happen after atomic_long_dec()
+        * for pool->isolated_pages above. Paired with the smp_mb() in
+        * zs_unregister_migration().
         */
+       smp_mb__after_atomic();
        if (atomic_long_read(&pool->isolated_pages) == 0 && pool->destroying)
                wake_up_all(&pool->migration_wait);
 }