mm/huge_memory: prevent THP_ZERO_PAGE_ALLOC from being incremented twice
author Liu Shixin <liushixin2@huawei.com>
Fri, 9 Sep 2022 02:16:53 +0000 (10:16 +0800)
committer Andrew Morton <akpm@linux-foundation.org>
Mon, 3 Oct 2022 21:03:08 +0000 (14:03 -0700)
A user reading THP_ZERO_PAGE_ALLOC is most likely interested in the huge
zero pages that are actually allocated for THP.  It is misleading to
increment THP_ZERO_PAGE_ALLOC twice when two threads call
get_huge_zero_page() concurrently, since one of the two allocated pages
is dropped.  Don't increment the counter if the allocated huge zero page
is not actually used.

Update Documentation/admin-guide/mm/transhuge.rst to suit.
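
For context, a simplified sketch of get_huge_zero_page() with this patch
applied (reconstructed from the hunk in mm/huge_memory.c below; lines
outside the hunk, such as the refcount fast path and the exact GFP
flags, are paraphrased from the surrounding kernel code), showing that
only the thread whose cmpxchg() actually installs the page bumps the
counter:

	static bool get_huge_zero_page(void)
	{
		struct page *zero_page;
	retry:
		/* Fast path: the huge zero page already exists, take a reference. */
		if (likely(atomic_inc_not_zero(&huge_zero_refcount)))
			return true;

		zero_page = alloc_pages((GFP_TRANSHUGE | __GFP_ZERO) & ~__GFP_MOVABLE,
					HPAGE_PMD_ORDER);
		if (!zero_page) {
			count_vm_event(THP_ZERO_PAGE_ALLOC_FAILED);
			return false;
		}
		preempt_disable();
		if (cmpxchg(&huge_zero_page, NULL, zero_page)) {
			/* Lost the race: another thread installed its page first. */
			preempt_enable();
			__free_pages(zero_page, compound_order(zero_page));
			goto retry;
		}

		/* We take additional reference here. It will be put back by shrinker */
		atomic_set(&huge_zero_refcount, 2);
		preempt_enable();
		/* Count only the allocation that is actually used. */
		count_vm_event(THP_ZERO_PAGE_ALLOC);
		return true;
	}

The counter itself is exported through /proc/vmstat as
thp_zero_page_alloc.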

Link: https://lkml.kernel.org/r/20220909021653.3371879-1-liushixin2@huawei.com
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Documentation/admin-guide/mm/transhuge.rst
mm/huge_memory.c

diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
index c9c37f1..8e3418e 100644
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -366,10 +366,9 @@ thp_split_pmd
        page table entry.
 
 thp_zero_page_alloc
-       is incremented every time a huge zero page is
-       successfully allocated. It includes allocations which where
-       dropped due race with other allocation. Note, it doesn't count
-       every map of the huge zero page, only its allocation.
+       is incremented every time a huge zero page used for thp is
+       successfully allocated. Note, it doesn't count every map of
+       the huge zero page, only its allocation.
 
 thp_zero_page_alloc_failed
        is incremented if kernel fails to allocate
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 36ef79b..4938def 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -163,7 +163,6 @@ retry:
                count_vm_event(THP_ZERO_PAGE_ALLOC_FAILED);
                return false;
        }
-       count_vm_event(THP_ZERO_PAGE_ALLOC);
        preempt_disable();
        if (cmpxchg(&huge_zero_page, NULL, zero_page)) {
                preempt_enable();
@@ -175,6 +174,7 @@ retry:
        /* We take additional reference here. It will be put back by shrinker */
        atomic_set(&huge_zero_refcount, 2);
        preempt_enable();
+       count_vm_event(THP_ZERO_PAGE_ALLOC);
        return true;
 }