mm/hugetlb: use helper huge_page_order and pages_per_huge_page
Author:     Miaohe Lin <linmiaohe@huawei.com>
AuthorDate: Wed, 24 Feb 2021 20:07:01 +0000 (12:07 -0800)
Commit:     Greg Kroah-Hartman <gregkh@linuxfoundation.org>
CommitDate: Wed, 14 Jul 2021 14:56:51 +0000 (16:56 +0200)
[ Upstream commit c78a7f3639932c48b4e1d329fc80fd26aa1a2fa3 ]

Since commit a5516438959d ("hugetlb: modular state for hugetlb page
size"), we can use huge_page_order() to access hstate->order and
pages_per_huge_page() to fetch the number of pages per huge page.  But
gather_bootmem_prealloc() was never converted to use them.

Link: https://lkml.kernel.org/r/20210114114435.40075-1-linmiaohe@huawei.com
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d4f89c2..991b5cd 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2500,7 +2500,7 @@ static void __init gather_bootmem_prealloc(void)
                struct hstate *h = m->hstate;
 
                WARN_ON(page_count(page) != 1);
-               prep_compound_huge_page(page, h->order);
+               prep_compound_huge_page(page, huge_page_order(h));
                WARN_ON(PageReserved(page));
                prep_new_huge_page(h, page, page_to_nid(page));
                put_page(page); /* free it into the hugepage allocator */
@@ -2512,7 +2512,7 @@ static void __init gather_bootmem_prealloc(void)
                 * side-effects, like CommitLimit going negative.
                 */
                if (hstate_is_gigantic(h))
-                       adjust_managed_page_count(page, 1 << h->order);
+                       adjust_managed_page_count(page, pages_per_huge_page(h));
                cond_resched();
        }
 }