x86/mm: Use PAGE_ALIGNED(x) instead of IS_ALIGNED(x, PAGE_SIZE)
author	Fanjun Kong <bh1scw@gmail.com>
Thu, 26 May 2022 14:20:39 +0000 (22:20 +0800)
committer	Ingo Molnar <mingo@kernel.org>
Fri, 27 May 2022 10:19:56 +0000 (12:19 +0200)
<linux/mm.h> already provides the PAGE_ALIGNED() macro. Use it
instead of open-coding IS_ALIGNED() with PAGE_SIZE passed explicitly.

No change in functionality.
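
For reference, a minimal standalone sketch of how the two macros
relate (IS_ALIGNED() simplified from include/linux/align.h,
PAGE_ALIGNED() from include/linux/mm.h; the PAGE_SIZE value below is
illustrative only, it normally comes from asm/page.h):

    #include <stdio.h>

    #define PAGE_SIZE           4096UL
    /* Bitmask alignment check; 'a' must be a power of two. */
    #define IS_ALIGNED(x, a)    (((x) & ((typeof(x))(a) - 1)) == 0)
    /* PAGE_ALIGNED() is IS_ALIGNED() specialized to PAGE_SIZE. */
    #define PAGE_ALIGNED(addr)  IS_ALIGNED((unsigned long)(addr), PAGE_SIZE)

    int main(void)
    {
            unsigned long aligned   = 0x200000UL;      /* page-aligned */
            unsigned long unaligned = aligned + 0x123; /* not aligned  */

            /* Both forms yield identical results: 1 1, then 0 0. */
            printf("%d %d\n", PAGE_ALIGNED(aligned),
                              IS_ALIGNED(aligned, PAGE_SIZE));
            printf("%d %d\n", PAGE_ALIGNED(unaligned),
                              IS_ALIGNED(unaligned, PAGE_SIZE));
            return 0;
    }

PAGE_ALIGNED(x) expands to exactly the IS_ALIGNED(x, PAGE_SIZE) check
it replaces, which is why the conversion is purely cosmetic.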

[ mingo: Tweak changelog. ]

Signed-off-by: Fanjun Kong <bh1scw@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20220526142038.1582839-1-bh1scw@gmail.com
arch/x86/mm/init_64.c

index 61d0ab1..8779d6b 100644
@@ -1240,8 +1240,8 @@ remove_pagetable(unsigned long start, unsigned long end, bool direct,
 void __ref vmemmap_free(unsigned long start, unsigned long end,
                struct vmem_altmap *altmap)
 {
-       VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
-       VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));
+       VM_BUG_ON(!PAGE_ALIGNED(start));
+       VM_BUG_ON(!PAGE_ALIGNED(end));
 
        remove_pagetable(start, end, false, altmap);
 }
@@ -1605,8 +1605,8 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 {
        int err;
 
-       VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
-       VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));
+       VM_BUG_ON(!PAGE_ALIGNED(start));
+       VM_BUG_ON(!PAGE_ALIGNED(end));
 
        if (end - start < PAGES_PER_SECTION * sizeof(struct page))
                err = vmemmap_populate_basepages(start, end, node, NULL);