mm: kmsan: use helper function page_size()
author		ZhangPeng <zhangpeng362@huawei.com>
Thu, 27 Jul 2023 01:16:10 +0000 (09:16 +0800)
committer	Andrew Morton <akpm@linux-foundation.org>
Mon, 21 Aug 2023 20:37:29 +0000 (13:37 -0700)
Patch series "minor cleanups for kmsan".

Use helper functions and macros to improve code readability.  No functional
change intended.

This patch (of 3):

Use the helper function page_size() to improve code readability.  No
functional change intended.

Link: https://lkml.kernel.org/r/20230727011612.2721843-1-zhangpeng362@huawei.com
Link: https://lkml.kernel.org/r/20230727011612.2721843-2-zhangpeng362@huawei.com
Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
Reviewed-by: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Marco Elver <elver@google.com>
Cc: Nanyong Sun <sunnanyong@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/kmsan/hooks.c
mm/kmsan/shadow.c

mm/kmsan/hooks.c
index ec0da72..4e3c3e6 100644
@@ -117,7 +117,7 @@ void kmsan_kfree_large(const void *ptr)
        page = virt_to_head_page((void *)ptr);
        KMSAN_WARN_ON(ptr != page_address(page));
        kmsan_internal_poison_memory((void *)ptr,
-                                    PAGE_SIZE << compound_order(page),
+                                    page_size(page),
                                     GFP_KERNEL,
                                     KMSAN_POISON_CHECK | KMSAN_POISON_FREE);
        kmsan_leave_runtime();
mm/kmsan/shadow.c
index b8bb95e..c7de991 100644
@@ -210,7 +210,7 @@ void kmsan_free_page(struct page *page, unsigned int order)
                return;
        kmsan_enter_runtime();
        kmsan_internal_poison_memory(page_address(page),
-                                    PAGE_SIZE << compound_order(page),
+                                    page_size(page),
                                     GFP_KERNEL,
                                     KMSAN_POISON_CHECK | KMSAN_POISON_FREE);
        kmsan_leave_runtime();