arm64/mm: define arch_get_mappable_range() 36/281536/1
author	Anshuman Khandual <anshuman.khandual@arm.com>
	Fri, 26 Feb 2021 01:17:37 +0000 (17:17 -0800)
committer	Seung-Woo Kim <sw0312.kim@samsung.com>
	Tue, 20 Sep 2022 02:39:43 +0000 (11:39 +0900)
This overrides arch_get_mappable_range() on the arm64 platform, which will
be used with the recently added generic framework.  It drops
inside_linear_region() and the subsequent check in arch_add_memory(), which
are no longer required.  It also adds a VM_BUG_ON() check to ensure that
mhp_range_allowed() has already been called.

Link: https://lkml.kernel.org/r/1612149902-7867-3-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Pankaj Gupta <pankaj.gupta@cloud.ionos.com>
Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Cc: teawater <teawaterz@linux.alibaba.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
[port kfence feature to rpi-5.10.95]
Signed-off-by: Sung-hun Kim <sfoon.kim@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Seung-Woo Kim <sw0312.kim@samsung.com>
Change-Id: I6c68c5b0382419a98a3ae03045790e0078e47898

arch/arm64/mm/mmu.c

index 219f995..c68e444 100644
@@ -1456,8 +1456,9 @@ static void __remove_pgd_mapping(pgd_t *pgdir, unsigned long start, u64 size)
        free_empty_tables(start, end, PAGE_OFFSET, PAGE_END);
 }
 
-static bool inside_linear_region(u64 start, u64 size)
+struct range arch_get_mappable_range(void)
 {
+       struct range mhp_range;
        u64 start_linear_pa = __pa(_PAGE_OFFSET(vabits_actual));
        u64 end_linear_pa = __pa(PAGE_END - 1);
 
@@ -1481,7 +1482,10 @@ static bool inside_linear_region(u64 start, u64 size)
         * range which can be mapped inside this linear mapping range, must
         * also be derived from its end points.
         */
-       return start >= start_linear_pa && (start + size - 1) <= end_linear_pa;
+       mhp_range.start = start_linear_pa;
+       mhp_range.end = end_linear_pa;
+
+       return mhp_range;
 }
 
 int arch_add_memory(int nid, u64 start, u64 size,
@@ -1489,11 +1493,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
 {
        int ret, flags = 0;
 
-       if (!inside_linear_region(start, size)) {
-               pr_err("[%llx %llx] is outside linear mapping region\n", start, start + size);
-               return -EINVAL;
-       }
-
+       VM_BUG_ON(!mhp_range_allowed(start, size, true));
        if (rodata_full || debug_pagealloc_enabled())
                flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;