From: Bjorn Andersson
Date: Thu, 18 Apr 2019 04:29:29 +0000 (-0700)
Subject: arm64: mm: Ensure tail of unaligned initrd is reserved
X-Git-Tag: v5.15~6556^2~1
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=d4d18e3ec6091843f607e8929a56723e28f393a6;p=platform%2Fkernel%2Flinux-starfive.git

arm64: mm: Ensure tail of unaligned initrd is reserved

In the event that the start address of the initrd is not aligned, but
has an aligned size, the base + size will not cover the entire initrd
image and there is a chance that the kernel will corrupt the tail of
the image.

By aligning the end of the initrd to a page boundary and then
subtracting the adjusted start address, the memblock reservation will
cover all pages that contain the initrd.

Fixes: c756c592e442 ("arm64: Utilize phys_initrd_start/phys_initrd_size")
Cc: stable@vger.kernel.org
Acked-by: Will Deacon
Signed-off-by: Bjorn Andersson
Signed-off-by: Catalin Marinas
---

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 6bc1350..7cae155 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -363,7 +363,7 @@ void __init arm64_memblock_init(void)
 		 * Otherwise, this is a no-op
 		 */
 		u64 base = phys_initrd_start & PAGE_MASK;
-		u64 size = PAGE_ALIGN(phys_initrd_size);
+		u64 size = PAGE_ALIGN(phys_initrd_start + phys_initrd_size) - base;
 
 		/*
 		 * We can only add back the initrd memory if we don't end up
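
As a worked illustration of the arithmetic being fixed, below is a minimal
user-space sketch (not kernel code). The PAGE_SIZE/PAGE_MASK/PAGE_ALIGN
definitions mirror a hypothetical 4 KiB page configuration, and the initrd
start address and size are made-up example values. It shows how the old
expression can leave the tail page of an unaligned initrd outside the
reservation, while the fixed expression covers it.

/*
 * Standalone sketch of the initrd reservation arithmetic; the initrd
 * start/size below are hypothetical, chosen so the start is unaligned
 * but the size is an exact multiple of the page size.
 */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE	4096ULL
#define PAGE_MASK	(~(PAGE_SIZE - 1))
#define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & PAGE_MASK)

int main(void)
{
	uint64_t phys_initrd_start = 0x80000800;	/* not page aligned */
	uint64_t phys_initrd_size  = 0x2000;		/* exactly two pages */
	uint64_t initrd_end = phys_initrd_start + phys_initrd_size;

	uint64_t base = phys_initrd_start & PAGE_MASK;

	/* Before the fix: aligns only the size, missing the tail page. */
	uint64_t old_size = PAGE_ALIGN(phys_initrd_size);

	/* After the fix: align the end address, then measure back to base. */
	uint64_t new_size = PAGE_ALIGN(phys_initrd_start + phys_initrd_size) - base;

	printf("initrd ends at       0x%llx\n", (unsigned long long)initrd_end);
	printf("old reservation end  0x%llx (%s)\n",
	       (unsigned long long)(base + old_size),
	       base + old_size >= initrd_end ? "covers tail" : "TAIL UNCOVERED");
	printf("new reservation end  0x%llx (%s)\n",
	       (unsigned long long)(base + new_size),
	       base + new_size >= initrd_end ? "covers tail" : "TAIL UNCOVERED");
	return 0;
}

With these example values the old reservation ends at 0x80002000, one page
short of the initrd's end at 0x80002800, while the fixed reservation ends at
0x80003000 and covers the tail.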