ARM, xtensa: highmem: avoid clobbering non-page aligned memory reservations
author    Ard Biesheuvel <ardb@kernel.org>
          Sat, 31 Oct 2020 09:43:45 +0000 (11:43 +0200)
committer Mike Rapoport <rppt@linux.ibm.com>
          Wed, 4 Nov 2020 08:42:57 +0000 (10:42 +0200)
commit    b9bc36704cca500e2b41be4c5bf615c1d7ddc3ce
tree      16687fb5437c2cc2d7bcd6ea5d41d525a6817b17
parent    4ef8451b332662d004df269d4cdeb7d9f31419b5
ARM, xtensa: highmem: avoid clobbering non-page aligned memory reservations

free_highpages() iterates over the free memblock regions in high
memory, and marks each page as available for the memory management
system.

Until commit cddb5ddf2b76 ("arm, xtensa: simplify initialization of
high memory pages") it rounded the beginning of each region upwards and
the end of each region downwards.

However, after that commit free_highpages() rounds both the beginning and
the end of each region downwards, so we may end up freeing a page that is
memblock_reserve()d, resulting in memory corruption.

Restore the original rounding of the region boundaries to avoid freeing
reserved pages.

Fixes: cddb5ddf2b76 ("arm, xtensa: simplify initialization of high memory pages")
Link: https://lore.kernel.org/r/20201029110334.4118-1-ardb@kernel.org/
Link: https://lore.kernel.org/r/20201031094345.6984-1-rppt@kernel.org
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Co-developed-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
arch/arm/mm/init.c
arch/xtensa/mm/init.c