x86/kasan: Map shadow for percpu pages on demand
author	Andrey Ryabinin <ryabinin.a.a@gmail.com>
	Thu, 27 Oct 2022 21:31:04 +0000 (00:31 +0300)
committer	Dave Hansen <dave.hansen@linux.intel.com>
	Thu, 15 Dec 2022 18:37:26 +0000 (10:37 -0800)
commit	3f148f3318140035e87decc1214795ff0755757b
tree	97abaf0e5e5aa0fa4b5bf651a2d6a72041956d7e
parent	30a0b95b1335e12efef89dd78518ed3e4a71a763
x86/kasan: Map shadow for percpu pages on demand

KASAN maps shadow for the entire CPU-entry-area:
  [CPU_ENTRY_AREA_BASE, CPU_ENTRY_AREA_BASE + CPU_ENTRY_AREA_MAP_SIZE]
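
For context (not part of this patch): generic KASAN backs every 8 bytes
of address space with one shadow byte, so the shadow cost grows linearly
with the size of the mapped region. The translation is done roughly as
in the upstream helper in include/linux/kasan.h:

  static inline void *kasan_mem_to_shadow(const void *addr)
  {
          return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
                  + KASAN_SHADOW_OFFSET;
  }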

This will explode once the per-cpu entry areas are randomized, since
randomization increases CPU_ENTRY_AREA_MAP_SIZE to 512 GB and KASAN
fails to allocate shadow for such a big area: at one shadow byte per
8 bytes of address space, that alone would require 64 GB of shadow.

Fix this by allocating KASAN shadow only for the actually used CPU
entry area addresses mapped by cea_map_percpu_pages(); a sketch of the
resulting flow follows below.
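
A minimal sketch of the shape of the fix, assuming a new helper named
kasan_populate_shadow_for_vaddr() that wraps the existing
kasan_populate_shadow() in kasan_init_64.c (illustration only, not the
verbatim diff):

  /* arch/x86/mm/kasan_init_64.c: populate shadow for one used range */
  void __init kasan_populate_shadow_for_vaddr(void *va, size_t size, int nid)
  {
          unsigned long shadow_start, shadow_end;

          shadow_start = (unsigned long)kasan_mem_to_shadow(va);
          shadow_start = round_down(shadow_start, PAGE_SIZE);
          shadow_end = (unsigned long)kasan_mem_to_shadow(va + size);
          shadow_end = round_up(shadow_end, PAGE_SIZE);

          kasan_populate_shadow(shadow_start, shadow_end, nid);
  }

  /* arch/x86/mm/cpu_entry_area.c: cover only the pages actually mapped */
  static void __init
  cea_map_percpu_pages(void *cea_vaddr, void *ptr, int pages, pgprot_t prot)
  {
          phys_addr_t pa = per_cpu_ptr_to_phys(ptr);

          kasan_populate_shadow_for_vaddr(cea_vaddr, pages * PAGE_SIZE,
                                          early_pfn_to_nid(PFN_DOWN(pa)));

          for (; pages; pages--, cea_vaddr += PAGE_SIZE, ptr += PAGE_SIZE)
                  cea_set_pte(cea_vaddr, per_cpu_ptr_to_phys(ptr), prot);
  }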

Thanks to the 0day folks for finding and reporting this to be an issue.

[ dhansen: tweak changelog since this will get committed before peterz's
   actual cpu-entry-area randomization ]

Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Tested-by: Yujie Liu <yujie.liu@intel.com>
Cc: kernel test robot <yujie.liu@intel.com>
Link: https://lore.kernel.org/r/202210241508.2e203c3d-yujie.liu@intel.com
arch/x86/include/asm/kasan.h
arch/x86/mm/cpu_entry_area.c
arch/x86/mm/kasan_init_64.c