powerpc/powernv: IMC fix out of bounds memory access at shutdown
author Nicholas Piggin <npiggin@gmail.com>
Tue, 13 Feb 2018 07:45:11 +0000 (17:45 +1000)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thu, 26 Apr 2018 09:02:20 +0000 (11:02 +0200)
[ Upstream commit e7bde88cdb4f0e432398a7d29ca2a15d2c18952a ]

The OPAL IMC driver's shutdown handler disables nest PMU counters by
walking nodes and taking the first CPU out of their cpumask, which is
used to index into the paca (get_hard_smp_processor_id()). This does
not always do the right thing; in particular, for CPU-less nodes
cpumask_first() returns NR_CPUS, which overruns the paca array and
dereferences random memory.
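
To illustrate, here is a condensed sketch of the pre-fix
disable_nest_pmu_counters() loop (locking elided, comments added here
to point at the problem; see the hunk at the end of this message for
the actual code):

  static void disable_nest_pmu_counters(void)
  {
          int nid, cpu;
          const struct cpumask *l_cpumask;

          for_each_online_node(nid) {
                  l_cpumask = cpumask_of_node(nid);
                  /*
                   * A CPU-less (e.g. memory-only) node has an empty
                   * cpumask, so cpumask_first() returns NR_CPUS ...
                   */
                  cpu = cpumask_first(l_cpumask);
                  /*
                   * ... and get_hard_smp_processor_id(cpu) then reads
                   * past the end of the paca array and hands OPAL a
                   * junk hardware CPU id.
                   */
                  opal_imc_counters_stop(OPAL_IMC_COUNTERS_NEST,
                                         get_hard_smp_processor_id(cpu));
          }
  }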

Fix it by being more careful about checking the returned CPU, and by
only using online CPUs. It's not clear this shutdown code makes sense
after commit 885dcd709b ("powerpc/perf: Add nest IMC PMU support"), but
this should not make things worse.
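
For reference, the loop after applying the hunk below (again with the
locking elided and the comment added here):

          for_each_node_with_cpus(nid) {
                  l_cpumask = cpumask_of_node(nid);
                  /*
                   * cpumask_first_and() returns >= nr_cpu_ids when the
                   * node has no online CPUs; skip such nodes instead of
                   * indexing the paca with an out-of-range value.
                   */
                  cpu = cpumask_first_and(l_cpumask, cpu_online_mask);
                  if (cpu >= nr_cpu_ids)
                          continue;
                  opal_imc_counters_stop(OPAL_IMC_COUNTERS_NEST,
                                         get_hard_smp_processor_id(cpu));
          }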

Currently the bug causes us to call OPAL with a junk CPU number. A
separate patch in development to change the way pacas are allocated
escalates this bug into a crash:

  Unable to handle kernel paging request for data at address 0x2a21af1eeb000076
  Faulting instruction address: 0xc0000000000a5468
  Oops: Kernel access of bad area, sig: 11 [#1]
  ...
  NIP opal_imc_counters_shutdown+0x148/0x1d0
  LR  opal_imc_counters_shutdown+0x134/0x1d0
  Call Trace:
   opal_imc_counters_shutdown+0x134/0x1d0 (unreliable)
   platform_drv_shutdown+0x44/0x60
   device_shutdown+0x1f8/0x350
   kernel_restart_prepare+0x54/0x70
   kernel_restart+0x28/0xc0
   SyS_reboot+0x1d0/0x2c0
   system_call+0x58/0x6c

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

diff --git a/arch/powerpc/platforms/powernv/opal-imc.c b/arch/powerpc/platforms/powernv/opal-imc.c
index b150f4d..6914b28 100644
--- a/arch/powerpc/platforms/powernv/opal-imc.c
+++ b/arch/powerpc/platforms/powernv/opal-imc.c
@@ -126,9 +126,11 @@ static void disable_nest_pmu_counters(void)
        const struct cpumask *l_cpumask;
 
        get_online_cpus();
-       for_each_online_node(nid) {
+       for_each_node_with_cpus(nid) {
                l_cpumask = cpumask_of_node(nid);
-               cpu = cpumask_first(l_cpumask);
+               cpu = cpumask_first_and(l_cpumask, cpu_online_mask);
+               if (cpu >= nr_cpu_ids)
+                       continue;
                opal_imc_counters_stop(OPAL_IMC_COUNTERS_NEST,
                                       get_hard_smp_processor_id(cpu));
        }