Nishad Kamdar [Tue, 16 Apr 2019 15:28:57 +0000 (20:58 +0530)]
powerpc: Use the correct style for SPDX License Identifier
This patch corrects the SPDX License Identifier style
in the powerpc Hardware Architecture related files.
Suggested-by: Joe Perches <joe@perches.com>
Signed-off-by: Nishad Kamdar <nishadkamdar@gmail.com>
Acked-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Stewart Smith [Tue, 28 May 2019 03:29:25 +0000 (13:29 +1000)]
powerpc/powernv-eeh: Concisely describe what this file does
If the previous comment made sense, continue debugging or call your
doctor immediately.
Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Krzysztof Kozlowski [Tue, 4 Jun 2019 08:00:33 +0000 (10:00 +0200)]
powerpc/configs: Remove useless UEVENT_HELPER_PATH
Remove CONFIG_UEVENT_HELPER_PATH because:
1. It is disabled since commit 1be01d4a5714 ("driver: base: Disable
CONFIG_UEVENT_HELPER by default") as its dependency (UEVENT_HELPER) was
made default to 'n',
2. It is not recommended (help message: "This should not be used today
[...] creates a high system load") and was kept only for ancient
userland,
3. Certain userland specifically requests it to be disabled (systemd
README: "Legacy hotplug slows down the system and confuses udev").
Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Acked-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christian Lamparter [Sat, 15 Jun 2019 15:23:13 +0000 (17:23 +0200)]
powerpc/4xx/uic: clear pending interrupt after irq type/pol change
When testing out gpio-keys with a button, a spurious
interrupt (and therefore a key press or release event)
gets triggered as soon as the driver enables the irq
line for the first time.
This patch clears any potentially bogus interrupt that was generated
by the switching of the associated irq's type and polarity.
Signed-off-by: Christian Lamparter <chunkeey@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Geert Uytterhoeven [Mon, 17 Jun 2019 14:52:04 +0000 (16:52 +0200)]
selftests/powerpc: Add missing newline at end of file
"git diff" says:
\ No newline at end of file
after modifying the file.
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
[mpe: Rebase since addition of another test]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Suraj Jitindar Singh [Wed, 6 Mar 2019 01:10:38 +0000 (12:10 +1100)]
powerpc: Add barrier_nospec to raw_copy_in_user()
Commit ddf35cf3764b ("powerpc: Use barrier_nospec in copy_from_user()")
added barrier_nospec before loading from user-controlled pointers. The
intention was to order the load from the potentially user-controlled
pointer vs a previous branch based on an access_ok() check or similar.
In order to achieve the same result, add a barrier_nospec to the
raw_copy_in_user() function before loading from such a user-controlled
pointer.
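As a sketch of the resulting shape (simplified from the powerpc uaccess code, not a verbatim quote):

  static inline unsigned long
  raw_copy_in_user(void __user *to, const void __user *from, unsigned long n)
  {
          /* order the load below vs the preceding access_ok() branch */
          barrier_nospec();
          return __copy_tofrom_user(to, from, n);
  }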
Fixes: ddf35cf3764b ("powerpc: Use barrier_nospec in copy_from_user()")
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Neuling [Thu, 20 Jun 2019 06:00:40 +0000 (16:00 +1000)]
KVM: PPC: Book3S HV: Fix CR0 setting in TM emulation
When emulating tsr, treclaim and trechkpt, we incorrectly set CR0. The
code currently sets:
CR0 <- 00 || MSR[TS]
but according to the ISA it should be:
CR0 <- 0 || MSR[TS] || 0
This fixes the bit shift to put the bits in the correct location.
This is a data integrity issue as CR0 is corrupted.
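To make the misplaced bits concrete, a hypothetical sketch (helper name invented):

  /* CR0 is a 4-bit field; the ISA wants the 2-bit MSR[TS] value in the
   * middle bits (0 || TS || 0), not the low bits (00 || TS). */
  static unsigned int tm_emul_cr0(unsigned int ts)  /* ts = MSR[TS] */
  {
          return (ts & 0x3) << 1;   /* 0 || MSR[TS] || 0 */
  }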
Fixes: 4bb3c7a0208f ("KVM: PPC: Book3S HV: Work around transactional memory bugs in POWER9")
Cc: stable@vger.kernel.org # v4.17+
Tested-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Alexey Kardashevskiy [Fri, 28 Jun 2019 06:53:00 +0000 (16:53 +1000)]
powerpc/powernv: Fix stale iommu table base after VFIO
The powernv platform uses @dma_iommu_ops for non-bypass DMA. These ops
need an iommu_table pointer which is stored in
dev->archdata.iommu_table_base. It is initialized during
pcibios_setup_device() which handles boot time devices. However when a
device is taken from the system in order to pass it through, the
default IOMMU table is destroyed but the pointer in the device is not
updated; also when a device is returned back to the system, a new
table pointer is not stored in dev->archdata.iommu_table_base either.
So when a just-returned device tries to use the IOMMU, it crashes on
accessing a stale iommu_table or its members.
This calls set_iommu_table_base() when the default window is created.
Note it used to be there before but was wrongly removed (see "fixes").
The problem did not appear earlier because these days most devices
simply use bypass.
This adds set_iommu_table_base(NULL) when a device is taken from the
system to make it clear that IOMMU DMA cannot be used past that point.
Fixes: c4e9d3c1e65a ("powerpc/powernv/pseries: Rework device adding to IOMMU groups")
Cc: stable@vger.kernel.org # v5.0+
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Alexey Kardashevskiy [Wed, 26 Jun 2019 02:37:46 +0000 (12:37 +1000)]
powerpc/pci/of: Parse unassigned resources
The pseries platform uses the PCI_PROBE_DEVTREE method of PCI probing
which reads "assigned-addresses" of every PCI device and initializes
the device resources. However if the property is missing or zero sized,
then there is no fallback of any kind and the PCI resources remain
undiscovered, i.e. the pdev->resource[] array remains empty.
This adds a fallback which parses the "reg" property in pretty much the
same way, except it marks resources as "unset", which later makes Linux
assign proper addresses to those resources.
This has an effect when:
1. a hypervisor failed to assign any resource for a device;
2. /chosen/linux,pci-probe-only=0 is in the DT so the system may try
assigning a resource.
Neither is likely to happen under PowerVM.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Alexey Kardashevskiy [Tue, 7 May 2019 06:25:59 +0000 (16:25 +1000)]
powerpc/pseries/dma: Enable SWIOTLB
So far the pseries platform has always been using an IOMMU, making
SWIOTLB unnecessary. Now we want secure guests, which means devices can
only access certain areas of guest physical memory; we are going to
use SWIOTLB for this purpose.
This allows SWIOTLB for pseries. By default there is no change in
behavior.
This enables SWIOTLB when the "swiotlb" kernel parameter is set to
"force".
With the SWIOTLB enabled, the kernel creates a directly mapped DMA
window (using the usual DDW mechanism) and implements SWIOTLB on top
of that.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Alexey Kardashevskiy [Tue, 7 May 2019 06:25:58 +0000 (16:25 +1000)]
powerpc/pseries/dma: Allow SWIOTLB
Commit 8617a5c5bc00 ("powerpc/dma: handle iommu bypass in dma_iommu_ops")
merged direct DMA ops into the IOMMU DMA ops, allowing
SWIOTLB as well but only for mapping; the unmapping and bouncing parts
were left unmodified.
This adds missing direct unmapping calls to .unmap_page() and
.unmap_sg().
This adds missing sync callbacks and directs them to the direct DMA
hooks.
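A sketch close in spirit to the change (simplified; consult the patch for the real code): the unmap path mirrors the bypass decision made at map time.

  static void dma_iommu_unmap_page(struct device *dev, dma_addr_t handle,
                                   size_t size, enum dma_data_direction dir,
                                   unsigned long attrs)
  {
          if (dma_iommu_map_bypass(dev, attrs))
                  /* mapped via the direct ops, so unmap the same way */
                  dma_direct_unmap_page(dev, handle, size, dir, attrs);
          else
                  iommu_unmap_page(get_iommu_table_base(dev), handle,
                                   size, dir, attrs);
  }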
Fixes: 8617a5c5bc00 ("powerpc/dma: handle iommu bypass in dma_iommu_ops")
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christoph Hellwig [Sat, 29 Jun 2019 08:03:59 +0000 (10:03 +0200)]
powerpc: remove device_to_mask()
Use the dma_get_mask() helper from dma-mapping.h instead, as they are
functionally identical.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Neuling [Tue, 4 Jun 2019 03:00:37 +0000 (13:00 +1000)]
powerpc: Fix compile issue with force DAWR
If you compile with KVM but without CONFIG_HAVE_HW_BREAKPOINT you fail
at linking with:
arch/powerpc/kvm/book3s_hv_rmhandlers.o:(.text+0x708): undefined reference to `dawr_force_enable'
This was caused by commit c1fe190c0672 ("powerpc: Add force enable of
DAWR on P9 option").
This moves a bunch of code around to fix this. It moves a lot of the
DAWR code into a new file and creates a new CONFIG_PPC_DAWR to enable
compiling it.
Fixes: c1fe190c0672 ("powerpc: Add force enable of DAWR on P9 option")
Signed-off-by: Michael Neuling <mikey@neuling.org>
[mpe: Minor formatting in set_dawr()]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Mathieu Malaterre [Tue, 4 Jun 2019 03:00:36 +0000 (13:00 +1000)]
powerpc: silence a -Wcast-function-type warning in dawr_write_file_bool
In commit c1fe190c0672 ("powerpc: Add force enable of DAWR on P9
option") the following piece of code was added:
smp_call_function((smp_call_func_t)set_dawr, &null_brk, 0);
Since GCC 8 this triggers the following warning about incompatible
function types:
arch/powerpc/kernel/hw_breakpoint.c:408:21: error: cast between incompatible function types from 'int (*)(struct arch_hw_breakpoint *)' to 'void (*)(void *)' [-Werror=cast-function-type]
Since the warning is there for a reason, and should not be hidden behind
a cast, provide an intermediate callback function to avoid the warning.
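The pattern, sketched (wrapper name hypothetical): hand smp_call_function() a callback with the exact void (*)(void *) signature and do the conversion inside it.

  static void set_dawr_cb(void *info)
  {
          /* the cast now happens on a data pointer, not a function */
          set_dawr((struct arch_hw_breakpoint *)info);
  }

  smp_call_function(set_dawr_cb, &null_brk, 0);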
Fixes: c1fe190c0672 ("powerpc: Add force enable of DAWR on P9 option")
Suggested-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Mathieu Malaterre <malat@debian.org>
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Sun, 23 Jun 2019 10:41:52 +0000 (20:41 +1000)]
powerpc/64s/radix: keep kernel ERAT over local process/guest invalidates
ISA v3.0 radix modes provide SLBIA variants which can invalidate ERAT
for effPID!=0 or for effLPID!=0, which allows user and guest
invalidations to retain kernel/host ERAT entries.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Sun, 23 Jun 2019 10:41:51 +0000 (20:41 +1000)]
powerpc/64s: Rename PPC_INVALIDATE_ERAT to PPC_ISA_3_0_INVALIDATE_ERAT
This makes it clear to the caller that it can only be used on POWER9
and later CPUs.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Use "ISA_3_0" rather than "ARCH_300"]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Fri, 28 Jun 2019 06:33:22 +0000 (16:33 +1000)]
powerpc/64s/exception: simplify hmi control flow
Branch to the relocated 0xc000 address early (still in real mode), to
simplify subsequent branches. Have the virt mode handler avoid just
'windup' and redo the exception from scratch, rather than branching
back to the trampoline.
Rearrange the stack setup instruction location to match the system
reset handler (e.g., right before EXCEPTION_PROLOG_COMMON).
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Fri, 28 Jun 2019 06:33:21 +0000 (16:33 +1000)]
powerpc/64s/exception: hmi remove special case macro
No code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Fri, 28 Jun 2019 06:33:20 +0000 (16:33 +1000)]
powerpc/64s/exception: sreset move trampoline ahead of common code
Follow convention and move the trampoline ahead of the common code.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Fri, 28 Jun 2019 06:33:19 +0000 (16:33 +1000)]
powerpc/64s/exception: optimise system_reset for idle, clean up non-idle case
The idle wake up code in the system reset interrupt is not very
optimal. There are two requirements: perform idle wake up quickly;
and save everything including CFAR for non-idle interrupts, with
no performance requirement.
The problem with placing the idle test in the middle of the handler
and using the normal handler code to save CFAR is that it's quite
costly (e.g., mfcfar is serialising, speculative workarounds get
applied, SRR1 has to be reloaded, etc). It also prevents the standard
interrupt handler boilerplate being used.
This pain can be avoided by using a dedicated idle interrupt handler
at the start of the interrupt handler, which restores all registers
back to the way they were in case it was not an idle wake up. CFAR
is preserved without saving it before the non-idle case by making that
the fall-through, and idle is a taken branch.
Performance seems to be in the noise, but possibly around 0.5% faster;
the executed instructions certainly look better. The bigger benefit is
being able to drop in standard interrupt handlers after the idle code,
which helps with subsequent cleanup and consolidation.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Fixup BE by using DOTSYM for idle_return_gpr_loss call]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Fri, 28 Jun 2019 06:33:18 +0000 (16:33 +1000)]
powerpc/64s/exception: remove bad stack branch
The bad stack test in interrupt handlers has a few problems. For
performance it is taken in the common case, which is a fetch bubble
and a waste of i-cache.
For code development and maintenance, it requires yet another stack
frame setup routine, and that constrains all exception handlers to
follow the same register save pattern which inhibits future
optimisation.
Remove the test/branch and replace it with a trap. Teach the program
check handler to use the emergency stack for this case.
This does not result in quite so nice a message, however the SRR0 and
SRR1 of the crashed interrupt can be seen in r11 and r12, as is the
original r1 (adjusted by INT_FRAME_SIZE). These are the most important
parts for debugging the issue.
The original r9-r12 and cr0 are lost, which is the main downside.
kernel BUG at linux/arch/powerpc/kernel/exceptions-64s.S:847!
Oops: Exception in kernel mode, sig: 5 [#1]
BE SMP NR_CPUS=2048 NUMA PowerNV
Modules linked in:
CPU: 0 PID: 1 Comm: swapper/0 Not tainted
NIP:  c000000000009108 LR: c000000000cadbcc CTR: c0000000000090f0
REGS: c0000000fffcbd70 TRAP: 0700   Not tainted
MSR:  9000000000021032 <SF,HV,ME,IR,DR,RI>  CR: 28222448  XER: 20040000
CFAR: c000000000009100 IRQMASK: 0
GPR00: 000000000000003d fffffffffffffd00 c0000000018cfb00 c0000000f02b3166
GPR04: fffffffffffffffd 0000000000000007 fffffffffffffffb 0000000000000030
GPR08: 0000000000000037 0000000028222448 0000000000000000 c000000000ca8de0
GPR12: 9000000002009032 c000000001ae0000 c000000000010a00 0000000000000000
GPR16: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
GPR20: c0000000f00322c0 c000000000f85200 0000000000000004 ffffffffffffffff
GPR24: fffffffffffffffe 0000000000000000 0000000000000000 000000000000000a
GPR28: 0000000000000000 0000000000000000 c0000000f02b391c c0000000f02b3167
NIP [c000000000009108] decrementer_common+0x18/0x160
LR [c000000000cadbcc] .vsnprintf+0x3ec/0x4f0
Call Trace:
Instruction dump:
996d098a 994d098b 38610070 480246ed 48005518 60000000 38200000 718a4000
7c2a0b78 3821fd00 41c20008 e82d0970 <0981fd00> f92101a0 f9610170 f9810178
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Fri, 28 Jun 2019 05:33:32 +0000 (15:33 +1000)]
powerpc/tm: update comment about interrupt re-entrancy
Since the system reset interrupt began to use its own stack, and
machine check interrupts have done so for some time, r1 can be
changed without clearing MSR[RI], provided no other interrupts
(including SLB misses) are taken.
MSR[RI] does have to be cleared when using SCRATCH0, however.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Fri, 28 Jun 2019 05:33:31 +0000 (15:33 +1000)]
powerpc/64s/exception: move SET_SCRATCH0 into EXCEPTION_PROLOG_0
No generated code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Fri, 28 Jun 2019 05:33:30 +0000 (15:33 +1000)]
powerpc/64s/exception: denorm handler use standard scratch save macro
Although the 0x1500 interrupt only applies to bare metal, it is better
to just use the standard macro for scratch save.
Runtime code path remains unchanged (due to instruction patching).
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Fri, 28 Jun 2019 05:33:29 +0000 (15:33 +1000)]
powerpc/64s/exception: machine check use standard macros to save dar/dsisr
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Fri, 28 Jun 2019 05:33:28 +0000 (15:33 +1000)]
powerpc/64s/exception: add dar and dsisr options to exception macro
Some exception entry points require DAR and/or DSISR to be saved into the
paca exception save area. Add options to the standard exception
macros for these.
Generated code changes slightly due to code structure.
- 554: a6 02 72 7d mfdsisr r11
- 558: a8 00 4d f9 std r10,168(r13)
- 55c: b0 00 6d 91 stw r11,176(r13)
+ 554: a8 00 4d f9 std r10,168(r13)
+ 558: a6 02 52 7d mfdsisr r10
+ 55c: b0 00 4d 91 stw r10,176(r13)
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Fri, 28 Jun 2019 05:33:27 +0000 (15:33 +1000)]
powerpc/64s/exception: use common macro for windup
No generated code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Fri, 28 Jun 2019 05:33:26 +0000 (15:33 +1000)]
powerpc/64s/exception: shuffle windup code around
Restore all SPRs and CR up-front, as these are longer-latency
instructions. Move register restores around to maximise pairs of
adjacent loads (e.g., restore r0 next to r1).
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Fri, 28 Jun 2019 05:33:25 +0000 (15:33 +1000)]
powerpc/64s/exception: simplify hmi windup code
Duplicate the hmi windup code for both cases, rather than putting a
special-case branch in the middle of it. Remove the unused label. This
helps with later code consolidation.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Fri, 28 Jun 2019 05:33:24 +0000 (15:33 +1000)]
powerpc/64s/exception: move machine check windup in_mce handling
Move the in_mce decrement earlier, before registers are restored (but
still after RI=0). This helps with later consolidation.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Fri, 28 Jun 2019 05:33:23 +0000 (15:33 +1000)]
powerpc/64s/exception: windup use r9 consistently to restore SPRs
Trivial code change, r3->r9.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Fri, 28 Jun 2019 05:33:22 +0000 (15:33 +1000)]
powerpc/64s/exception: mtmsrd L=1 cleanup
All supported 64s CPUs support the mtmsrd L=1 instruction, so a
cleanup can be made in the sreset and mce handlers.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Fri, 28 Jun 2019 05:33:21 +0000 (15:33 +1000)]
powerpc/64s/exception: avoid SPR RAW scoreboard stall in real mode entry
Move SPR reads ahead of writes. Real mode entry that is not a KVM
guest is rare these days, but bad practice propagates.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Fri, 28 Jun 2019 05:33:20 +0000 (15:33 +1000)]
powerpc/64s/exception: clean up system call entry
syscall / hcall entry unnecessarily differs between KVM and non-KVM
builds. Move the SMT priority instruction to the same location
(after INTERRUPT_TO_KERNEL).
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Sat, 22 Jun 2019 13:15:35 +0000 (23:15 +1000)]
powerpc/64s/exception: move paca save area offsets into exception-64s.S
No generated code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Sat, 22 Jun 2019 13:15:34 +0000 (23:15 +1000)]
powerpc/64s/exception: remove pointless EXCEPTION_PROLOG macro indirection
No generated code change. Final vmlinux is changed only due to change
in bug table line numbers.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Sat, 22 Jun 2019 13:15:33 +0000 (23:15 +1000)]
powerpc/64s/exception: generate regs clear instructions using .rept
No generated code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Sat, 22 Jun 2019 13:15:32 +0000 (23:15 +1000)]
powerpc/64s/exception: fix indenting irregularities
Generally, macros that result in instructions being expanded are
indented by a tab, and those that don't have no indent. Fix the
obvious cases that go contrary to style.
No generated code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Sat, 22 Jun 2019 13:15:31 +0000 (23:15 +1000)]
powerpc/64s/exception: use a gas macro for system call handler code
No generated code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Sat, 22 Jun 2019 13:15:30 +0000 (23:15 +1000)]
powerpc/64s/exception: remove unused BRANCH_TO_COMMON
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Sat, 22 Jun 2019 13:15:29 +0000 (23:15 +1000)]
powerpc/64s/exception: remove __BRANCH_TO_KVM
No generated code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Sat, 22 Jun 2019 13:15:28 +0000 (23:15 +1000)]
powerpc/64s/exception: move head-64.h code to exception-64s.S where it is used
No generated code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Sat, 22 Jun 2019 13:15:27 +0000 (23:15 +1000)]
powerpc/64s/exception: move exception-64s.h code to exception-64s.S where it is used
No generated code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Sat, 22 Jun 2019 13:15:26 +0000 (23:15 +1000)]
powerpc/64s/exception: move KVM related code together
No generated code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Sat, 22 Jun 2019 13:15:25 +0000 (23:15 +1000)]
powerpc/64s/exception: remove STD_EXCEPTION_COMMON variants
These are only called in one place each.
No generated code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Sat, 22 Jun 2019 13:15:24 +0000 (23:15 +1000)]
powerpc/64s/exception: move EXCEPTION_PROLOG_2* to a more logical place
No generated code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Sat, 22 Jun 2019 13:15:23 +0000 (23:15 +1000)]
powerpc/64s/exception: improve 0x500 handler code
After the previous cleanup, it becomes possible to consolidate some
common code outside the runtime alternate patching. Also remove
unused labels.
This results in some code change, but the runtime instruction
sequence is unchanged.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Sat, 22 Jun 2019 13:15:22 +0000 (23:15 +1000)]
powerpc/64s/exception: unwind exception-64s.h macros
Many of these macros just specify 1-4 lines and are only used a
few times each at most, often just once. Remove this indirection.
No generated code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Sat, 22 Jun 2019 13:15:21 +0000 (23:15 +1000)]
powerpc/64s/exception: Move EXCEPTION_COMMON additions into callers
More cases of code insertion via macros that do not add a great
deal. All the additions have to be specified in the macro arguments,
so they can just as well go after the macro.
No generated code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Sat, 22 Jun 2019 13:15:20 +0000 (23:15 +1000)]
powerpc/64s/exception: Move EXCEPTION_COMMON handler and return branches into callers
The aim is to reduce the amount of indirection it takes to get through
the exception handler macros, particularly where it provides little
code sharing.
No generated code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Sat, 22 Jun 2019 13:15:19 +0000 (23:15 +1000)]
powerpc/64s/exception: Make EXCEPTION_PROLOG_0 a gas macro for consistency with others
No generated code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Sat, 22 Jun 2019 13:15:18 +0000 (23:15 +1000)]
powerpc/64s/exception: KVM handler can set the HSRR trap bit
Move the KVM trap HSRR bit into the KVM handler, which can be
conditionally applied when the hsrr parameter is set.
No generated code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Sat, 22 Jun 2019 13:15:17 +0000 (23:15 +1000)]
powerpc/64s/exception: merge KVM handler and skip variants
Conditionally expand the skip case if it is specified.
No generated code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Sat, 22 Jun 2019 13:15:16 +0000 (23:15 +1000)]
powerpc/64s/exception: consolidate maskable and non-maskable prologs
Conditionally expand the soft-masking test if a mask is passed in.
No generated code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Sat, 22 Jun 2019 13:15:15 +0000 (23:15 +1000)]
powerpc/64s/exception: remove the "extra" macro parameter
Rather than pass in the soft-masking and KVM tests via a macro that is
passed to another macro to expand it, switch to using gas macros and
conditionally expand the soft-masking and KVM tests.
The system reset with its idle test is open coded as it is a one-off.
No generated code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Sat, 22 Jun 2019 13:15:14 +0000 (23:15 +1000)]
powerpc/64s/exception: fix sreset KVM test code
The sreset handler KVM test theoretically should not depend on P7.
In practice KVM now only supports P7 and up, so this is not a real bug
fix, but the change is made now so the quirk is not propagated through
cleanup patches.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Sat, 22 Jun 2019 13:15:13 +0000 (23:15 +1000)]
powerpc/64s/exception: move and tidy EXCEPTION_PROLOG_2 variants
- Rename the macros with _REAL and _VIRT suffixes rather than no
suffix and a _RELON suffix.
- Move the macro definitions together in the file.
- Move RELOCATABLE ifdef inside the _VIRT macro.
Further consolidation between variants does not buy much here.
No generated code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Sat, 22 Jun 2019 13:15:12 +0000 (23:15 +1000)]
powerpc/64s/exception: consolidate EXCEPTION_PROLOG_2 with _NORI variant
Switch to a gas macro that conditionally expands the RI clearing
instruction.
No generated code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Sat, 22 Jun 2019 13:15:11 +0000 (23:15 +1000)]
powerpc/64s/exception: remove H concatenation for EXC_HV variants
Replace all instances of this with gas macros that test the hsrr
parameter and use the appropriate register names / labels.
No generated code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Remove extraneous 2nd check for 0xea0 in SOFTEN_TEST]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Tue, 2 Jul 2019 08:20:52 +0000 (18:20 +1000)]
powerpc/64s/exception: Remove unused SOFTEN_VALUE_0x980
Remove SOFTEN_VALUE_0x980, it's been unused since commit dabe859ec636
("powerpc: Give hypervisor decrementer interrupts their own handler")
(Sep 2012).
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Tue, 11 Jun 2019 14:30:13 +0000 (00:30 +1000)]
powerpc/64s/exception: fix line wrap and semicolon inconsistencies in macros
By convention, all lines should be separated by semicolons. The last
line should have neither a semicolon nor a line wrap.
No generated code change.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christoph Hellwig [Tue, 25 Jun 2019 14:52:39 +0000 (16:52 +0200)]
powerpc/powernv: remove the unused vas_win_paste_addr and vas_win_id functions
These two functions have never been used anywhere in the kernel tree
since they were added to the kernel.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christoph Hellwig [Tue, 25 Jun 2019 14:52:38 +0000 (16:52 +0200)]
powerpc/powernv: remove unused NPU DMA code
None of these routines were ever used anywhere in the kernel tree
since they were added to the kernel.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christoph Hellwig [Tue, 25 Jun 2019 14:52:37 +0000 (16:52 +0200)]
powerpc/powernv: remove the unused tunneling exports
These have never been used anywhere in the kernel tree since they
were added.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christoph Hellwig [Tue, 25 Jun 2019 14:52:36 +0000 (16:52 +0200)]
powerpc/powernv: remove the unused pnv_pci_set_p2p function
This function has never been used anywhere in the kernel tree since it
was added to the tree. We also now have proper PCIe P2P APIs in the core
kernel, and any new P2P support should be using those.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Naveen N. Rao [Thu, 27 Jun 2019 09:59:40 +0000 (15:29 +0530)]
powerpc/xmon: Fix disabling tracing while in xmon
Commit ed49f7fd6438d ("powerpc/xmon: Disable tracing when entering
xmon") added code to disable recording trace entries while in xmon. The
commit introduced a variable 'tracing_enabled' to record if tracing was
enabled on xmon entry, and used this to conditionally enable tracing
during exit from xmon.
However, we are not checking the value of the 'fromipi' variable in
xmon_core() when setting 'tracing_enabled'. Due to this, when secondary
cpus enter xmon, they see tracing as already disabled and tracing
won't be re-enabled on exit. Fix this.
Fixes: ed49f7fd6438d ("powerpc/xmon: Disable tracing when entering xmon")
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Denis Efremov [Sun, 23 Jun 2019 15:52:00 +0000 (18:52 +0300)]
selftests/powerpc: ppc_asm.h: typo in the header guard
The guard macro __PPC_ASM_H in the header ppc_asm.h
doesn't match the #ifndef macro _PPC_ASM_H. The patch
makes them the same.
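The fix in miniature, for any header: the #ifndef and #define must name the same macro, or the guard never takes effect and double inclusion is possible.

  #ifndef _PPC_ASM_H
  #define _PPC_ASM_H

  /* ... header contents ... */

  #endif /* _PPC_ASM_H */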
Signed-off-by: Denis Efremov <efremov@linux.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Qian Cai [Thu, 6 Jun 2019 13:58:13 +0000 (09:58 -0400)]
powerpc/cacheflush: fix variable set but not used
powerpc's flush_cache_vmap() is defined as a macro and never uses
either of its arguments, so it generates a compilation warning:
lib/ioremap.c: In function 'ioremap_page_range':
lib/ioremap.c:203:16: warning: variable 'start' set but not used
[-Wunused-but-set-variable]
Fix it by making it an inline function.
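A minimal before/after sketch; the empty inline body is the point, since an inline function "consumes" its arguments as far as the warning is concerned:

  /* before (sketch): the arguments evaporate, so callers warn */
  /* #define flush_cache_vmap(start, end) do { } while (0) */

  /* after: same no-op, but start and end count as used */
  static inline void flush_cache_vmap(unsigned long start, unsigned long end)
  {
  }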
Signed-off-by: Qian Cai <cai@lca.pw>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Qian Cai [Wed, 5 Jun 2019 20:46:19 +0000 (16:46 -0400)]
powerpc/eeh_cache: fix a W=1 kernel-doc warning
The opening comment mark "/**" is reserved for kernel-doc comments, so
it will generate a warning with "make W=1".
arch/powerpc/kernel/eeh_cache.c:37: warning: cannot understand function
prototype: 'struct pci_io_addr_range
Since this is not a kernel-doc for the struct below, but rather an
overview of this source file eeh_cache.c, just use the free-form
comment style of kernel-doc instead.
Signed-off-by: Qian Cai <cai@lca.pw>
Acked-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Tue, 7 May 2019 13:31:38 +0000 (13:31 +0000)]
powerpc/ftrace: Enable C Version of recordmcount
Select HAVE_C_RECORDMCOUNT to use the C version of recordmcount
instead of the old Perl version.
This should improve build time. It also seems like the old Perl version
misses some calls to _mcount that the C version finds.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Naveen N. Rao [Wed, 26 Jun 2019 18:38:01 +0000 (00:08 +0530)]
recordmcount: Fix spurious mcount entries on powerpc
An impending change to enable HAVE_C_RECORDMCOUNT on powerpc leads to
warnings such as the following:
# modprobe kprobe_example
ftrace-powerpc: Not expected bl: opcode is 3c4c0001
WARNING: CPU: 0 PID: 227 at kernel/trace/ftrace.c:2001 ftrace_bug+0x90/0x318
Modules linked in:
CPU: 0 PID: 227 Comm: modprobe Not tainted 5.2.0-rc6-00678-g1c329100b942 #2
NIP:  c000000000264318 LR: c00000000025d694 CTR: c000000000f5cd30
REGS: c000000001f2b7b0 TRAP: 0700   Not tainted  (5.2.0-rc6-00678-g1c329100b942)
MSR:  900000010282b033 <SF,HV,VEC,VSX,EE,FP,ME,IR,DR,RI,LE,TM[E]>  CR: 28228222  XER: 00000000
CFAR: c0000000002642fc IRQMASK: 0
<snip>
NIP [c000000000264318] ftrace_bug+0x90/0x318
LR [c00000000025d694] ftrace_process_locs+0x4f4/0x5e0
Call Trace:
[c000000001f2ba40] [0000000000000004] 0x4 (unreliable)
[c000000001f2bad0] [c00000000025d694] ftrace_process_locs+0x4f4/0x5e0
[c000000001f2bb90] [c00000000020ff10] load_module+0x25b0/0x30c0
[c000000001f2bd00] [c000000000210cb0] sys_finit_module+0xc0/0x130
[c000000001f2be20] [c00000000000bda4] system_call+0x5c/0x70
Instruction dump:
419e0018 2f83ffff 419e00bc 2f83ffea 409e00cc 4800001c 0fe00000 3c62ff96
39000001 39400000 386386d0 480000c4 <0fe00000> 3ce20003 39000001 3c62ff96
---[ end trace 4c438d5cebf78381 ]---
ftrace failed to modify
[<c0080000012a0008>] 0xc0080000012a0008
 actual:   01:00:4c:3c
Initializing ftrace call sites
ftrace record flags: 2000000
 (0)
 expected tramp: c00000000006af4c
Looking at the relocation records in __mcount_loc shows a few spurious
entries:
RELOCATION RECORDS FOR [__mcount_loc]:
OFFSET TYPE VALUE
0000000000000000 R_PPC64_ADDR64 .text.unlikely+0x0000000000000008
0000000000000008 R_PPC64_ADDR64 .text.unlikely+0x0000000000000014
0000000000000010 R_PPC64_ADDR64 .text.unlikely+0x0000000000000060
0000000000000018 R_PPC64_ADDR64 .text.unlikely+0x00000000000000b4
0000000000000020 R_PPC64_ADDR64 .init.text+0x0000000000000008
0000000000000028 R_PPC64_ADDR64 .init.text+0x0000000000000014
The first entry in each section is incorrect. Looking at the
relocation records, the spurious entries correspond to the
R_PPC64_ENTRY records:
RELOCATION RECORDS FOR [.text.unlikely]:
OFFSET TYPE VALUE
0000000000000000 R_PPC64_REL64 .TOC.-0x0000000000000008
0000000000000008 R_PPC64_ENTRY *ABS*
0000000000000014 R_PPC64_REL24 _mcount
<snip>
The problem is that we are not validating the return value from
get_mcountsym() in sift_rel_mcount(). With this entry, mcountsym is 0,
but Elf_r_sym(relp) also ends up being 0. Fix this by ensuring
mcountsym is valid before processing the entry.
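Sketched (condition simplified from sift_rel_mcount(); see the patch for the exact form):

  mcountsym = get_mcountsym(sym0, relp, str0);

  /* Skip records such as R_PPC64_ENTRY where no _mcount symbol was
   * found: mcountsym of 0 would otherwise match Elf_r_sym() == 0. */
  if (mcountsym && mcountsym == Elf_r_sym(relp)) {
          /* ... record the call site in __mcount_loc ... */
  }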
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Tested-by: Satheesh Rajendran <sathnaga@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nathan Lynch [Fri, 21 Jun 2019 06:05:18 +0000 (01:05 -0500)]
powerpc/rtas: retry when cpu offline races with suspend/migration
The protocol for suspending or migrating an LPAR requires all present
processor threads to enter H_JOIN. So if we have threads offline, we
have to temporarily bring them up. This can race with administrator
actions such as SMT state changes. As of dfd718a2ed1f ("powerpc/rtas:
Fix a potential race between CPU-Offline & Migration"),
rtas_ibm_suspend_me() accounts for this, but errors out with -EBUSY
for what almost certainly is a transient condition in any reasonable
scenario.
Callers of rtas_ibm_suspend_me() already retry when -EAGAIN is
returned, and it is typical during a migration for that to happen
repeatedly for several minutes polling the H_VASI_STATE hcall result
before proceeding to the next stage.
So return -EAGAIN instead of -EBUSY when this race is
encountered. Additionally: logging this event is still appropriate but
use pr_info instead of pr_err; and remove use of unlikely() while here
as this is not a hot path at all.
Fixes: dfd718a2ed1f ("powerpc/rtas: Fix a potential race between CPU-Offline & Migration")
Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Neuling [Mon, 13 May 2019 05:39:10 +0000 (15:39 +1000)]
powerpc: Document xive=off option
Commit 243e25112d06 ("powerpc/xive: Native exploitation of the XIVE
interrupt controller") added an option to turn off Linux native XIVE
usage via the xive=off kernel command line option.
Document that option.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Acked-by: Stewart Smith <stewart@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Mon, 1 Jul 2019 04:04:39 +0000 (14:04 +1000)]
Merge branch 'fixes' into next
Merge our fixes branch into next. This brings in a number of commits
that fix bugs we don't want to hit in next, in particular the fix for
CVE-2019-12817.
Michael Ellerman [Mon, 1 Jul 2019 03:59:40 +0000 (13:59 +1000)]
Merge tag 'powerpc-5.2-6' into fixes
This merges the commits that were the fix for CVE-2019-12817, which was
developed under embargo. They have already been merged by Linus.
Merge them into fixes now so that this branch contains all the fixes for
this release.
Nicholas Piggin [Fri, 21 Jun 2019 22:55:54 +0000 (08:55 +1000)]
powerpc/64s/exception: Fix machine check early corrupting AMR
The early machine check runs in real mode, so locking is unnecessary.
Worse, the windup does not restore AMR, so this can result in a false
KUAP fault after a recoverable machine check hits inside a user copy
operation.
Fix this similarly to HMI by just avoiding the kuap lock in the
early machine check handler (it will be set by the late handler that
runs in virtual mode if that runs). If the virtual mode handler is
reached, it will lock and restore the AMR.
Fixes: 890274c2dc4c0 ("powerpc/64s: Implement KUAP for Radix MMU")
Cc: Russell Currey <ruscur@russell.cc>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Suraj Jitindar Singh [Thu, 20 Jun 2019 01:46:51 +0000 (11:46 +1000)]
KVM: PPC: Book3S HV: Clear pending decrementer exceptions on nested guest entry
If we enter an L1 guest with a pending decrementer exception then this
is cleared on guest exit if the guest has written a positive value
into the decrementer (indicating that it handled the decrementer
exception) since there is no other way to detect that the guest has
handled the pending exception and that it should be dequeued. In the
event that the L1 guest tries to run a nested (L2) guest immediately
after this and the L2 guest decrementer is negative (which is loaded
by L1 before making the H_ENTER_NESTED hcall), then the pending
decrementer exception isn't cleared and the L2 entry is blocked since
L1 has a pending exception, even though L1 may have already handled
the exception and written a positive value for its decrementer. This
results in a loop of L1 trying to enter the L2 guest and L0 blocking
the entry since L1 has an interrupt pending, with the outcome being
that L2 never gets to run and hangs.
Fix this by clearing any pending decrementer exceptions when L1 makes
the H_ENTER_NESTED hcall, since it won't do this if its decrementer
has gone negative; anyway, its decrementer has been communicated
to L0 in the hdec_expires field and L0 will return control to L1 when
this goes negative by delivering an H_DECREMENTER exception.
Fixes: 95a6432ce903 ("KVM: PPC: Book3S HV: Streamlined guest entry/exit path on P9 for radix guests")
Cc: stable@vger.kernel.org # v4.20+
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Suraj Jitindar Singh [Thu, 20 Jun 2019 01:46:50 +0000 (11:46 +1000)]
KVM: PPC: Book3S HV: Sign extend decrementer value if not using large decrementer
On POWER9 the decrementer can operate in large decrementer mode where
the decrementer is 56 bits and sign extended to 64 bits. When not
operating in this mode the decrementer behaves as a 32 bit decrementer
which is NOT sign extended (as on POWER8).
Currently when reading a guest decrementer value we don't take into
account whether the large decrementer is enabled or not, and this
means the value will be incorrect when the guest is not using the
large decrementer. Fix this by sign extending the value read when the
guest isn't using the large decrementer.
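A hedged sketch of the conversion (helper name invented): with the large decrementer off, only the low 32 bits are meaningful and must be sign extended by hand.

  static u64 guest_dec_value(u64 raw, bool large_dec)
  {
          if (large_dec)
                  return raw;             /* 56-bit value, already extended */
          return (u64)(s64)(s32)raw;      /* POWER8-style 32-bit decrementer */
  }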
Fixes: 95a6432ce903 ("KVM: PPC: Book3S HV: Streamlined guest entry/exit path on P9 for radix guests")
Cc: stable@vger.kernel.org # v4.20+
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Suraj Jitindar Singh [Thu, 20 Jun 2019 01:46:49 +0000 (11:46 +1000)]
KVM: PPC: Book3S HV: Invalidate ERAT when flushing guest TLB entries
When a guest vcpu moves from one physical thread to another it is
necessary for the host to perform a tlb flush on the previous core if
another vcpu from the same guest is going to run there. This is because the
guest may use the local form of the tlb invalidation instruction meaning
stale tlb entries would persist where it previously ran. This is handled
on guest entry in kvmppc_check_need_tlb_flush() which calls
flush_guest_tlb() to perform the tlb flush.
Previously the generic radix__local_flush_tlb_lpid_guest() function was
used, however the functionality was reimplemented in flush_guest_tlb()
to avoid the trace_tlbie() call as the flushing may be done in real
mode. The reimplementation in flush_guest_tlb() was missing an erat
invalidation after flushing the tlb.
This led to observable memory corruption in the guest due to the
caching of stale translations. Fix this by adding the erat invalidation.
Fixes: 70ea13f6e609 ("KVM: PPC: Book3S HV: Flush TLB on secondary radix threads")
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Alexey Kardashevskiy [Wed, 5 Jun 2019 03:38:14 +0000 (13:38 +1000)]
powerpc/pci/of: Fix OF flags parsing for 64bit BARs
When the firmware does PCI BAR resource allocation, it passes the assigned
addresses and flags (prefetch/64bit/...) via the "reg" property of
a PCI device device tree node so the kernel does not need to do
resource allocation.
The flags are stored in resource::flags - the lower byte stores
PCI_BASE_ADDRESS_SPACE/etc bits and the other bytes are IORESOURCE_IO/etc.
Some flags from PCI_BASE_ADDRESS_xxx and IORESOURCE_xxx are duplicated,
such as PCI_BASE_ADDRESS_MEM_PREFETCH/PCI_BASE_ADDRESS_MEM_TYPE_64/etc.
When parsing the "reg" property, we copy the prefetch flag but we skip
PCI_BASE_ADDRESS_MEM_TYPE_64, which leaves the flags out of sync.
The missing IORESOURCE_MEM_64 flag comes into play under 2 conditions:
1. we remove PCI_PROBE_ONLY for pseries (by hacking pSeries_setup_arch()
or by passing "/chosen/linux,pci-probe-only");
2. we request resource alignment (by passing pci=resource_alignment=
via the kernel cmd line to request PAGE_SIZE alignment or defining
ppc_md.pcibios_default_alignment which returns anything but 0). Note that
the alignment requests are ignored if PCI_PROBE_ONLY is enabled.
With 1) and 2), the generic PCI code in the kernel unconditionally
decides to:
- reassign the BARs in pci_specified_resource_alignment() (works fine)
- write new BARs to the device - this fails for 64bit BARs as the generic
code looks at IORESOURCE_MEM_64 (not set) and writes only lower 32bits
of the BAR and leaves the upper 32bit unmodified which breaks BAR mapping
in the hypervisor.
This fixes the issue by copying the flag. This is useful if we want to
enforce certain BAR alignment per platform, as handling subpage-sized
BARs has proven to cause problems with hotplug (SLOF already aligns
BARs to 64k).
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Sam Bobroff <sbobroff@linux.ibm.com>
Reviewed-by: Oliver O'Halloran <oohall@gmail.com>
Reviewed-by: Shawn Anastasio <shawn@anastas.io>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christoph Hellwig [Thu, 13 Jun 2019 08:24:46 +0000 (10:24 +0200)]
powerpc: enable a 30-bit ZONE_DMA for 32-bit pmac
With the strict dma mask checking introduced with the switch to
the generic DMA direct code, common wifi chips on 32-bit powerbooks
stopped working. Add a 30-bit ZONE_DMA to the 32-bit pmac builds
to allow them to reliably allocate dma coherent memory.
Fixes: 65a21b71f948 ("powerpc/dma: remove dma_nommu_dma_supported")
Reported-by: Aaro Koskinen <aaro.koskinen@iki.fi>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Larry Finger <Larry.Finger@lwfinger.net>
Acked-by: Larry Finger <Larry.Finger@lwfinger.net>
Tested-by: Aaro Koskinen <aaro.koskinen@iki.fi>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Mon, 10 Jun 2019 03:08:18 +0000 (13:08 +1000)]
powerpc/64s/radix: Enable HAVE_ARCH_HUGE_VMAP
This sets the HAVE_ARCH_HUGE_VMAP option, and defines the required
page table functions.
This enables huge (2MB and 1GB) ioremap mappings. I don't have a
benchmark for this change, but huge vmap will be used by a later core
kernel change to enable huge vmalloc memory mappings. This improves
cached `git diff` performance by about 5% on a 2-node POWER9 with 32MB
size dentry cache hash.
Profiling git diff dTLB misses with a vanilla kernel:
81.75% git [kernel.vmlinux] [k] __d_lookup_rcu
7.21% git [kernel.vmlinux] [k] strncpy_from_user
1.77% git [kernel.vmlinux] [k] find_get_entry
1.59% git [kernel.vmlinux] [k] kmem_cache_free
40,168 dTLB-miss
0.100342754 seconds time elapsed
With powerpc huge vmalloc:
2,987 dTLB-miss
0.095933138 seconds time elapsed
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Mon, 10 Jun 2019 03:08:17 +0000 (13:08 +1000)]
powerpc/64s/radix: ioremap use ioremap_page_range
Radix can use ioremap_page_range for ioremap, after slab is available.
This makes it possible to enable huge ioremap mapping support.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Mon, 10 Jun 2019 03:08:16 +0000 (13:08 +1000)]
powerpc/64: __ioremap_at clean up in the error case
__ioremap_at error handling is wonky: it requires the caller to clean up
after it. Implement a helper that does the map and error cleanup and
remove the requirement from the caller.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Anju T Sudhakar [Mon, 10 Jun 2019 06:32:29 +0000 (12:02 +0530)]
powerpc/perf: Use cpumask_last() to determine the designated cpu for nest/core units.
Nest and core IMC (In-Memory Collection counters) assign a particular
cpu as the designated target for counter data collection. During
system boot, the first online cpu in a chip gets assigned as the
designated cpu for that chip (for nest-imc) and the first online cpu in
a core gets assigned as the designated cpu for that core (for
core-imc).
If the designated cpu goes offline, the next online cpu from the same
chip (for nest-imc)/core (for core-imc) is assigned as the next target,
and the event context is migrated to the target cpu. Currently,
the cpumask_any_but() function is used to find the target cpu. Though
this function is expected to return a `random` cpu, it always returns
the next online cpu.
If all cpus in a chip/core are offlined in a sequential manner,
starting from the first cpu, the event migration has to happen for all
the cpus which go offline. Since the migration process involves a
grace period, the total time taken to offline all the cpus becomes
significant.
Example:
In a system which has 2 sockets, with
NUMA node0 CPU(s): 0-87
NUMA node8 CPU(s): 88-175
Time taken to offline cpu 88-175:
real 2m56.099s
user 0m0.191s
sys 0m0.000s
Use cpumask_last() to choose the target cpu when the designated cpu
goes offline, so the migration will happen only when the last_cpu in
the mask goes offline. This way the time taken to offline all cpus in
a chip/core can be reduced.
With the patch:
Time taken to offline cpu 88-175:
real 0m12.207s
user 0m0.171s
sys 0m0.000s
Offlining all cpus in reverse order is also taken care of because
cpumask_any_but() is used to find the designated cpu if the last cpu
in the mask goes offline. Since cpumask_any_but() always returns the
first cpu in the mask, that becomes the designated cpu and migration
will happen only when the first_cpu in the mask goes offline.
Example: With the patch,
Time taken to offline cpu from 175-88:
real 0m9.330s
user 0m0.110s
sys 0m0.000s
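The selection logic, sketched (helper name hypothetical): take the last cpu in the mask, falling back to cpumask_any_but() only when that cpu is the one going away.

  static unsigned int imc_pick_target(const struct cpumask *mask,
                                      unsigned int dying_cpu)
  {
          unsigned int target = cpumask_last(mask);

          if (target == dying_cpu || target >= nr_cpu_ids)
                  target = cpumask_any_but(mask, dying_cpu);
          return target;
  }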
Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
Reviewed-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Shaokun Zhang [Wed, 29 May 2019 09:21:51 +0000 (17:21 +0800)]
powerpc/64s: Fix misleading SPR and timebase information
pr_info shows SPR and timebase as a decimal value with a '0x'
prefix, which is somewhat misleading.
Fix it to print hexadecimal, as was intended.
Fixes: 10d91611f426 ("powerpc/64s: Reimplement book3s idle code in C")
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nathan Lynch [Tue, 28 May 2019 23:28:01 +0000 (18:28 -0500)]
powerpc/pseries: avoid blocking in irq when queuing hotplug events
A couple of bugs in queue_hotplug_event():
1. Unchecked kmalloc result which could lead to an oops.
2. Use of GFP_KERNEL allocations in interrupt context (this code's
only caller is ras_hotplug_interrupt()).
Use kmemdup to avoid open-coding the allocation+copy and check for
failure; use GFP_ATOMIC for both allocations.
Ultimately it probably would be better to avoid or reduce allocations
in this path if possible.
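The pattern, sketched (simplified, not the exact pseries code): duplicate the log with GFP_ATOMIC since this runs in interrupt context, and bail out on failure instead of dereferencing NULL.

  struct pseries_hp_errorlog *copy;

  copy = kmemdup(hp_errlog, sizeof(*hp_errlog), GFP_ATOMIC);
  if (!copy)
          return;         /* drop the event rather than oops */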
Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Ravi Bangoria [Thu, 13 Jun 2019 03:30:14 +0000 (09:00 +0530)]
powerpc/watchpoint: Restore NV GPRs while returning from exception
powerpc hardware triggers the watchpoint before executing the
instruction. To get trigger-after-execute behavior, the kernel
emulates the instruction. If the instruction is 'load something into a
non-volatile register', the exception handler should restore the
emulated register state when returning, otherwise there will be
register state corruption. eg, adding a watchpoint on a list can
corrupt the list:
# cat /proc/kallsyms | grep kthread_create_list
c00000000121c8b8 d kthread_create_list
Add watchpoint on kthread_create_list->prev:
# perf record -e mem:0xc00000000121c8c0
Run some workload such that new kthread gets invoked. eg, I just
logged out from console:
list_add corruption. next->prev should be prev (c000000001214e00), \
but was c00000000121c8b8. (next=c00000000121c8b8).
WARNING: CPU: 59 PID: 309 at lib/list_debug.c:25 __list_add_valid+0xb4/0xc0
CPU: 59 PID: 309 Comm: kworker/59:0 Kdump: loaded Not tainted 5.1.0-rc7+ #69
...
NIP __list_add_valid+0xb4/0xc0
LR __list_add_valid+0xb0/0xc0
Call Trace:
__list_add_valid+0xb0/0xc0 (unreliable)
__kthread_create_on_node+0xe0/0x260
kthread_create_on_node+0x34/0x50
create_worker+0xe8/0x260
worker_thread+0x444/0x560
kthread+0x160/0x1a0
ret_from_kernel_thread+0x5c/0x70
List corruption happened because it uses a 'load into non-volatile
register' instruction:
Snippet from __kthread_create_on_node:
c000000000136be8: addis r29,r2,-19
c000000000136bec: ld r29,31424(r29)
if (!__list_add_valid(new, prev, next))
c000000000136bf0: mr r3,r30
c000000000136bf4: mr r5,r28
c000000000136bf8: mr r4,r29
c000000000136bfc: bl c00000000059a2f8 <__list_add_valid+0x8>
Register state from WARN_ON():
GPR00: c00000000059a3a0 c000007ff23afb50 c000000001344e00 0000000000000075
GPR04: 0000000000000000 0000000000000000 0000001852af8bc1 0000000000000000
GPR08: 0000000000000001 0000000000000007 0000000000000006 00000000000004aa
GPR12: 0000000000000000 c000007ffffeb080 c000000000137038 c000005ff62aaa00
GPR16: 0000000000000000 0000000000000000 c000007fffbe7600 c000007fffbe7370
GPR20: c000007fffbe7320 c000007fffbe7300 c000000001373a00 0000000000000000
GPR24: fffffffffffffef7 c00000000012e320 c000007ff23afcb0 c000000000cb8628
GPR28: c00000000121c8b8 c000000001214e00 c000007fef5b17e8 c000007fef5b17c0
Watchpoint hit at 0xc000000000136bec.
addis r29,r2,-19
=> r29 = 0xc000000001344e00 + (-19 << 16)
=> r29 = 0xc000000001214e00
ld r29,31424(r29)
=> r29 = *(0xc000000001214e00 + 31424)
=> r29 = *(0xc00000000121c8c0)
0xc00000000121c8c0 is where we placed a watchpoint and thus this
instruction was emulated by emulate_step. But because handle_dabr_fault
did not restore the emulated register state, r29 still contains a
stale value in the register state above.
Fixes: 5aae8a5370802 ("powerpc, hw_breakpoints: Implement hw_breakpoints for 64-bit server processors")
Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Cc: stable@vger.kernel.org # 2.6.36+
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Greg Kroah-Hartman [Wed, 12 Jun 2019 15:54:18 +0000 (17:54 +0200)]
cxl: no need to check return value of debugfs_create functions
When calling debugfs functions, there is no need to ever check the
return value. The function can work or not, but the code logic should
never do something different based on this.
Because there's no need to check, also make the return value of the
local debugfs_create_io_x64() call void, as no one ever did anything
with the return value (as they did not need to).
And make the cxl_debugfs_* calls return void as no one was even checking
their return value at all.
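The idiom, in a hypothetical driver (names invented): create the entries and ignore the outcome, since debugfs is best-effort and no control flow should depend on it.

  static u32 afu_status;

  static void my_debugfs_init(void)
  {
          struct dentry *dir = debugfs_create_dir("cxl", NULL);

          /* result deliberately unchecked: the driver behaves the same
           * whether or not the file appears */
          debugfs_create_u32("afu_status", 0444, dir, &afu_status);
  }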
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Frederic Barrat <fbarrat@linux.ibm.com>
Acked-by: Andrew Donnellan <ajd@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Geert Uytterhoeven [Mon, 17 Jun 2019 09:07:13 +0000 (11:07 +0200)]
powerpc/ps3: Use [] to denote a flexible array member
Flexible array members should be denoted using [] instead of [0], else
gcc will not warn when they are no longer at the end of the structure.
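The change in miniature:

  struct frame {
          u64 len;
          u8  data[];     /* was: u8 data[0]; with [] gcc can warn if
                           * this member is ever no longer last */
  };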
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Andreas Schwab [Mon, 17 Jun 2019 21:22:20 +0000 (23:22 +0200)]
powerpc/mm/32s: fix condition that is always true
Move a misplaced paren that makes the condition always true.
Fixes: 63b2bc619565 ("powerpc/mm/32s: Use BATs for STRICT_KERNEL_RWX")
Cc: stable@vger.kernel.org # v5.1+
Signed-off-by: Andreas Schwab <schwab@linux-m68k.org>
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Mon, 17 Jun 2019 21:42:14 +0000 (21:42 +0000)]
powerpc/32s: fix suspend/resume when IBATs 4-7 are used
Previously, only IBAT1 and IBAT2 were used to map kernel linear mem.
Since commit 63b2bc619565 ("powerpc/mm/32s: Use BATs for
STRICT_KERNEL_RWX"), we may have all 8 BATs used for mapping kernel
text. But the suspend/restore functions only save/restore BATs 0 to 3,
and clear BATs 4 to 7.
Make the suspend and restore functions respectively save and reload
all 8 BATs on CPUs having the MMU_FTR_USE_HIGH_BATS feature.
Reported-by: Andreas Schwab <schwab@linux-m68k.org>
Cc: stable@vger.kernel.org
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Gustavo Romero [Mon, 17 Jun 2019 21:24:58 +0000 (17:24 -0400)]
selftests/powerpc: Fix earlyclobber in tm-vmxcopy
In some cases, the compiler can allocate the same register for operand
'res' and 'vecoutptr', resulting in a segfault at
'stxvd2x 40,0,%[vecoutptr]' because the base register contains 1 rather
than a valid address, yielding a false positive in the test.
Fix it by marking output 'res' as an earlyclobber operand, so
it may not overlap an input operand ('vecoutptr').
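A reduced, standalone form of the fix (the asm is simplified from the
selftest; only the constraint change matters):
#include <stdint.h>

static long vmx_store_sketch(uint8_t *vecout)
{
	long res;

	asm volatile(
		"li %[res], 1\n\t"          /* res is written here...       */
		"stxvd2x 40, 0, %[ptr]\n\t" /* ...before this input is read */
		: [res] "=&r" (res)         /* '&' (earlyclobber) keeps res
					     * out of vecout's register     */
		: [ptr] "r" (vecout)
		: "memory");

	return res;
}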
Signed-off-by: Gustavo Romero <gromero@linux.vnet.ibm.com>
Reviewed-by: Segher Boessenkool <segher@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Suraj Jitindar Singh [Mon, 17 Jun 2019 07:16:19 +0000 (17:16 +1000)]
KVM: PPC: Book3S HV: Only write DAWR[X] when handling h_set_dawr in real mode
The hcall H_SET_DAWR is used by a guest to set the data address
watchpoint register (DAWR). This hcall is handled in the host in
kvmppc_h_set_dawr() which can be called in either real mode on the
guest exit path from hcall_try_real_mode() in book3s_hv_rmhandlers.S,
or in virtual mode when called from kvmppc_pseries_do_hcall() in
book3s_hv.c.
The function kvmppc_h_set_dawr() updates the dawr and dawrx fields in
the vcpu struct accordingly and then also writes the respective values
into the DAWR and DAWRX registers directly. It is necessary to write
the registers directly here when calling the function in real mode,
since the path to re-enter the guest won't do this. However, when in
virtual mode, the host DAWR and DAWRX values have already been
restored, so writing the registers would overwrite them.
Additionally there is no reason to write the guest values here as
these will be read from the vcpu struct and written to the registers
appropriately the next time the vcpu is run.
This also avoids the case when handling h_set_dawr for a nested guest
where the guest hypervisor isn't able to write the DAWR and DAWRX
registers directly and must rely on the real hypervisor to do this for
it when it calls H_ENTER_NESTED.
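Rendered as C, the behaviour described above looks roughly like this
(the real handler is assembly; the function is a sketch, with register
and field names following the 5.2-era sources):
static long h_set_dawr_sketch(struct kvm_vcpu *vcpu, unsigned long dawr,
			      unsigned long dawrx, bool real_mode)
{
	/* The guest values always go into the vcpu struct... */
	vcpu->arch.dawr  = dawr;
	vcpu->arch.dawrx = dawrx;

	/*
	 * ...but only the real-mode exit path writes the registers
	 * directly: the virtual-mode path has already restored the
	 * host DAWR/DAWRX, and the guest values will be loaded on the
	 * next vcpu entry anyway.
	 */
	if (real_mode) {
		mtspr(SPRN_DAWR, dawr);
		mtspr(SPRN_DAWRX, dawrx);
	}

	return H_SUCCESS;
}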
Fixes:
c1fe190c0672 ("powerpc: Add force enable of DAWR on P9 option")
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Neuling [Mon, 17 Jun 2019 07:16:18 +0000 (17:16 +1000)]
KVM: PPC: Book3S HV: Fix r3 corruption in h_set_dabr()
Commit
c1fe190c0672 ("powerpc: Add force enable of DAWR on P9 option")
screwed up some assembler and corrupted a pointer in r3. This resulted
in crashes like the below:
BUG: Kernel NULL pointer dereference at 0x000013bf
Faulting instruction address: 0xc00000000010b044
Oops: Kernel access of bad area, sig: 11 [#1]
LE PAGE_SIZE=64K MMU=Radix MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
CPU: 8 PID: 1771 Comm: qemu-system-ppc Kdump: loaded Not tainted 5.2.0-rc4+ #3
NIP:  c00000000010b044 LR: c0080000089dacf4 CTR: c00000000010aff4
REGS: c00000179b397710 TRAP: 0300   Not tainted  (5.2.0-rc4+)
MSR:  800000000280b033 <SF,VEC,VSX,EE,FP,ME,IR,DR,RI,LE>  CR: 42244842  XER: 00000000
CFAR: c00000000010aff8 DAR: 00000000000013bf DSISR: 42000000 IRQMASK: 0
GPR00: c0080000089dd6bc c00000179b3979a0 c008000008a04300 ffffffffffffffff
GPR04: 0000000000000000 0000000000000003 000000002444b05d c0000017f11c45d0
...
NIP kvmppc_h_set_dabr+0x50/0x68
LR kvmppc_pseries_do_hcall+0xa3c/0xeb0 [kvm_hv]
Call Trace:
0xc0000017f11c0000 (unreliable)
kvmppc_vcpu_run_hv+0x694/0xec0 [kvm_hv]
kvmppc_vcpu_run+0x34/0x48 [kvm]
kvm_arch_vcpu_ioctl_run+0x2f4/0x400 [kvm]
kvm_vcpu_ioctl+0x460/0x850 [kvm]
do_vfs_ioctl+0xe4/0xb40
ksys_ioctl+0xc4/0x110
sys_ioctl+0x28/0x80
system_call+0x5c/0x70
Instruction dump:
4082fff4 4c00012c 38600000 4e800020 e96280c0 896b0000 2c2b0000 3860ffff
4d820020 50852e74 508516f6 78840724 <f88313c0> f8a313c8 7c942ba6 7cbc2ba6
Fix the bug by only changing r3 when we are returning immediately.
Fixes:
c1fe190c0672 ("powerpc: Add force enable of DAWR on P9 option")
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reported-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Thu, 23 May 2019 08:39:27 +0000 (08:39 +0000)]
powerpc/32: fix build failure on book3e with KVM
A build failure was introduced by the commit identified below, due to
a missed macro expansion leading to a wrong function name in the call.
arch/powerpc/kernel/head_fsl_booke.o: In function `SystemCall':
arch/powerpc/kernel/head_fsl_booke.S:416: undefined reference to `kvmppc_handler_BOOKE_INTERRUPT_SYSCALL_SPRN_SRR1'
Makefile:1052: recipe for target 'vmlinux' failed
The called function should be kvmppc_handler_8_0x01B(). This patch fixes it.
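The failure mode is the classic token-pasting pitfall; a toy
preprocessor example (names are illustrative, not the kernel's macros):
#define PASTE(a, b)        a##_##b
#define EXPAND_PASTE(a, b) PASTE(a, b)	/* extra layer forces expansion */

#define INTR 8

/* PASTE(handler, INTR)        expands to handler_INTR  (wrong symbol) */
/* EXPAND_PASTE(handler, INTR) expands to handler_8     (as intended)  */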
Reported-by: Paul Mackerras <paulus@ozlabs.org>
Fixes:
1a4b739bbb4f ("powerpc/32: implement fast entry for syscalls on BOOKE")
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Fri, 10 May 2019 06:31:28 +0000 (06:31 +0000)]
powerpc/64: mark start_here_multiplatform as __ref
Otherwise, the following warning is encountered:
WARNING: vmlinux.o(.text+0x3dc6): Section mismatch in reference from the variable start_here_multiplatform to the function .init.text:.early_setup()
The function start_here_multiplatform() references
the function __init .early_setup().
This is often because start_here_multiplatform lacks a __init
annotation or the annotation of .early_setup is wrong.
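For reference, __ref is the standard annotation for such an intentional
reference; a minimal sketch with a hypothetical __init callee:
#include <linux/init.h>

static int __init early_setup_sketch(void)	/* hypothetical callee */
{
	return 0;
}

int __ref start_here_sketch(void)
{
	/* Safe: runs only during boot, before init memory is freed;
	 * __ref suppresses the modpost section-mismatch warning.
	 */
	return early_setup_sketch();
}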
Fixes:
56c46bba9bbf ("powerpc/64: Fix booting large kernels with STRICT_KERNEL_RWX")
Cc: Russell Currey <ruscur@russell.cc>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Thu, 13 Jun 2019 13:52:30 +0000 (13:52 +0000)]
powerpc/booke: fix fast syscall entry on SMP
Use r10 instead of r9 to calculate the CPU offset, as r9 contains
the value from SRR1, which is used later.
Fixes:
1a4b739bbb4f ("powerpc/32: implement fast entry for syscalls on BOOKE")
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Tue, 11 Jun 2019 15:47:20 +0000 (15:47 +0000)]
powerpc/32s: fix initial setup of segment registers on secondary CPU
The patch referenced below moved the loading of segment registers
out of load_up_mmu() in order to do it earlier in the boot sequence.
However, the secondary CPU still needs it to be done when loading up
the MMU.
Reported-by: Erhard F. <erhard_f@mailbox.org>
Fixes:
215b823707ce ("powerpc/32s: set up an early static hash table for KASAN")
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nathan Lynch [Wed, 12 Jun 2019 04:45:06 +0000 (23:45 -0500)]
powerpc/pseries/mobility: rebuild cacheinfo hierarchy post-migration
It's common for the platform to replace the cache device nodes after a
migration. Since the cacheinfo code is never informed about this, it
never drops its references to the source system's cache nodes, winding
up in an inconsistent state that produces warnings and oopses as soon
as a CPU goes online or offline after the migration, e.g.
cache for /cpus/l3-cache@3113(Unified) refers to cache for /cpus/l2-cache@200d(Unified)
WARNING: CPU: 15 PID: 86 at arch/powerpc/kernel/cacheinfo.c:176 release_cache+0x1bc/0x1d0
[...]
NIP release_cache+0x1bc/0x1d0
LR release_cache+0x1b8/0x1d0
Call Trace:
release_cache+0x1b8/0x1d0 (unreliable)
cacheinfo_cpu_offline+0x1c4/0x2c0
unregister_cpu_online+0x1b8/0x260
cpuhp_invoke_callback+0x114/0xf40
cpuhp_thread_fun+0x270/0x310
smpboot_thread_fn+0x2c8/0x390
kthread+0x1b8/0x1c0
ret_from_kernel_thread+0x5c/0x68
Using device tree notifiers won't work since we want to rebuild the
hierarchy only after all the removals and additions have occurred and
the device tree is in a consistent state. Call cacheinfo_teardown()
before processing device tree updates, and rebuild the hierarchy
afterward.
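In outline, the ordering reads like this (cacheinfo_teardown() and the
rebuild step are named in the commit text; the surrounding helper and
apply_dt_updates() are hypothetical):
static int post_mobility_fixup_sketch(void)
{
	int rc;

	cacheinfo_teardown();	 /* drop refs to the source system's nodes */
	rc = apply_dt_updates(); /* hypothetical: apply removals/additions  */
	cacheinfo_rebuild();	 /* re-scan the now-consistent device tree  */

	return rc;
}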
Fixes:
410bccf97881 ("powerpc/pseries: Partition migration in the kernel")
Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com>
Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>