platform/kernel/linux-rpi.git
7 years ago powerpc/smp: Document irq enable/disable after migrating IRQs
Michael Ellerman [Wed, 15 Feb 2017 09:49:54 +0000 (20:49 +1100)]
powerpc/smp: Document irq enable/disable after migrating IRQs

This code was until recently completely undocumented and even now the comment is
not very verbose.

We've already had one patch sent to remove the IRQ enable/disable because it's
"paradoxical and unnecessary". So document it thoroughly to save anyone else
from puzzling over it.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mpc52xx: Don't select user-visible RTAS_PROC
Michael Ellerman [Fri, 17 Feb 2017 06:31:22 +0000 (17:31 +1100)]
powerpc/mpc52xx: Don't select user-visible RTAS_PROC

Otherwise we might select it when its dependencies aren't enabled,
leading to a build break.

It's default y anyway, so it will be on unless someone disables it
manually.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/powernv: Document cxl dependency on special case in pnv_eeh_reset()
Andrew Donnellan [Fri, 22 Jul 2016 07:16:35 +0000 (17:16 +1000)]
powerpc/powernv: Document cxl dependency on special case in pnv_eeh_reset()

pnv_eeh_reset() has special handling for PEs whose primary bus is the
root bus or the bus immediately underneath the root port.

The cxl bi-modal card support added in b0b5e5918ad1 ("cxl: Add
cxl_check_and_switch_mode() API to switch bi-modal cards") relies on
this behaviour when hot-resetting the CAPI adapter following a mode
switch.  Document this in pnv_eeh_reset() so we don't accidentally break
it.

Suggested-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Reviewed-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/eeh: Clean up and document event handling functions
Russell Currey [Wed, 19 Apr 2017 07:39:27 +0000 (17:39 +1000)]
powerpc/eeh: Clean up and document event handling functions

Remove unnecessary tags in eeh_handle_normal_event(), and add function
comments for eeh_handle_normal_event() and eeh_handle_special_event().

The only functional difference is that in the case of a PE reaching the
maximum number of failures, rather than one message telling you of this
and suggesting you reseat the device, there are two separate messages.

Suggested-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Russell Currey <ruscur@russell.cc>
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Reviewed-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/eeh: Avoid use after free in eeh_handle_special_event()
Russell Currey [Wed, 19 Apr 2017 07:39:26 +0000 (17:39 +1000)]
powerpc/eeh: Avoid use after free in eeh_handle_special_event()

eeh_handle_special_event() is called when an EEH event is detected but
can't be narrowed down to a specific PE.  This function looks through
every PE to find one in an erroneous state, then calls the regular event
handler eeh_handle_normal_event() once it knows which PE has an error.

However, if eeh_handle_normal_event() found that the PE cannot possibly
be recovered, it will free it, rendering the passed PE stale.
This leads to a use after free in eeh_handle_special_event() as it attempts to
clear the "recovering" state on the PE after eeh_handle_normal_event() returns.

Thus, make sure the PE is valid when attempting to clear state in
eeh_handle_special_event().
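
As a rough sketch of the guard (the return-value convention shown here is
illustrative; eeh_pe_state_clear() and EEH_PE_RECOVERING are from the EEH
core), the recovery loop becomes:

  rc = eeh_handle_normal_event(pe);
  if (rc)
          continue;       /* recovery failed, 'pe' has been freed */
  eeh_pe_state_clear(pe, EEH_PE_RECOVERING);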

Fixes: 8a6b1bc70dbb ("powerpc/eeh: EEH core to handle special event")
Cc: stable@vger.kernel.org # v3.11+
Reported-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Russell Currey <ruscur@russell.cc>
Reviewed-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago cxl: Mask slice error interrupts after first occurrence
Alastair D'Silva [Mon, 1 May 2017 00:53:31 +0000 (10:53 +1000)]
cxl: Mask slice error interrupts after first occurrence

In some situations, a faulty AFU slice may create an interrupt storm of
slice errors, rendering the machine unusable. Since these interrupts are
informational only, present the interrupt once, then mask it off to
prevent it from being retriggered until the AFU is reset.
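
A minimal sketch of the pattern; the helper and type names here are
hypothetical, the real register accesses live in the cxl driver:

  static irqreturn_t afu_slice_err_irq(int irq, void *data)
  {
          struct afu_dev *afu = data;     /* hypothetical type */

          report_slice_error(afu);        /* present the error once */
          mask_slice_err_irqs(afu);       /* masked until the AFU is reset */
          return IRQ_HANDLED;
  }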

Signed-off-by: Alastair D'Silva <alastair@d-silva.org>
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Reviewed-by: Vaibhav Jain <vaibhav@linux.vnet.ibm.com>
Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago cxl: Route eeh events to all drivers in cxl_pci_error_detected()
Vaibhav Jain [Thu, 27 Apr 2017 05:28:22 +0000 (10:58 +0530)]
cxl: Route eeh events to all drivers in cxl_pci_error_detected()

Fix a boundary condition where in some cases an eeh event that results
in a card reset isn't passed on to a driver attached to the virtual PCI
device associated with a slice. This happens when a device driver
attached to a slice returns a value other than
PCI_ERS_RESULT_NEED_RESET from the eeh error_detected() callback,
resulting in an early return from cxl_pci_error_detected(), so that
drivers attached to the other AFUs on the card won't be notified.

The patch fixes this by making sure that all slice attached
device-drivers are notified and the return values from
error_detected() callback are aggregated in a scheme where request for
'disconnect' trumps all and 'none' trumps 'need_reset'.
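
A sketch of that aggregation rule (the helper name is illustrative; the
PCI_ERS_RESULT_* values are the standard PCI error recovery ones):

  static pci_ers_result_t merge_result(pci_ers_result_t old,
                                       pci_ers_result_t new)
  {
          if (old == PCI_ERS_RESULT_DISCONNECT ||
              new == PCI_ERS_RESULT_DISCONNECT)
                  return PCI_ERS_RESULT_DISCONNECT;
          if (old == PCI_ERS_RESULT_NONE || new == PCI_ERS_RESULT_NONE)
                  return PCI_ERS_RESULT_NONE;
          return PCI_ERS_RESULT_NEED_RESET;
  }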

Fixes: 9e8df8a21963 ("cxl: EEH support")
Cc: stable@vger.kernel.org # v4.3+
Signed-off-by: Vaibhav Jain <vaibhav@linux.vnet.ibm.com>
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago cxl: Force context lock during EEH flow
Vaibhav Jain [Thu, 27 Apr 2017 05:23:25 +0000 (10:53 +0530)]
cxl: Force context lock during EEH flow

During an eeh event when the cxl card is fenced and the card's sysfs
attribute perst_reloads_same_image is set, the following warning message
is seen in the kernel logs:

  Adapter context unlocked with 0 active contexts
  ------------[ cut here ]------------
  WARNING: CPU: 12 PID: 627 at
  ../drivers/misc/cxl/main.c:325 cxl_adapter_context_unlock+0x60/0x80 [cxl]

Even though this warning is harmless, it clutters the kernel log
during an eeh event. The warning is triggered because the EEH callback
cxl_pci_error_detected() doesn't obtain a context-lock before forcibly
detaching all active contexts; when the context-lock is later released
during the call to cxl_configure_adapter() from cxl_pci_slot_reset(),
the warning in cxl_adapter_context_unlock() is triggered.

To fix this warning, we acquire the adapter context-lock via
cxl_adapter_context_lock() in the eeh callback
cxl_pci_error_detected() once all the virtual AFU PHBs are notified
and their contexts detached. The context-lock is released in
cxl_pci_slot_reset() after the adapter is successfully reconfigured
and before we call the slot_reset callback on slice attached
device-drivers.

Fixes: 70b565bbdb91 ("cxl: Prevent adapter reset if an active context exists")
Cc: stable@vger.kernel.org # v4.9+
Reported-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Vaibhav Jain <vaibhav@linux.vnet.ibm.com>
Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Reviewed-by: Matthew R. Ochs <mrochs@linux.vnet.ibm.com>
Tested-by: Uma Krishnan <ukrishn@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/64: Allow CONFIG_RELOCATABLE if COMPILE_TEST
Nicholas Piggin [Wed, 19 Oct 2016 03:16:00 +0000 (14:16 +1100)]
powerpc/64: Allow CONFIG_RELOCATABLE if COMPILE_TEST

This was a hack we added to work around the allmodconfig build breaking, see
commit fb43e8477ed9 ("powerpc: Disable RELOCATABLE for COMPILE_TEST with
PPC64").

Since we merged the thin archives support in commit 43c9127d94d6 ("powerpc: Add
option to use thin archives") this hasn't been necessary, so remove it.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/xmon: Teach xmon oops about radix vectors
Michael Neuling [Thu, 16 Mar 2017 03:04:40 +0000 (14:04 +1100)]
powerpc/xmon: Teach xmon oops about radix vectors

Currently if we take an oops caused by an 0x380 or 0x480 exception, we get a
print which assumes SLB problems. With radix, these vectors have different
meanings.

This patch updates the oops message to reflect these different meanings.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm/hash: Fix off-by-one in comment about kernel contexts ids
Michael Ellerman [Fri, 28 Apr 2017 11:57:22 +0000 (21:57 +1000)]
powerpc/mm/hash: Fix off-by-one in comment about kernel contexts ids

Michal Suchánek noticed a comment in book3s/64/mmu-hash.h about the context ids
we use for the kernel was inconsistent with the code and other comments in the
same file.

It should read 1-4 not 1-5.

While we're touching it, update "address" to "addresses" which makes more sense
as it's referring to more than one address below.

Reported-by: Michal Suchánek <msuchanek@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/pseries: Enable VFIO
Alexey Kardashevskiy [Fri, 24 Mar 2017 06:37:21 +0000 (17:37 +1100)]
powerpc/pseries: Enable VFIO

This enables VFIO on a pseries host in order to allow VFIO in a nested
guest under PR KVM, or DPDK in an HV guest. It adds support for the
VFIO_SPAPR_TCE_IOMMU type.

This adds exchange() callback to allow TCE updates by the SPAPR TCE IOMMU
driver in VFIO.

This initializes the DMA32 window parameters in iommu_table_group, as
this does not implement VFIO_SPAPR_TCE_v2_IOMMU and VFIO_SPAPR_TCE_IOMMU
just reuses the existing DMA32 window.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/powernv: Fix iommu table size calculation hook for small tables
Alexey Kardashevskiy [Thu, 13 Apr 2017 07:05:27 +0000 (17:05 +1000)]
powerpc/powernv: Fix iommu table size calculation hook for small tables

When userspace requests a small TCE table (one that takes less than
the system page size) with more than 1 TCE level, the existing code
returns a single page size, which is a bug: each additional TCE level
requires at least one page, and that is what
pnv_pci_ioda2_table_alloc_pages() allocates. We then end up hitting
WARN_ON(!ret && ((*ptbl)->it_allocated_size != table_size))
in drivers/vfio/vfio_iommu_spapr_tce.c.

This replaces incorrect _ALIGN_UP() (which aligns zero up to zero) with
max_t() to fix the bug.

Besides removing WARN_ON(), there should be no other changes in
behaviour.
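
The essence of the change, roughly (table_size here stands for the
per-level allocation being computed):

  /* before: aligns zero up to zero */
  table_size = _ALIGN_UP(table_size, PAGE_SIZE);

  /* after: each TCE level needs at least one page */
  table_size = max_t(unsigned long, table_size, PAGE_SIZE);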

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/powernv: Check kzalloc() return value in pnv_pci_table_alloc
Alexey Kardashevskiy [Mon, 27 Mar 2017 08:27:37 +0000 (19:27 +1100)]
powerpc/powernv: Check kzalloc() return value in pnv_pci_table_alloc

pnv_pci_table_alloc() ignores a possible failure from kzalloc_node();
this adds a check. There are two callers of pnv_pci_table_alloc():
one already checks for tbl != NULL, so this adds a WARN_ON() to the
other path, which only runs at boot time on IODA1 and is not expected
to fail.
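
The added check is essentially the usual allocation guard (sketch):

  tbl = kzalloc_node(sizeof(struct iommu_table), GFP_KERNEL, nid);
  if (!tbl)
          return NULL;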

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc: Add arch/powerpc/tools directory
Nicholas Piggin [Sat, 26 Nov 2016 03:26:10 +0000 (14:26 +1100)]
powerpc: Add arch/powerpc/tools directory

Move a couple of existing scripts under there. Remove scripts directory:
a script is a tool, a tool is not a script.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc: Use the new post-link pass to check relocations
Nicholas Piggin [Sat, 26 Nov 2016 03:26:09 +0000 (14:26 +1100)]
powerpc: Use the new post-link pass to check relocations

Currently powerpc has to introduce a dependency on its default build
target zImage in order to run a relocation check pass over the linked
vmlinux. This is deficient because the check is not run if the plain
vmlinux target is built, or if one of the other boot targets is built.

Switch to using the kbuild post-link pass, added in commit fbe6e37dab97
("kbuild: add arch specific post-link Makefile") in order to run this
check. In future powerpc will use this to do more complicated operations,
but initially using it for something simple is a good first step.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/xmon: Wait for secondaries before IPI'ing on system reset
Nicholas Piggin [Mon, 19 Dec 2016 18:30:11 +0000 (04:30 +1000)]
powerpc/xmon: Wait for secondaries before IPI'ing on system reset

An externally triggered system reset (e.g., via QEMU nmi command, or pseries
reset button) can cause system reset interrupts on all CPUs. In case this causes
xmon to be entered, it is undesirable for the primary (first) CPU into xmon to
trigger an NMI IPI to others, because this may cause a nested system reset
interrupt.

So spin for a time waiting for secondaries to join xmon before performing the
NMI IPI, similarly to what the crash dump code does.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Only do it when we come in from system reset, not via sysrq etc.]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/pseries: Implement NMI IPI with H_SIGNAL_SYS_RESET
Nicholas Piggin [Mon, 19 Dec 2016 18:30:10 +0000 (04:30 +1000)]
powerpc/pseries: Implement NMI IPI with H_SIGNAL_SYS_RESET

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc: Add struct smp_ops_t.cause_nmi_ipi operation
Nicholas Piggin [Mon, 19 Dec 2016 18:30:09 +0000 (04:30 +1000)]
powerpc: Add struct smp_ops_t.cause_nmi_ipi operation

Have the NMI IPI code use this op when the platform defines it.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc: Add NMI IPI infrastructure
Nicholas Piggin [Mon, 19 Dec 2016 18:30:08 +0000 (04:30 +1000)]
powerpc: Add NMI IPI infrastructure

Add a simple NMI IPI system that handles concurrency and reentrancy.

The platform does not have to implement a true non-maskable interrupt,
the default is to simply use the debugger break IPI message. This has
now been co-opted for a general IPI message, and users (debugger and
crash) have been reimplemented on top of the NMI system.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Incorporate incremental fixes from Nick]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc: Mark system reset as an NMI with nmi_enter/exit()
Nicholas Piggin [Mon, 19 Dec 2016 18:30:07 +0000 (04:30 +1000)]
powerpc: Mark system reset as an NMI with nmi_enter/exit()

System reset is a non-maskable interrupt from Linux's point of view
(occurs under local_irq_disable()), so it should use nmi_enter/exit.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/64s: Dedicated system reset interrupt stack
Nicholas Piggin [Mon, 19 Dec 2016 18:30:06 +0000 (04:30 +1000)]
powerpc/64s: Dedicated system reset interrupt stack

The system reset interrupt is used for crash/debug situations, so it is
desirable to have as little impact on the normal state of the system as
possible.

Currently it uses the current kernel stack to process the exception.
This stores into the stack which may be involved with the crash. The
stack pointer may be corrupted, or it may have overflowed.

Avoid or minimise these problems by creating a dedicated NMI stack for
the system reset interrupt to use.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/64s: Disallow system reset vs system reset reentrancy
Nicholas Piggin [Mon, 19 Dec 2016 18:30:05 +0000 (04:30 +1000)]
powerpc/64s: Disallow system reset vs system reset reentrancy

In preparation for using a dedicated stack for system reset interrupts,
prevent a nested system reset from recovering, in order to simplify
code that is called in crash/debug path. This allows a system reset
interrupt to just use the base stack pointer.

Keep an in_nmi nesting counter, similar to the in_mce counter. Consider
the interrupt non-recoverable if it is taken inside another system
reset.

Interrupt nesting could be allowed similarly to MCE, but system reset
is a special case that's not for normal operation, so simplicity wins
until there is requirement for nested system reset interrupts.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/64s: Fix system reset vs general interrupt reentrancy
Nicholas Piggin [Mon, 19 Dec 2016 18:30:04 +0000 (04:30 +1000)]
powerpc/64s: Fix system reset vs general interrupt reentrancy

The system reset interrupt can occur when MSR_EE=0, and it currently
uses the PACA_EXGEN save area.

Some PACA_EXGEN interrupts have a window where MSR_RI=1 and MSR_EE=0
when the save area is still in use. A system reset interrupt in this
window can lead to undetected corruption when the save area gets
overwritten.

This patch introduces PACA_EXNMI save area for system reset exceptions,
which closes this corruption window. It's also helpful to retain the
EXGEN state for debugging situations, even if not considering the
recoverability aspect.

This patch also moves the PACA_EXMC area down to a less frequently used
part of the paca with the new save area.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/64s: Exception macro for stack frame and initial register save
Nicholas Piggin [Mon, 19 Dec 2016 18:30:03 +0000 (04:30 +1000)]
powerpc/64s: Exception macro for stack frame and initial register save

This code is common to a few exceptions, and another user will be added.
This causes a trivial change to generated code:

-     604: std     r9,416(r1)
-     608: mfspr   r11,314
-     60c: std     r11,368(r1)
-     610: mfspr   r12,315
+     604: mfspr   r11,314
+     608: mfspr   r12,315
+     60c: std     r9,416(r1)
+     610: std     r11,368(r1)

machine_check_powernv_early could also use this, but that requires
non-trivial changes to generated code, so that's for another patch.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/64s: Add exception macro that does not enable RI
Nicholas Piggin [Mon, 19 Dec 2016 18:30:02 +0000 (04:30 +1000)]
powerpc/64s: Add exception macro that does not enable RI

Subsequent patches will add more non-RI variant exceptions, so
create a macro for it rather than open-code it.

This does not change generated instructions.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/cbe: Do not process external or decrementer interrupts from sreset
Nicholas Piggin [Mon, 20 Mar 2017 06:31:49 +0000 (16:31 +1000)]
powerpc/cbe: Do not process external or decrementer interrupts from sreset

Cell will wake from low power state at the system reset interrupt,
with the event encoded in SRR1, rather than waking at the interrupt
vector that corresponds to that event.

The system reset handler for this platform decodes the SRR1 event reason
and calls the interrupt handler to process it directly from the system
reset handler.

A subsequent change will treat the system reset interrupt as a Linux NMI
with its own per-CPU stack, and this will no longer work. Remove the
external and decrementer handlers from the system reset handler.

- The external exception remains raised and will fire again at the
  EE interrupt vector when system reset returns.

- The decrementer is set to 1 so it will be raised again and fire when
  the system reset returns.

It is possible to branch to an idle handler from the system reset
interrupt (like POWER does), then restore a normal stack and restore
this optimisation. But simplicity wins for now.

Tested-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/pasemi: Do not process external or decrementer interrupts from sreset
Nicholas Piggin [Mon, 20 Mar 2017 06:31:48 +0000 (16:31 +1000)]
powerpc/pasemi: Do not process external or decrementer interrupts from sreset

PA Semi will wake from low power state at the system reset interrupt,
with the event encoded in SRR1, rather than waking at the interrupt
vector that corresponds to that event.

The system reset handler for this platform decodes the SRR1 event reason
and calls the interrupt handler to process it directly from the system
reset handler.

A subsequent change will treat the system reset interrupt as a Linux NMI
with its own per-CPU stack, and this will no longer work. Remove the
external and decrementer handlers from the system reset handler.

- The external exception remains raised and will fire again at the
  EE interrupt vector when system reset returns.

- The decrementer is set to 1 so it will be raised again and fire when
  the system reset returns.

It is possible to branch to an idle handler from the system reset
interrupt (like POWER does), then restore a normal stack and restore
this optimisation. But simplicity wins for now.

Tested-by: Christian Zigotzky <chzigotzky@xenosoft.de>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago Merge branch 'topic/ppc-kvm' into next
Michael Ellerman [Fri, 28 Apr 2017 10:19:37 +0000 (20:19 +1000)]
Merge branch 'topic/ppc-kvm' into next

Merge the topic branch we were sharing with kvm-ppc, Paul has also
merged it.

7 years ago powerpc/ftrace/64: Split further based on -mprofile-kernel
Naveen N. Rao [Tue, 25 Apr 2017 13:55:54 +0000 (19:25 +0530)]
powerpc/ftrace/64: Split further based on -mprofile-kernel

Split ftrace_64.S further retaining the core ftrace 64-bit aspects
in ftrace_64.S and moving ftrace_caller() and ftrace_graph_caller() into
separate files based on -mprofile-kernel. The livepatch routines are all
now contained within the mprofile file.

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc: Split ftrace bits into a separate file
Naveen N. Rao [Tue, 25 Apr 2017 13:55:53 +0000 (19:25 +0530)]
powerpc: Split ftrace bits into a separate file

entry_*.S now includes a lot more than just kernel entry/exit code. As a
first step at cleaning this up, let's split out the ftrace bits into
separate files. Also move all related tracing code into a new trace/
subdirectory.

No functional changes.

Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm: Rename table dump file name
Christophe Leroy [Tue, 18 Apr 2017 06:20:15 +0000 (08:20 +0200)]
powerpc/mm: Rename table dump file name

The page table dump debugfs file is named 'kernel_page_tables' on
all other architectures implementing it, while it is named
'kernel_pagetables' on powerpc. This patch renames it.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm: On PPC32, display 32 bits addresses in page table dump
Christophe Leroy [Thu, 13 Apr 2017 12:41:40 +0000 (14:41 +0200)]
powerpc/mm: On PPC32, display 32 bits addresses in page table dump

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm: Fix missing page attributes in page table dump
Christophe Leroy [Fri, 14 Apr 2017 05:45:16 +0000 (07:45 +0200)]
powerpc/mm: Fix missing page attributes in page table dump

On some targets, _PAGE_RW is 0 and it is _PAGE_RO which is used
instead. _PAGE_SHARED is also missing.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm: Fix page table dump build on PPC32
Christophe Leroy [Tue, 18 Apr 2017 06:20:13 +0000 (08:20 +0200)]
powerpc/mm: Fix page table dump build on PPC32

On PPC32 (eg. mpc885_ads_defconfig), page table dump compilation fails as
follows. This is because the memory layout is slightly different on PPC32. This
patch adapts it.

  arch/powerpc/mm/dump_linuxpagetables.c: In function 'walk_pagetables':
  arch/powerpc/mm/dump_linuxpagetables.c:369:10: error: 'KERN_VIRT_START' undeclared (first use in this function)
  ...

Fixes: 8eb07b187000d ("powerpc/mm: Dump linux pagetables")
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm/radix: Optimise tlbiel flush all case
Aneesh Kumar K.V [Wed, 26 Apr 2017 11:38:17 +0000 (21:38 +1000)]
powerpc/mm/radix: Optimise tlbiel flush all case

_tlbiel_pid() is called with a ric (Radix Invalidation Control) argument of
either RIC_FLUSH_TLB or RIC_FLUSH_ALL.

RIC_FLUSH_ALL says to invalidate the entire TLB and the Page Walk Cache (PWC).

To flush the whole TLB, we have to iterate over each set (congruence class) of
the TLB. Currently we do that and pass RIC_FLUSH_ALL each time. That is not
incorrect but it means we flush the PWC 128 times, when once would suffice.

Fix it by doing the first flush with the ric value we're passed, and then if it
was RIC_FLUSH_ALL, we downgrade it to RIC_FLUSH_TLB, because we know we have
just flushed the PWC and don't need to do it again.
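
Roughly, the set loop in _tlbiel_pid() becomes (a sketch following the
radix TLB flush code of this era):

  for (set = 0; set < POWER9_TLB_SETS_RADIX; set++) {
          __tlbiel_pid(pid, set, ric);
          /* the first iteration already flushed the PWC */
          if (ric == RIC_FLUSH_ALL)
                  ric = RIC_FLUSH_TLB;
  }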

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
[mpe: Split out of combined patch, tweak logic, rewrite change log]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm/radix: Optimise Page Walk Cache flush
Aneesh Kumar K.V [Wed, 26 Apr 2017 11:38:30 +0000 (21:38 +1000)]
powerpc/mm/radix: Optimise Page Walk Cache flush

Currently we implement flushing of the page walk cache (PWC) by calling
_tlbiel_pid() with a RIC (Radix Invalidation Control) value of 1 which says to
only flush the PWC.

But _tlbiel_pid() loops over each set (congruence class) of the TLB, which is
not necessary when we're just flushing the PWC.

In fact the set argument is ignored for a PWC flush, so essentially we're just
flushing the PWC 127 extra times for no benefit.

Fix it by adding tlbiel_pwc() which just does a single flush of the PWC.
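
A sketch of the new helper; the ptesync bracketing mirrors the existing
flush helpers, though the exact barriers may differ:

  static inline void tlbiel_pwc(unsigned long pid)
  {
          asm volatile("ptesync" : : : "memory");
          /* the set number is ignored for a PWC-only flush */
          __tlbiel_pid(pid, 0, RIC_FLUSH_PWC);
          asm volatile("ptesync" : : : "memory");
  }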

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
[mpe: Split out of combined patch, drop _ in name, rewrite change log]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/powernv: Fix oops on P9 DD1 in cause_ipi()
Michael Ellerman [Wed, 26 Apr 2017 07:32:38 +0000 (17:32 +1000)]
powerpc/powernv: Fix oops on P9 DD1 in cause_ipi()

Recently we merged the native xive support for Power9, and then separately some
reworks for doorbell IPI support. In isolation both series were OK, but the
merged result had a bug in one case.

On P9 DD1 we use pnv_p9_dd1_cause_ipi() which tries to use doorbells, and then
falls back to the interrupt controller. However the fallback is implemented by
calling icp_ops->cause_ipi. But now that xive support is merged we might be
using xive, in which case icp_ops is not initialised; it's a xics-specific
structure. This leads to an oops such as:

  Unable to handle kernel paging request for data at address 0x00000028
  Oops: Kernel access of bad area, sig: 11 [#1]
  NIP pnv_p9_dd1_cause_ipi+0x74/0xe0
  LR smp_muxed_ipi_message_pass+0x54/0x70

To fix it, rather than using icp_ops which might be NULL, have both xics and
xive set smp_ops->cause_ipi, and then in the powernv code we save that as
ic_cause_ipi before overriding smp_ops->cause_ipi. For paranoia add a WARN_ON()
to check if somehow smp_ops->cause_ipi is NULL.
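
A sketch of the resulting fallback path (simplified; the sibling-mask
check done in the real code is omitted):

  static void (*ic_cause_ipi)(int cpu);   /* saved from smp_ops->cause_ipi */

  static void pnv_p9_dd1_cause_ipi(int cpu)
  {
          if (doorbell_try_core_ipi(cpu))
                  return;

          if (!WARN_ON(!ic_cause_ipi))
                  ic_cause_ipi(cpu);
  }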

Fixes: b866cc2199d6 ("powerpc: Change the doorbell IPI calling convention")
Tested-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/powernv: Fix missing attr initialisation in opal_export_attrs()
Michael Ellerman [Wed, 26 Apr 2017 05:57:19 +0000 (15:57 +1000)]
powerpc/powernv: Fix missing attr initialisation in opal_export_attrs()

In opal_export_attrs() we dynamically allocate some bin_attributes. They're
allocated with kmalloc() and although we initialise most of the fields, we don't
initialise write() or mmap(), and in particular we don't initialise the lockdep
related fields in the embedded struct attribute.

This leads to a lockdep warning at boot:

  BUG: key c0000000f11906d8 not in .data!
  WARNING: CPU: 0 PID: 1 at ../kernel/locking/lockdep.c:3136 lockdep_init_map+0x28c/0x2a0
  ...
  Call Trace:
    lockdep_init_map+0x288/0x2a0 (unreliable)
    __kernfs_create_file+0x8c/0x170
    sysfs_add_file_mode_ns+0xc8/0x240
    __machine_initcall_powernv_opal_init+0x60c/0x684
    do_one_initcall+0x60/0x1c0
    kernel_init_freeable+0x2f4/0x3d4
    kernel_init+0x24/0x160
    ret_from_kernel_thread+0x5c/0xb0

Fix it by kzalloc'ing the attr, which fixes the uninitialised write() and
mmap(), and calling sysfs_bin_attr_init() on it to initialise the lockdep
fields.
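
The fix amounts to (sketch):

  attr = kzalloc(sizeof(*attr), GFP_KERNEL);      /* was kmalloc() */
  if (!attr)
          continue;
  sysfs_bin_attr_init(attr);      /* sets up the lockdep key */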

Fixes: 11fe909d2362 ("powerpc/powernv: Add OPAL exports attributes to sysfs")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm: Fix possible out-of-bounds shift in arch_mmap_rnd()
Michael Ellerman [Tue, 25 Apr 2017 10:49:24 +0000 (20:49 +1000)]
powerpc/mm: Fix possible out-of-bounds shift in arch_mmap_rnd()

The recent patch to add runtime configuration of the ASLR limits added a bug in
arch_mmap_rnd() where we may shift an integer (32-bits) by up to 33 bits,
leading to undefined behaviour.

In practice it exhibits as every process segfaulting instantly, presumably
because the rnd value hasn't been restricted by the modulus at all. We didn't
notice because it only happens under certain kernel configurations, and only
if the number of bits is actually set to a large value.

Fix it by switching to unsigned long.
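
The problematic expression and the fix look roughly like:

  /* before: "1 << shift" is a 32-bit int, undefined once shift reaches 32 */
  rnd = get_random_long() % (1 << shift);

  /* after: do the shift in 64 bits */
  rnd = get_random_long() % (1UL << shift);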

Fixes: 9fea59bd7ca5 ("powerpc/mm: Add support for runtime configuration of ASLR limits")
Reported-by: Balbir Singh <bsingharora@gmail.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm: Ensure IRQs are off in switch_mm()
David Gibson [Wed, 19 Apr 2017 06:38:26 +0000 (16:38 +1000)]
powerpc/mm: Ensure IRQs are off in switch_mm()

powerpc expects IRQs to already be (soft) disabled when switch_mm() is
called, as made clear in the commit message of 9c1e105238c4 ("powerpc: Allow
perf_counters to access user memory at interrupt time").

Aside from any race conditions that might exist between switch_mm() and an IRQ,
there is also an unconditional hard_irq_disable() in switch_slb(). If that isn't
followed at some point by an IRQ enable then interrupts will remain disabled
until we return to userspace.

It is true that when switch_mm() is called from the scheduler IRQs are off, but
not when it's called by use_mm(). Looking closer we see that last year in commit
f98db6013c55 ("sched/core: Add switch_mm_irqs_off() and use it in the scheduler")
this was made more explicit by the addition of switch_mm_irqs_off() which is now
called by the scheduler, vs switch_mm() which is used by use_mm().

Arguably it is a bug in use_mm() to call switch_mm() in a different context than
it expects, but fixing that will take time.

This was discovered recently when vhost started throwing warnings such as:

  BUG: sleeping function called from invalid context at kernel/mutex.c:578
  in_atomic(): 0, irqs_disabled(): 1, pid: 10768, name: vhost-10760
  no locks held by vhost-10760/10768.
  irq event stamp: 10
  hardirqs last  enabled at (9):  _raw_spin_unlock_irq+0x40/0x80
  hardirqs last disabled at (10): switch_slb+0x2e4/0x490
  softirqs last  enabled at (0):  copy_process+0x5e8/0x1260
  softirqs last disabled at (0):  (null)
  Call Trace:
    show_stack+0x88/0x390 (unreliable)
    dump_stack+0x30/0x44
    __might_sleep+0x1c4/0x2d0
    mutex_lock_nested+0x74/0x5c0
    cgroup_attach_task_all+0x5c/0x180
    vhost_attach_cgroups_work+0x58/0x80 [vhost]
    vhost_worker+0x24c/0x3d0 [vhost]
    kthread+0xec/0x100
    ret_from_kernel_thread+0x5c/0xd4

Prior to commit 04b96e5528ca ("vhost: lockless enqueuing") (Aug 2016) the
vhost_worker() would do a spin_unlock_irq() not long after calling use_mm(),
which had the effect of reenabling IRQs. Since that commit removed the locking
in vhost_worker() the body of the vhost_worker() loop now runs with interrupts
off causing the warnings.

This patch addresses the problem by making the powerpc code mirror the x86 code,
ie. we disable interrupts in switch_mm(), and optimise the scheduler case by
defining switch_mm_irqs_off().
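
A sketch of the resulting arrangement, mirroring the x86 pattern (the
body of the old switch_mm() moves into switch_mm_irqs_off()):

  static inline void switch_mm(struct mm_struct *prev,
                               struct mm_struct *next,
                               struct task_struct *tsk)
  {
          unsigned long flags;

          local_irq_save(flags);
          switch_mm_irqs_off(prev, next, tsk);
          local_irq_restore(flags);
  }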

Cc: stable@vger.kernel.org # v4.7+
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[mpe: Flesh out/rewrite change log, add stable]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/sysfs: Fix reference leak of cpu device_nodes present at boot
Tyrel Datwyler [Tue, 18 Apr 2017 00:24:39 +0000 (20:24 -0400)]
powerpc/sysfs: Fix reference leak of cpu device_nodes present at boot

For CPUs present at boot each logical CPU acquires a reference to the
associated device node of the core. This happens in register_cpu() which
is called by topology_init(). The result of this is that we end up with
a reference held by each thread of the core. However, these references
are never freed if the CPU core is DLPAR removed.

This patch fixes the reference leaks by acquiring and releasing the references
in the CPU hotplug callbacks un/register_cpu_online(). With this patch,
symmetric reference counting is observed for both CPUs present at boot and
those DLPAR-added after boot.

Fixes: f86e4718f24b ("driver/core: cpu: initialize of_node in cpu's device struture")
Cc: stable@vger.kernel.org # v3.12+
Signed-off-by: Tyrel Datwyler <tyreld@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/pseries: Fix of_node_put() underflow during DLPAR remove
Tyrel Datwyler [Tue, 18 Apr 2017 00:21:40 +0000 (20:21 -0400)]
powerpc/pseries: Fix of_node_put() underflow during DLPAR remove

Historically struct device_node references were tracked using a kref embedded as
a struct field. Commit 75b57ecf9d1d ("of: Make device nodes kobjects so they
show up in sysfs") (Mar 2014) refactored device_nodes to be kobjects such that
the device tree could be more simply exposed to userspace using sysfs.

Commit 0829f6d1f69e ("of: device_node kobject lifecycle fixes") (Mar 2014)
followed up these changes to better control the kobject lifecycle, and in
particular the reference counting via of_node_get(), of_node_put(), and
of_node_init().

A result of this second commit was that it introduced an of_node_put() call when
a dynamic node is detached, in of_node_remove(), that removes the initial kobj
reference created by of_node_init().

Traditionally, as the original dynamic device node user, the pseries code had
assumed responsibility for releasing this final reference in its platform
specific DLPAR detach code.

This patch fixes a refcount underflow introduced by commit 0829f6d1f69e, and
recently exposed by the upstreaming of the refcount API.

With this patch, messages like the following are no longer seen in the kernel
log following DLPAR remove operations of CPUs and PCI devices.

  rpadlpar_io: slot PHB 72 removed
  refcount_t: underflow; use-after-free.
  ------------[ cut here ]------------
  WARNING: CPU: 5 PID: 3335 at lib/refcount.c:128 refcount_sub_and_test+0xf4/0x110

Fixes: 0829f6d1f69e ("of: device_node kobject lifecycle fixes")
Cc: stable@vger.kernel.org # v3.15+
Signed-off-by: Tyrel Datwyler <tyreld@linux.vnet.ibm.com>
[mpe: Make change log commit references more verbose]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/xmon: Deindent the SLB dumping logic
Michael Ellerman [Mon, 24 Apr 2017 00:35:14 +0000 (10:35 +1000)]
powerpc/xmon: Deindent the SLB dumping logic

Currently the code that dumps SLB entries uses a double-nested if. This
means the actual dumping logic is a bit squashed. Deindent it by using
continue.
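
The transformation is the standard one; the condition shown here is
illustrative:

  /* before */
  for (i = 0; i < mmu_slb_size; i++) {
          if (esid || vsid) {
                  /* decode and print the entry */
          }
  }

  /* after */
  for (i = 0; i < mmu_slb_size; i++) {
          if (!esid && !vsid)
                  continue;
          /* decode and print the entry */
  }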

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Rashmica Gupta <rashmica.g@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago Merge branch 'topic/kprobes' into next
Michael Ellerman [Mon, 24 Apr 2017 14:24:04 +0000 (00:24 +1000)]
Merge branch 'topic/kprobes' into next

Although most of these kprobes patches are powerpc specific, there's a couple
that touch generic code (with Acks). At the moment there's one conflict with
acme's tree, but it's not too bad. Still just in case some other conflicts show
up, we've put these in a topic branch so another tree could merge some or all of
it if necessary.

7 years ago powerpc/kprobes: Prefer ftrace when probing function entry
Naveen N. Rao [Wed, 19 Apr 2017 12:52:28 +0000 (18:22 +0530)]
powerpc/kprobes: Prefer ftrace when probing function entry

KPROBES_ON_FTRACE avoids much of the overhead of regular kprobes as it
eliminates the need for a trap, as well as the need to emulate or single-step
instructions.

Though OPTPROBES provides us with similar performance, we have limited
optprobes trampoline slots. As such, when asked to probe at a function
entry, default to using the ftrace infrastructure.

With:
  # cd /sys/kernel/debug/tracing
  # echo 'p _do_fork' > kprobe_events

before patch:
  # cat ../kprobes/list
  c0000000000daf08  k  _do_fork+0x8    [DISABLED]
  c000000000044fc0  k  kretprobe_trampoline+0x0    [OPTIMIZED]

and after patch:
  # cat ../kprobes/list
  c0000000000d074c  k  _do_fork+0xc    [DISABLED][FTRACE]
  c0000000000412b0  k  kretprobe_trampoline+0x0    [OPTIMIZED]

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc: Introduce a new helper to obtain function entry points
Naveen N. Rao [Wed, 19 Apr 2017 12:52:27 +0000 (18:22 +0530)]
powerpc: Introduce a new helper to obtain function entry points

kprobe_lookup_name() is specific to the kprobe subsystem and may not always
return the function entry point (in a subsequent patch for KPROBES_ON_FTRACE).
For looking up function entry points, introduce a separate helper and use it
in optprobes.c

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/kprobes: Add support for KPROBES_ON_FTRACE
Naveen N. Rao [Wed, 19 Apr 2017 12:52:26 +0000 (18:22 +0530)]
powerpc/kprobes: Add support for KPROBES_ON_FTRACE

Allow kprobes to be placed on ftrace _mcount() call sites. This optimization
avoids the use of a trap, by riding on ftrace infrastructure.

This depends on HAVE_DYNAMIC_FTRACE_WITH_REGS which depends on MPROFILE_KERNEL,
which is only currently enabled on powerpc64le with newer toolchains.

Based on the x86 code by Masami.

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/ftrace: Restore LR from pt_regs
Naveen N. Rao [Wed, 19 Apr 2017 12:52:24 +0000 (18:22 +0530)]
powerpc/ftrace: Restore LR from pt_regs

Pass the real LR to the ftrace handler. This is needed for KPROBES_ON_FTRACE for
the pre handlers.

Also, with KPROBES_ON_FTRACE, the link register may be updated by the pre
handlers or by a registered kretprobe. Honor the updated LR by restoring it
from pt_regs, rather than from the stack save area.

Live patch and function graph continue to work fine with this change.

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/kprobes: Blacklist common exception handlers
Naveen N. Rao [Wed, 19 Apr 2017 15:29:52 +0000 (20:59 +0530)]
powerpc/kprobes: Blacklist common exception handlers

Blacklist all the exception common/OOL handlers as the kernel stack is not yet
set up, which means we can't take a trap at this point.

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/kprobes: Blacklist exception handlers
Naveen N. Rao [Wed, 19 Apr 2017 15:29:51 +0000 (20:59 +0530)]
powerpc/kprobes: Blacklist exception handlers

Introduce __head_end to mark end of the early fixed sections and use it to
blacklist all exception handlers from kprobes.

mpe: We do not need to do anything special for relocatable kernels, where the
exception vectors are split from the main kernel, as the split vectors are
already excluded by the check for kernel_text_address().

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
[mpe: Move __head_end outside #ifdef 64-bit to unbreak the 32-bit build]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/kprobes: Convert __kprobes to NOKPROBE_SYMBOL()
Naveen N. Rao [Wed, 12 Apr 2017 11:18:51 +0000 (16:48 +0530)]
powerpc/kprobes: Convert __kprobes to NOKPROBE_SYMBOL()

Along similar lines as commit 9326638cbee2 ("kprobes, x86: Use NOKPROBE_SYMBOL()
instead of __kprobes annotation"), convert __kprobes annotation to either
NOKPROBE_SYMBOL() or nokprobe_inline. The latter forces inlining, in which case
the caller needs to be added to NOKPROBE_SYMBOL().
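
The conversion pattern, in brief:

  /* before */
  static int __kprobes foo(struct pt_regs *regs) { ... }

  /* after */
  static int foo(struct pt_regs *regs) { ... }
  NOKPROBE_SYMBOL(foo);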

Also:
 - blacklist arch_deref_entry_point(), and
 - convert a few regular inlines to nokprobe_inline in lib/sstep.c

A key benefit is the ability to detect such symbols as being
blacklisted. Before this patch:

  $ cat /sys/kernel/debug/kprobes/blacklist | grep read_mem
  $ perf probe read_mem
  Failed to write event: Invalid argument
    Error: Failed to add events.
  $ dmesg | tail -1
  [ 3736.112815] Could not insert probe at _text+10014968: -22

After patch:
  $ cat /sys/kernel/debug/kprobes/blacklist | grep read_mem
  0xc000000000072b50-0xc000000000072d20 read_mem
  $ perf probe read_mem
  read_mem is blacklisted function, skip it.
  Added new events:
    (null):(null)        (on read_mem)
    probe:read_mem       (on read_mem)

  You can now use it in all perf tools, such as:

  perf record -e probe:read_mem -aR sleep 1

  $ grep " read_mem" /proc/kallsyms
  c000000000072b50 t read_mem
  c0000000005f3b40 t read_mem
  $ cat /sys/kernel/debug/kprobes/list
  c0000000005f3b48  k  read_mem+0x8    [DISABLED]

Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
[mpe: Minor change log formatting, fix up some conflicts]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/ftrace: Move stack setup and teardown code into ftrace_graph_caller()
Naveen N. Rao [Wed, 19 Apr 2017 12:52:23 +0000 (18:22 +0530)]
powerpc/ftrace: Move stack setup and teardown code into ftrace_graph_caller()

Move the stack setup and teardown code into ftrace_graph_caller(). This way, we
don't incur the cost of setting it up unless function graph is enabled for this
function.

Also, remove the extraneous LR restore code after the function graph stub. LR
has previously been restored and neither livepatch_handler() nor
ftrace_graph_caller() return back here.

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
[mpe: Drop bad change to non-mprofile-kernel version of ftrace_graph_caller]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/kprobes: Remove duplicate saving of MSR
Naveen N. Rao [Wed, 19 Apr 2017 12:51:06 +0000 (18:21 +0530)]
powerpc/kprobes: Remove duplicate saving of MSR

set_current_kprobe() already saves regs->msr into kprobe_saved_msr. Remove the
redundant save.

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/64s: Simplify POWER9 DD1 idle workaround code
Nicholas Piggin [Wed, 19 Apr 2017 13:05:51 +0000 (23:05 +1000)]
powerpc/64s: Simplify POWER9 DD1 idle workaround code

The idle workaround does not need to load PACATOC, and it does not
need to be called within a nested function that requires LR to be
saved.

Load the PACATOC at entry to the idle wakeup. It does not matter which
PACA this comes from, so it's okay to call before the workaround. Then
apply the workaround to get the right PACA.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/64s: Idle POWER8 avoid full state loss recovery where possible
Nicholas Piggin [Wed, 19 Apr 2017 13:05:50 +0000 (23:05 +1000)]
powerpc/64s: Idle POWER8 avoid full state loss recovery where possible

If not all threads were in winkle, full state loss recovery is not
necessary and can be avoided. A previous patch removed this optimisation
due to some complexity with the implementation. Re-implement it by
counting the number of threads in winkle with the per-core idle state.
Only restore full state loss if all threads were in winkle.

This has a small window of false positives right before threads execute
winkle and just after they wake up, when the winkle count does not
reflect the true number of threads in winkle. This is not a significant
problem in comparison with even the minimum winkle duration. For
correctness, a false positive is not a problem (only false negatives
would be).

Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/64s: Idle do not hold reservation longer than required
Nicholas Piggin [Wed, 19 Apr 2017 13:05:49 +0000 (23:05 +1000)]
powerpc/64s: Idle do not hold reservation longer than required

When taking the core idle state lock, grab it immediately like a regular
lock, rather than adding more tests in there. Holding the lock keeps it
stable, so there is no need to do that while holding the reservation.

Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/64s: Expand core idle state bits
Nicholas Piggin [Wed, 19 Apr 2017 13:05:48 +0000 (23:05 +1000)]
powerpc/64s: Expand core idle state bits

In preparation for adding more bits to the core idle state word, move
the lock bit up, and unlock by flipping the lock bit rather than masking
off all but the thread bits.

Add branch hints for atomic operations while we're here.

Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/64s: Fix POWER9 machine check handler from stop state
Nicholas Piggin [Wed, 19 Apr 2017 13:05:47 +0000 (23:05 +1000)]
powerpc/64s: Fix POWER9 machine check handler from stop state

The ISA specifies power save wakeup due to a machine check exception can
cause a machine check interrupt (rather than the usual system reset
interrupt).

The machine check handler copes with this by doing low level machine
check recovery without restoring full state from idle, then queues up a
machine check event for logging, then directly executes the same idle
instruction it woke from. This minimises the work done before recovery
is performed.

The problem is that it requires machine specific instructions and
knowledge of the book3s idle code. Currently it only has code to handle
POWER8 idle, so POWER9 crashes when trying to execute the P8 idle
instructions which don't exist in ISAv3.0B.

cpu 0x0: Vector: e40 (Emulation Assist) at [c0000000008f3810]
    pc: c000000000008380: machine_check_handle_early+0x130/0x2f0
    lr: c00000000053a098: stop_loop+0x68/0xd0
    sp: c0000000008f3a90
   msr: 9000000000081001
  current = 0xc0000000008a1080
  paca    = 0xc00000000ffd0000   softe: 0        irq_happened: 0x01
    pid   = 0, comm = swapper/0

Instead of going to sleep after recovery, do the usual idle wakeup and
state restoration by calling into the normal idle wakeup path, reusing
the code that already exists there.

Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Reviewed-by: Mahesh J Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/64s: Use alternative feature patching
Nicholas Piggin [Wed, 19 Apr 2017 13:05:46 +0000 (23:05 +1000)]
powerpc/64s: Use alternative feature patching

This reduces the number of nops for POWER8.

Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/64s: Stop using bit in HSPRG0 to test winkle
Nicholas Piggin [Wed, 19 Apr 2017 13:05:45 +0000 (23:05 +1000)]
powerpc/64s: Stop using bit in HSPRG0 to test winkle

The POWER8 idle code has a neat trick of programming the power on engine
to restore a low bit into HSPRG0, so idle wakeup code can test and see
if it has been programmed this way and therefore lost all state. Restore
time can be reduced if winkle has not been reached.

However this messes with our r13 PACA pointer, and requires HSPRG0 to be
written to. It also optimizes the slowest and most uncommon case at the
expense of another SPR write in the common nap state wakeup.

Remove this complexity and assume winkle sleeps always require a state
restore. This speedup could be made entirely contained within the winkle
idle code by counting per-core winkles and setting a thread bitmap when
all have gone to winkle.

Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/64s: Move remaining system reset idle code into idle_book3s.S
Nicholas Piggin [Wed, 19 Apr 2017 13:05:44 +0000 (23:05 +1000)]
powerpc/64s: Move remaining system reset idle code into idle_book3s.S

No functional change.

Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/64s: Remove unnecessary relocation branch from idle handler
Nicholas Piggin [Wed, 19 Apr 2017 13:05:43 +0000 (23:05 +1000)]
powerpc/64s: Remove unnecessary relocation branch from idle handler

The system reset idle handler system_reset_idle_common is relocated, so
relocation is not required to branch to kvm_start_guest. The superfluous
relocation does not result in incorrect code, but it does not compile
outside of exception-64s.S (with fixed section definitions).

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm: Add support for runtime configuration of ASLR limits
Michael Ellerman [Thu, 20 Apr 2017 14:36:20 +0000 (00:36 +1000)]
powerpc/mm: Add support for runtime configuration of ASLR limits

Add powerpc support for mmap_rnd_bits and mmap_rnd_compat_bits, which are two
sysctls that allow a user to configure the number of bits of randomness used for
ASLR.

Because of the way the Kconfig for ARCH_MMAP_RND_BITS is defined, we have to
construct at least the MIN value in Kconfig, vs in a header which would be more
natural. Given that, we just go ahead and do it all in Kconfig.

At least according to the code (the documentation makes no mention of it), the
value is defined as the number of bits of randomisation *of the page*, not the
address. This makes some sense, with larger page sizes more of the low bits are
forced to zero, which would reduce the randomisation if we didn't take the
PAGE_SIZE into account. However it does mean the min/max values have to change
depending on the PAGE_SIZE in order to actually limit the amount of address
space consumed by the randomisation.

The result of that is that we have to define the default values based on both
32-bit vs 64-bit, but also the configured PAGE_SIZE. Furthermore now that we
have 128TB address space support on Book3S, we also have to take that into
account.

Finally we can wire up the value in arch_mmap_rnd().
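
A sketch of that wiring (simplified; for instance, the CONFIG_COMPAT
guard around the 32-bit case is omitted):

  unsigned long arch_mmap_rnd(void)
  {
          unsigned long shift, rnd;

          shift = mmap_rnd_bits;
          if (is_32bit_task())
                  shift = mmap_rnd_compat_bits;

          /* shift counts randomised bits of the page number,
           * so scale the result up by PAGE_SHIFT */
          rnd = get_random_long() % (1UL << shift);

          return rnd << PAGE_SHIFT;
  }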

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com>
Tested-by: Bhupesh Sharma <bhsharma@redhat.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
7 years ago powerpc/mm: Wire up ioremap_cache()
Oliver O'Halloran [Tue, 11 Apr 2017 17:42:31 +0000 (03:42 +1000)]
powerpc/mm: Wire up ioremap_cache()

The default implementation of ioremap_cache() is aliased to ioremap().
On powerpc ioremap() creates cache-inhibited mappings by default, which
is almost certainly not what you want.
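
One way to wire it up, assuming powerpc's ioremap_prot() (a sketch, not
necessarily the exact form of the patch):

  #define ioremap_cache(addr, size) \
          ioremap_prot((addr), (size), pgprot_val(PAGE_KERNEL))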

Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/kprobes: Emulate instructions on kprobe handler re-entry
Naveen N. Rao [Wed, 19 Apr 2017 12:51:05 +0000 (18:21 +0530)]
powerpc/kprobes: Emulate instructions on kprobe handler re-entry

On kprobe handler re-entry, try to emulate the instruction rather than
always single-stepping.

Acked-by: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/kprobes: Factor out code to emulate instruction into a helper
Naveen N. Rao [Wed, 19 Apr 2017 12:51:04 +0000 (18:21 +0530)]
powerpc/kprobes: Factor out code to emulate instruction into a helper

Factor out the code to emulate an instruction into a try_to_emulate()
helper function. This makes no functional changes.

Acked-by: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/kretprobes: Override default function entry offset
Naveen N. Rao [Wed, 8 Mar 2017 08:26:07 +0000 (13:56 +0530)]
powerpc/kretprobes: Override default function entry offset

With ABIv2, we offset 8 bytes into a function to get at the local entry
point.

mpe: NB this function is currently not called; the change to generic code to
call it is being merged via the tip tree.

Acked-by: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/kprobes: Fix handling of function offsets on ABIv2
Naveen N. Rao [Wed, 19 Apr 2017 12:51:01 +0000 (18:21 +0530)]
powerpc/kprobes: Fix handling of function offsets on ABIv2

commit 239aeba76409 ("perf powerpc: Fix kprobe and kretprobe handling with
kallsyms on ppc64le") changed how we use the offset field in struct kprobe on
ABIv2. perf now offsets from the global entry point if an offset is specified
and otherwise chooses the local entry point.

Fix the same in kernel for kprobe API users. We do this by extending
kprobe_lookup_name() to accept an additional parameter to indicate the offset
specified with the kprobe registration. If offset is 0, we return the local
function entry and return the global entry point otherwise.

With:
  # cd /sys/kernel/debug/tracing/
  # echo "p _do_fork" >> kprobe_events
  # echo "p _do_fork+0x10" >> kprobe_events

before this patch:
  # cat ../kprobes/list
  c0000000000d0748  k  _do_fork+0x8    [DISABLED]
  c0000000000d0758  k  _do_fork+0x18    [DISABLED]
  c0000000000412b0  k  kretprobe_trampoline+0x0    [OPTIMIZED]

and after:
  # cat ../kprobes/list
  c0000000000d04c8  k  _do_fork+0x8    [DISABLED]
  c0000000000d04d0  k  _do_fork+0x10    [DISABLED]
  c0000000000412b0  k  kretprobe_trampoline+0x0    [OPTIMIZED]

Acked-by: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago kprobes: Convert kprobe_lookup_name() to a function
Naveen N. Rao [Wed, 19 Apr 2017 12:51:00 +0000 (18:21 +0530)]
kprobes: Convert kprobe_lookup_name() to a function

The macro is now pretty long and ugly on powerpc. In light of further
changes needed here, convert it to a __weak variant to be overridden with a
nicer looking function.
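
The generic fallback then looks roughly like:

  kprobe_opcode_t * __weak kprobe_lookup_name(const char *name)
  {
          return (kprobe_opcode_t *)kallsyms_lookup_name(name);
  }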

Suggested-by: Masami Hiramatsu <mhiramat@kernel.org>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago kprobes: Skip preparing optprobe if the probe is ftrace-based
Masami Hiramatsu [Wed, 19 Apr 2017 12:52:25 +0000 (18:22 +0530)]
kprobes: Skip preparing optprobe if the probe is ftrace-based

Skip preparing an optprobe if the probe is ftrace-based, since it must not
be optimized anyway (it is already optimized by ftrace).
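
The check can be sketched as an early exit in the optprobe preparation path
(placement illustrative):

  /* Sketch: an ftrace-based probe must not be turned into a
   * jump-optimized probe, ftrace already handles it at function entry. */
  static void try_to_optimize_kprobe(struct kprobe *p)
  {
          if (kprobe_ftrace(p))
                  return;
          /* ... proceed with regular optprobe preparation ... */
  }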

Tested-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years agopowerpc/64s: Use relon prolog for EXC_VIRT_OOL_MASKABLE_HV handlers
Nicholas Piggin [Thu, 13 Apr 2017 09:45:48 +0000 (19:45 +1000)]
powerpc/64s: Use relon prolog for EXC_VIRT_OOL_MASKABLE_HV handlers

Hypervisor Virtualization and Directed Hypervisor Doorbell interrupt handlers
use the macro EXC_VIRT_OOL_MASKABLE_HV for their relocation-on handlers, which
calls MASKABLE_RELON_EXCEPTION_HV_OOL, which uses the *real mode* interrupt
prolog. This means we needlessly rfid from virtual mode to virtual mode.

On POWER8 this only affects doorbell IPIs. A context switch microbenchmark
between threads with snooze disabled (which causes IPIs) gets about 3%
faster, roughly 370 cycles. It should matter more on POWER9, which has
global doorbells and uses HVI for host interrupts.

Use the RELON variant instead to reduce overhead.

Fixes: 1707dd1613 ("powerpc: Save CFAR before branching in interrupt entry paths")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Fold some more detail into the change log]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years agopowerpc/xive: Fix missing check of rc != OPAL_BUSY
Michael Ellerman [Thu, 20 Apr 2017 04:43:19 +0000 (14:43 +1000)]
powerpc/xive: Fix missing check of rc != OPAL_BUSY

Dan Carpenter noticed that the code in __xive_native_disable_queue() has a for
loop with an unconditional break in the middle, which doesn't make a lot of
sense.

What the code is supposed to do is loop as long as OPAL says it's busy; if
we get any other return code, either success or failure, we should break out
of the loop.

So add the missing check.
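
The intended loop then looks something like this (sketch; the OPAL call and
its arguments are illustrative):

  /* Sketch: retry while OPAL reports busy; any other return code,
   * success or failure, terminates the loop. */
  for (;;) {
          rc = opal_xive_set_queue_info(vp_id, prio, 0, 0, 0);
          if (rc != OPAL_BUSY)
                  break;
          msleep(1);
  }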

Fixes: 243e25112d06 ("powerpc/xive: Native exploitation of the XIVE interrupt controller")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years agopowerpc/64s: Remove SAO feature from Power9 DD1
Nicholas Piggin [Wed, 19 Apr 2017 02:27:38 +0000 (12:27 +1000)]
powerpc/64s: Remove SAO feature from Power9 DD1

Power9 DD1 does not implement SAO. Although it's not widely used, its
presence or absence is visible to user space via arch_validate_prot(), so
it's moderately important that we get the value right.

Fixes: 7dccfbc325bb ("powerpc/book3s: Add a cpu table entry for different POWER9 revs")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years agopowerpc/64s: Remove ICSWX feature from Power9
Nicholas Piggin [Wed, 19 Apr 2017 02:27:37 +0000 (12:27 +1000)]
powerpc/64s: Remove ICSWX feature from Power9

Power9 does not implement the icswx instruction. This CPU feature is not
visible to userspace and is only used by the CONFIG_PPC_ICSWX code, which is
generally not enabled, and can only be triggered by other code using icswx,
which should not happen on Power9 systems in the first place. So the impact
should be minimal.

Fixes: c3ab300ea5 ("powerpc: Add POWER9 cputable entry")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years agopowerpc/perf: Add Power8 mem_access event to sysfs
Madhavan Srinivasan [Tue, 11 Apr 2017 01:51:10 +0000 (07:21 +0530)]
powerpc/perf: Add Power8 mem_access event to sysfs

Add a "mem_access" event to sysfs. This is not a raw event supported by the
Power8 PMU as-is; instead, it is formed based on the raw event encoding
specified in isa207-common.h.

The primary PMU event used here is PM_MRK_INST_CMPL, which tracks only
completed marked instructions.

Random sampling mode (MMCRA[SM]) with Random Instruction Sampling (RIS) is
enabled to mark the type of instructions.

With random sampling in RIS mode and the PM_MRK_INST_CMPL event, the
LDST/DATA_SRC fields in SIER identify the memory hierarchy level (eg: L1,
L2 etc) that satisfied a data-cache miss for a marked instruction.
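
Wiring such a synthesized event into sysfs can be sketched as follows (the
encoding value is illustrative):

  /* Sketch: mem_access is not a native Power8 PMU event, it is composed
   * from PM_MRK_INST_CMPL plus the random sampling mode bits defined in
   * isa207-common.h (the value below is illustrative). */
  #define MEM_ACCESS      0x10401e0

  GENERIC_EVENT_ATTR(mem_access, MEM_ACCESS);

  static struct attribute *power8_events_attr[] = {
          /* ... existing generic events ... */
          GENERIC_EVENT_PTR(MEM_ACCESS),
          NULL,
  };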

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years agopowerpc/perf: Support to export SIERs bit in Power9
Madhavan Srinivasan [Tue, 11 Apr 2017 01:51:09 +0000 (07:21 +0530)]
powerpc/perf: Support to export SIERs bit in Power9

Export SIER bits to userspace via the perf_mem_data_src and
perf_sample_data structs.

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years agopowerpc/perf: Support to export SIERs bit in Power8
Madhavan Srinivasan [Tue, 11 Apr 2017 01:51:08 +0000 (07:21 +0530)]
powerpc/perf: Support to export SIERs bit in Power8

Export SIER bits to userspace via the perf_mem_data_src and
perf_sample_data structs.

Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years agopowerpc/perf: Support to export MMCRA[TEC*] field to userspace
Madhavan Srinivasan [Tue, 11 Apr 2017 01:51:07 +0000 (07:21 +0530)]
powerpc/perf: Support to export MMCRA[TEC*] field to userspace

When the threshold feature is used with MMCRA[Threshold Event Counter Event],
MMCRA[Threshold Start Event] and MMCRA[Threshold End Event], the hardware
updates MMCRA[Threshold Event Counter Exponent] and MMCRA[Threshold Event
Counter Multiplier] with the corresponding threshold event count values.
Export MMCRA[TECX/TECM] to userspace in the 'weight' field of
struct perf_sample_data.
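
Deriving the weight from those MMCRA fields can be sketched as (the
field-extraction macro names are assumed):

  /* Sketch: reconstruct the threshold event count from the exponent
   * (TECX) and multiplier (TECM) fields of MMCRA. */
  void isa207_get_mem_weight(u64 *weight)
  {
          u64 mmcra = mfspr(SPRN_MMCRA);
          u64 exp = MMCRA_THR_CTR_EXP(mmcra);
          u64 mantissa = MMCRA_THR_CTR_MANT(mmcra);

          *weight = mantissa << (2 * exp);
  }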

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years agopowerpc/perf: Export memory hierarchy info to user space
Madhavan Srinivasan [Tue, 11 Apr 2017 01:51:06 +0000 (07:21 +0530)]
powerpc/perf: Export memory hierarchy info to user space

The LDST and DATA_SRC fields in SIER identify the memory hierarchy level
(eg: L1, L2 etc) from which a data-cache miss for a marked instruction
was satisfied. Use the 'perf_mem_data_src' object to export this
hierarchy level to user space.
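
The translation into the generic structure can be sketched as (the SIER
shift/mask names and level indices are illustrative):

  static void get_mem_data_src(union perf_mem_data_src *dsrc)
  {
          u64 sier = mfspr(SPRN_SIER);
          u64 idx = (sier >> ISA207_SIER_DATA_SRC_SHIFT) &
                          ISA207_SIER_DATA_SRC_MASK;

          /* Map the hardware level index onto the generic PERF_MEM levels */
          switch (idx) {
          case 0:
                  dsrc->val = PERF_MEM_S(OP, LOAD) | PERF_MEM_S(LVL, HIT) |
                              PERF_MEM_S(LVL, L1);
                  break;
          case 1:
                  dsrc->val = PERF_MEM_S(OP, LOAD) | PERF_MEM_S(LVL, HIT) |
                              PERF_MEM_S(LVL, L2);
                  break;
          default:
                  dsrc->val = PERF_MEM_NA;
                  break;
          }
  }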

Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years agopowerpc/perf: Define big-endian version of perf_mem_data_src
Sukadev Bhattiprolu [Tue, 11 Apr 2017 01:51:05 +0000 (07:21 +0530)]
powerpc/perf: Define big-endian version of perf_mem_data_src

perf_mem_data_src is a union that is initialized in the kernel via the ->val
field and accessed by userspace via the mem_xxx bitfields. For this to work
correctly on big endian platforms, we need a big-endian definition for the
bitfields.

Currently on a big endian system, if a user requests PERF_SAMPLE_DATA_SRC (perf
report -d), they will get the default value from perf_sample_data_init(), which
is PERF_MEM_NA. The value for PERF_MEM_NA is constructed using shifts:

  /* TLB access */
  #define PERF_MEM_TLB_NA 0x01 /* not available */
  ...
  #define PERF_MEM_TLB_SHIFT 26

  #define PERF_MEM_S(a, s) \
(((__u64)PERF_MEM_##a##_##s) << PERF_MEM_##a##_SHIFT)

  #define PERF_MEM_NA (PERF_MEM_S(OP, NA)   |\
    PERF_MEM_S(LVL, NA)   |\
    PERF_MEM_S(SNOOP, NA) |\
    PERF_MEM_S(LOCK, NA)  |\
    PERF_MEM_S(TLB, NA))

Which works out as:

  ((0x01 << 0) | (0x01 << 5) | (0x01 << 19) | (0x01 << 24) | (0x01 << 26))

Which means the PERF_MEM_NA value comes out of the kernel as 0x5080021
in CPU endian.

But then in the perf tool, the code uses the bitfields to inspect the value, and
currently the bitfields are defined using little endian ordering.

So eg. in perf_mem__tlb_scnprintf() we see:
  data_src->val = 0x5080021
             op = 0x0
            lvl = 0x0
          snoop = 0x0
           lock = 0x0
           dtlb = 0x0
           rsvd = 0x5080021

Because of the way the perf tool code is written this is still displayed to the
user as "N/A", so there is no bug visible at the UI level.

Currently there are no big endian architectures which export a meaningful
value (ie. other than PERF_MEM_NA), so the extent of the bug on big endian
platforms is that the PERF_MEM_NA value is exported incorrectly as described
above. Subsequent patches will add support on big endian powerpc for populating
the data source value.

This patch does a minimal fix of adding a big endian definition of the
bitfields to match the values that are already exported by the kernel on big
endian. It makes no change on little endian.
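
The big endian definition mirrors the little endian one with the fields in
reverse order, roughly:

  #if defined(__BIG_ENDIAN_BITFIELD)
  union perf_mem_data_src {
          __u64 val;
          struct {
                  __u64   mem_rsvd:31,
                          mem_dtlb:7,     /* TLB access */
                          mem_lock:2,     /* lock instr */
                          mem_snoop:5,    /* snoop mode */
                          mem_lvl:14,     /* memory hierarchy level */
                          mem_op:5;       /* type of opcode */
          };
  };
  #endif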

Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years agopowerpc/iommu: Do not call PageTransHuge() on tail pages
Alexey Kardashevskiy [Tue, 11 Apr 2017 07:54:57 +0000 (17:54 +1000)]
powerpc/iommu: Do not call PageTransHuge() on tail pages

The CMA pages migration code does not support compound pages at the moment,
so it performs a few tests before proceeding to the actual page migration.

One of the tests - PageTransHuge() - has VM_BUG_ON_PAGE(PageTail()) in it,
as it is designed to be called on head pages only. Since we also test for
PageCompound(), which is true for both head and tail pages, we can simplify
the check to just PageCompound() and thereby avoid the possible
VM_BUG_ON_PAGE.
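
The simplification in the CMA migration pre-checks, roughly (surrounding
code illustrative):

  /* Before (sketch): PageTransHuge() can hit VM_BUG_ON_PAGE(PageTail()) */
  if (PageHuge(page) || PageTransHuge(page) || PageCompound(page))
          return NULL;

  /* After: PageCompound() is true for head and tail pages alike */
  if (PageCompound(page))
          return NULL;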

Fixes: 2e5bbb5461f1 ("KVM: PPC: Book3S HV: Migrate pinned pages out of CMA")
Cc: stable@vger.kernel.org # v4.9+
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Acked-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years agopowerpc/mmap: Any hint > 128TB searches the full VA space
Aneesh Kumar K.V [Tue, 18 Apr 2017 07:01:27 +0000 (12:31 +0530)]
powerpc/mmap: Any hint > 128TB searches the full VA space

As part of the new large address space support, processes start out life with a
128TB virtual address space. However when calling mmap() a process can pass a
hint address, and if that hint is > 128TB the kernel will use the full 512TB
address space to try and satisfy the mmap() request.

Currently we have a check that the hint is > 128TB and < 512TB (TASK_SIZE),
which was added as an optimisation to avoid updating addr_limit unnecessarily
and also to avoid calling slice_flush_segments() on all CPUs more than
necessary.

However this has the user-visible side effect that an mmap() hint above 512TB
does not search the full address space unless a preceding mmap() used a hint
value > 128TB && < 512TB.

So fix it to treat any hint above 128TB as a hint to search the full address
space: instead of checking the hint against TASK_SIZE, we check whether the
addr_limit is already == TASK_SIZE.

This also brings the ABI in-line with what is proposed on x86. ie, that a hint
address above 128TB up to and including (2^64)-1 is an indication to search the
full address space.
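
The resulting check can be sketched as (constant names illustrative):

  /* Sketch: any hint above the default 128TB window expands the search
   * space, unless addr_limit has already been raised to TASK_SIZE. */
  if (addr > DEFAULT_MAP_WINDOW_USER64 &&
      mm->context.addr_limit != TASK_SIZE) {
          mm->context.addr_limit = TASK_SIZE;
          on_each_cpu(slice_flush_segments, mm, 1);
  }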

Fixes: f4ea6dcb08ea2c ("powerpc/mm: Enable mappings above 128TB")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years agocxl: Enable PCI device IDs for future IBM CXL adapters
Matthew R. Ochs [Fri, 24 Mar 2017 16:03:19 +0000 (11:03 -0500)]
cxl: Enable PCI device IDs for future IBM CXL adapters

Add support for future IBM Coherent Accelerator (CXL) devices
with IDs 0x0623 and 0x0628.

Signed-off-by: Matthew R. Ochs <mrochs@linux.vnet.ibm.com>
Signed-off-by: Uma Krishnan <ukrishn@linux.vnet.ibm.com>
Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years agopowerpc/64s: Minor fix for MCE TLB flush for radix
Nicholas Piggin [Sun, 16 Apr 2017 14:21:19 +0000 (00:21 +1000)]
powerpc/64s: Minor fix for MCE TLB flush for radix

The TLB flush for radix first flushes the TLB for the radix configuration,
then flushes it for the hash configuration. The second flush is unnecessary
but does not affect correctness.

Fixes: 1a472c9dba6b9 ("powerpc/mm/radix: Add tlbflush routines")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years agopowerpc/mm/radix: Use mm->task_size for boundary checking instead of addr_limit
Aneesh Kumar K.V [Thu, 13 Apr 2017 19:18:21 +0000 (00:48 +0530)]
powerpc/mm/radix: Use mm->task_size for boundary checking instead of addr_limit

We don't initialise addr_limit correctly for 32-bit applications, so default
to using mm->task_size for boundary condition checking, and use addr_limit
only to control the free space search. This makes sure that we do the right
thing for 32-bit applications.

We should consolidate the usage of TASK_SIZE/mm->task_size and
mm->context.addr_limit later.

This partially reverts commit fbfef9027c2a7ad ("powerpc/mm: Switch some
TASK_SIZE checks to use mm_context addr_limit").

Fixes: fbfef9027c2a ("powerpc/mm: Switch some TASK_SIZE checks to use mm_context addr_limit")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years agopowerpc/64s: Revert setting of LPCR[LPES] on POWER9
Nicholas Piggin [Tue, 18 Apr 2017 19:12:16 +0000 (05:12 +1000)]
powerpc/64s: Revert setting of LPCR[LPES] on POWER9

The XIVE enablement patches included a change to set the LPES (Logical
Partitioning Environment Selector) bit (bit # 3) in LPCR (Logical Partitioning
Control Register) on POWER9 hosts. This bit sets external interrupts to guest
delivery mode, which uses SRR0/1. The host's EE interrupt handler is written to
expect HSRR0/1 (for earlier CPUs). This should be fine because XIVE is
configured not to deliver EEs to the host (Hypervisor Virtualization Interrupt is
used instead) so the EE handler should never be executed.

However a bug in interrupt controller code, hardware, or odd configuration of a
simulator could result in the host getting an EE incorrectly. Keeping the EE
delivery mode matching the host EE handler prevents strange crashes due to using
the wrong exception registers.

KVM will configure the LPCR to set LPES prior to running a guest so that EEs are
delivered to the guest using SRR0/1.

Fixes: 08a1e650cc ("powerpc: Fixup LPCR:PECE and HEIC setting on POWER9")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Massage change log to avoid referring to LPES0 which is now renamed LPES]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years agopowerpc/pseries: Always enable SMP when building pseries
Michael Ellerman [Wed, 5 Apr 2017 02:44:51 +0000 (12:44 +1000)]
powerpc/pseries: Always enable SMP when building pseries

The pseries platform supports Power4 and later CPUs, all of which are
multithreaded and/or multicore.

In practice no one ever builds an SMP=n kernel for these machines. So as
we did for powernv, have the pseries platform imply SMP=y.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years agopowerpc/powernv: Always enable SMP when building powernv
Michael Ellerman [Wed, 5 Apr 2017 02:44:50 +0000 (12:44 +1000)]
powerpc/powernv: Always enable SMP when building powernv

The powernv platform supports Power7 and later CPUs, all of which are
multithreaded and multicore.

As such we never build an SMP=n kernel for those machines, other than
possibly for debugging or running in a simulator.

In the debugging case we can get a similar effect by booting with
nr_cpus=1, or there's always the option of building a custom kernel with
SMP hacked out.

For running in simulators the code size reduction from building without
SMP is not particularly important, what matters is the number of
instructions executed. A quick test shows that an SMP=y kernel takes ~6%
more instructions to boot to a shell. Booting with nr_cpus=1 recovers
about half that deficit.

On the flip side, keeping the SMP=n kernel building can be a pain at
times. And although we've mostly kept it building in recent years, no
one is regularly testing that the SMP=n kernel actually boots and works
well on these machines.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years agopowerpc: Allow platforms to force-enable CONFIG_SMP
Michael Ellerman [Wed, 5 Apr 2017 02:44:49 +0000 (12:44 +1000)]
powerpc: Allow platforms to force-enable CONFIG_SMP

Of the 64-bit Book3S platforms, only powermac supports booting on an
actual non-SMP system. The other platforms can be built with SMP
disabled, but it doesn't make a lot of sense given the CPUs they support
are all multicore or multithreaded.

So give platforms the option of forcing SMP=y.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years agopowerpc: Drop include of linux/io.h from asm/io.h
Michael Ellerman [Thu, 13 Apr 2017 13:14:36 +0000 (23:14 +1000)]
powerpc: Drop include of linux/io.h from asm/io.h

Currently powerpc's asm/io.h includes linux/io.h, and linux/io.h
includes asm/io.h.

This can cause problems because, depending on which is included first, the
order of definitions between the two files will change.

The include of linux/io.h was added back in 2008 in commit b41e5fffe8b8
("[POWERPC] devres: Add devm_ioremap_prot()"). It's not entirely clear
it was needed then, but devm_ioremap_prot() has since been removed
entirely as unused, in dedd24a12fe6 ("powerpc: Remove unused
devm_ioremap_prot()").

It seems to be unnecessary and can potentially cause problems, so remove
the include of linux/io.h from asm/io.h.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years agopowerpc/powernv: POWER9 support for msgsnd/doorbell IPI
Nicholas Piggin [Thu, 13 Apr 2017 10:16:24 +0000 (20:16 +1000)]
powerpc/powernv: POWER9 support for msgsnd/doorbell IPI

POWER9 requires msgsync for receiver-side synchronization, and a DD1
workaround restricts IPIs to core-local.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Drop no longer needed asm feature macro changes]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years agopowerpc/64s: Avoid a branch for ppc_msgsnd
Nicholas Piggin [Thu, 13 Apr 2017 10:16:23 +0000 (20:16 +1000)]
powerpc/64s: Avoid a branch for ppc_msgsnd

IPIs are a pretty hot path and we already have the ability to do asm feature
patching, so use it.
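
The send primitive can then select the right instruction without a runtime
branch, sketched as:

  /* Sketch: patch in msgsnd when running in hypervisor mode and msgsndp
   * otherwise, instead of testing the feature bit at runtime. */
  static inline void _ppc_msgsnd(u32 msg)
  {
          __asm__ __volatile__ (ASM_FTR_IFSET(PPC_MSGSND(%1), PPC_MSGSNDP(%1), %0)
                          : : "i" (CPU_FTR_HVMODE), "r" (msg));
  }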

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Change log detail]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years agopowerpc: Introduce msgsnd/doorbell barrier primitives
Nicholas Piggin [Thu, 13 Apr 2017 10:16:22 +0000 (20:16 +1000)]
powerpc: Introduce msgsnd/doorbell barrier primitives

POWER9 changes requirements and adds new instructions for
synchronization.
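
For example, a receiver-side barrier primitive might be added along these
lines (sketch):

  /* Sketch: on POWER9 in HV mode, msgsync plus lwsync is required on
   * the receiving side before consuming the sender's stores. */
  static inline void ppc_msgsync(void)
  {
          __asm__ __volatile__ (ASM_FTR_IFSET(PPC_MSGSYNC " ; lwsync", "", %0)
                          : : "i" (CPU_FTR_HVMODE|CPU_FTR_ARCH_300));
  }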

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years agopowerpc: Change the doorbell IPI calling convention
Nicholas Piggin [Thu, 13 Apr 2017 10:16:21 +0000 (20:16 +1000)]
powerpc: Change the doorbell IPI calling convention

Change the doorbell callers to know about their msgsnd addressing, rather
than have them set a per-cpu target data tag at boot that gets sent to the
cause_ipi functions. The data is only used by the doorbell IPI functions and
no other IPI types, so it makes sense to keep that detail local to the
doorbell code.

Have the platform code understand doorbell IPIs, rather than the
interrupt controller code understand them. Platform code can look at
capabilities it has available and decide which to use.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years agopowerpc/64s: Add SCV FSCR bit for ISA v3.0
Nicholas Piggin [Fri, 7 Apr 2017 01:27:44 +0000 (11:27 +1000)]
powerpc/64s: Add SCV FSCR bit for ISA v3.0

Add the bit definition and use it in facility_unavailable_exception() so we can
intelligently report the cause if we take a fault for SCV. This doesn't actually
enable SCV.
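
The addition can be sketched as (bit position per ISA v3.0; the string table
entry is abbreviated):

  #define FSCR_SCV_LG     12      /* Enable System Call Vectored */
  #define FSCR_SCV        __MASK(FSCR_SCV_LG)

  static char *facility_strings[] = {
          /* ... */
          [FSCR_SCV_LG] = "SCV",
          /* ... */
  };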

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Drop whitespace changes to the existing entries, flush out change log]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years agopowerpc/64s: Add msgp facility unavailable log string
Nicholas Piggin [Fri, 7 Apr 2017 01:27:43 +0000 (11:27 +1000)]
powerpc/64s: Add msgp facility unavailable log string

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years agopowerpc/mm/hash: Don't open code VMALLOC_INDEX
Aneesh Kumar K.V [Wed, 12 Apr 2017 04:40:22 +0000 (10:10 +0530)]
powerpc/mm/hash: Don't open code VMALLOC_INDEX

We have a #define for it, so use it.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years agocxl: Add psl9 specific code
Christophe Lombard [Wed, 12 Apr 2017 14:34:07 +0000 (16:34 +0200)]
cxl: Add psl9 specific code

The new Coherent Accelerator Interface Architecture, level 2, for the
IBM POWER9 brings new content and features:
- POWER9 Service Layer
- Registers
- Radix mode
- Process element entry
- Dedicated-Shared Process Programming Model
- Translation Fault Handling
- CAPP
- Memory Context ID
    If a valid mm_struct is found the memory context id is used for each
    transaction associated with the process handle. The PSL uses the
    context ID to find the corresponding process element.

Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
[mpe: Fixup comment formatting, unsplit long strings]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years agocxl: Isolate few psl8 specific calls
Christophe Lombard [Fri, 7 Apr 2017 14:11:58 +0000 (16:11 +0200)]
cxl: Isolate few psl8 specific calls

Point out the specific Coherent Accelerator Interface Architecture,
level 1 (CAIA1), registers.
Code and functions specific to PSL8 (CAIA1) need to be clearly delimited.

Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
[mpe: Don't split long strings, it makes them hard to grep for]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>