platform/kernel/linux-starfive.git
6 years agopowerpc/wii: Explicitly configure GPIO owner for poweroff pin
Jonathan Neuschäfer [Fri, 9 Feb 2018 12:07:28 +0000 (13:07 +0100)]
powerpc/wii: Explicitly configure GPIO owner for poweroff pin

The Hollywood chipset's GPIO controller has two sets of registers: one
for access by the PowerPC CPU, and one for access by the ARM coprocessor
(but both are accessible from the PPC, because the memory firewall
(AHBPROT) is usually disabled when booting Linux today).

The wii_power_off function currently assumes that the poweroff GPIO pin
is configured for use via the ARM side, but the upcoming GPIO driver
configures all pins for use via the PPC side, breaking poweroff.

Configure the owner register explicitly in wii_power_off to make
wii_power_off work with and without the new GPIO driver.

I think the Wii can be switched to the generic gpio-poweroff driver,
after the GPIO driver is merged.
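
A sketch of the explicit owner configuration (clrbits32() is the
standard powerpc MMIO helper; the register/bit names and the "cleared
bit means ARM-owned" polarity are assumptions based on the description
above, not taken from this log):

  /*
   * The poweroff pin is driven through the ARM-side register set,
   * so explicitly assign the pin to the ARM side here rather than
   * relying on whatever the boot-time state happens to be.
   */
  clrbits32(hw_gpio + HW_GPIO_OWNER, HW_GPIO_SHUTDOWN);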

Signed-off-by: Jonathan Neuschäfer <j.neuschaefer@gmx.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/wii: Probe the whole devicetree
Jonathan Neuschäfer [Tue, 6 Feb 2018 12:37:04 +0000 (13:37 +0100)]
powerpc/wii: Probe the whole devicetree

Previously, wii_device_probe would only initialize devices under the
/hollywood node. After this patch, platform devices placed outside of
/hollywood will also be initialized.

The intended usecase for this are devices located outside of the
Hollywood chip, such as GPIO LEDs and GPIO buttons.

Signed-off-by: Jonathan Neuschäfer <j.neuschaefer@gmx.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64e: Fix oops due to deferral of paca allocation
Michael Ellerman [Sat, 31 Mar 2018 09:57:10 +0000 (20:57 +1100)]
powerpc/64e: Fix oops due to deferral of paca allocation

On 64-bit Book3E systems, in setup_tlb_core_data() we reference other
CPUs' pacas. But in commit 59f577743d71 ("powerpc/64: Defer paca
allocation until memory topology is discovered") the allocation of
non-boot-CPU pacas was deferred until later in boot.

This leads to an oops:

  CPU maps initialized for 1 thread per core
  Unable to handle kernel paging request for data at address 0x8888888888888918
  Faulting instruction address: 0xc000000000e2f0d0
  Oops: Kernel access of bad area, sig: 11 [#1]
  NIP .setup_tlb_core_data+0xdc/0x160
  Call Trace:
    .setup_tlb_core_data+0x5c/0x160 (unreliable)
    .setup_arch+0x80/0x348
    .start_kernel+0x7c/0x598
    start_here_common+0x1c/0x40

Luckily setup_tlb_core_data() is called immediately prior to
smp_setup_pacas(). So simply switching their order is sufficient to
fix the oops and seems unlikely to have any other unwanted side
effects.
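
A sketch of the reordering in setup_arch() (only the two calls named
above are shown; the surrounding code is elided):

  /* Allocate the non-boot-CPU pacas first ... */
  smp_setup_pacas();
  /* ... so that this can safely dereference other CPUs' pacas. */
  setup_tlb_core_data();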

Fixes: 59f577743d71 ("powerpc/64: Defer paca allocation until memory topology is discovered")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/kvm: Fix guest boot failure on Power9 since DAWR changes
Aneesh Kumar K.V [Fri, 30 Mar 2018 11:57:24 +0000 (17:27 +0530)]
powerpc/kvm: Fix guest boot failure on Power9 since DAWR changes

SLOF checks for 'sc 1' (hypercall) support by issuing a hcall with
H_SET_DABR. Since the recent commit e8ebedbf3131 ("KVM: PPC: Book3S
HV: Return error from h_set_dabr() on POWER9") changed H_SET_DABR to
return H_UNSUPPORTED on Power9, we see guest boot failures; the
symptom is that the boot just stops in SLOF, eg:

  SLOF ***************************************************************
  QEMU Starting
   Build Date = Sep 24 2017 12:23:07
   FW Version = buildd@ release 20170724
  <no further output>

SLOF can cope if H_SET_DABR returns H_HARDWARE. So switch the return
value to H_HARDWARE instead of H_UNSUPPORTED so that we don't break
the guest boot.

That does mean we return a different error to PowerVM in this case,
but that's probably not a big concern.
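
The change, sketched in C for clarity (the in-kernel hcall handler
lives in KVM's real-mode code; the guard shown is a hypothetical
predicate, not the actual test):

  if (dawr_disabled)              /* hypothetical predicate */
          return H_HARDWARE;      /* was H_UNSUPPORTED; SLOF copes */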

Fixes: e8ebedbf3131 ("KVM: PPC: Book3S HV: Return error from h_set_dabr() on POWER9")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agoMerge branch 'topic/paca' into next
Michael Ellerman [Fri, 30 Mar 2018 13:11:24 +0000 (00:11 +1100)]
Merge branch 'topic/paca' into next

Bring in yet another series that touches KVM code, and might need to
be merged into the kvm-ppc branch to resolve conflicts.

This required some changes in pnv_power9_force_smt4_catch/release()
due to the paca array becoming an array of pointers.

6 years agopowerpc/mm/hash: Don't memset pgd table if not needed
Aneesh Kumar K.V [Mon, 26 Mar 2018 10:04:50 +0000 (15:34 +0530)]
powerpc/mm/hash: Don't memset pgd table if not needed

We need to zero out the pgd table only if we share the slab cache with
the pud/pmd level caches. With the support for 4PB, we don't share the
slab cache anymore. Instead of removing the code completely, hide it
within an #ifdef. We don't need to do this at any other page table
level, because they all allocate a table of double the size and we take
care of initializing the first half correctly during page table zap.
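
A condensed sketch of the resulting shape (the exact Kconfig symbols
guarding the memset are assumptions; the point is only that the
zeroing becomes conditional on sharing the slab cache):

  pgd = kmem_cache_alloc(PGT_CACHE(PGD_INDEX_SIZE), GFP_KERNEL);
  #if defined(CONFIG_PPC_64K_PAGES) && defined(CONFIG_HUGETLB_PAGE)
          /* pgd may share a slab cache with pud/pmd: zero it all */
          memset(pgd, 0, PGD_TABLE_SIZE);
  #endif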

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
[mpe: Consolidate multiple #if / #ifdef into one]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/mm/hash64: Increase the VA range
Aneesh Kumar K.V [Mon, 26 Mar 2018 10:04:49 +0000 (15:34 +0530)]
powerpc/mm/hash64: Increase the VA range

This patch increases the max virtual (effective) address value to 4PB.
With the 4K page size config we continue to limit ourselves to 64TB.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
[mpe: Keep the H_PGTABLE_RANGE test, update it to work]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/mm: Add support for handling > 512TB address in SLB miss
Aneesh Kumar K.V [Mon, 26 Mar 2018 10:04:48 +0000 (15:34 +0530)]
powerpc/mm: Add support for handling > 512TB address in SLB miss

For addresses above 512TB we allocate additional mmu contexts. To make
it all easy, addresses above 512TB are handled with IR/DR=1 and with
stack frame setup.

The mmu_context_t is also updated to track the new extended_ids. To
support up to 4PB we need a total of 8 contexts.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
[mpe: Minor formatting tweaks and comment wording, switch BUG to WARN
      in get_ea_context().]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/mm/slice: Consolidate return path in slice_get_unmapped_area()
Aneesh Kumar K.V [Mon, 26 Mar 2018 10:04:47 +0000 (15:34 +0530)]
powerpc/mm/slice: Consolidate return path in slice_get_unmapped_area()

In a following patch, on finding a free area we will need to allocate
extra contexts as needed. Consolidating the return path
for slice_get_unmapped_area() will make that easier.

Split into a separate patch to make review easy.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/mm/keys: Move pte bits to correct headers
Aneesh Kumar K.V [Wed, 7 Mar 2018 13:36:44 +0000 (19:06 +0530)]
powerpc/mm/keys: Move pte bits to correct headers

Memory keys are supported only in hash translation mode. Instead of
using #ifdef in generic code, move the key-related PTE bits to the
respective headers.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/xive: Fix wrong xmon output caused by typo
Frederic Barrat [Wed, 14 Mar 2018 17:01:14 +0000 (18:01 +0100)]
powerpc/xive: Fix wrong xmon output caused by typo

Signed-off-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agodrivers: macintosh: rack-meter: really fix bogus memsets
Aaro Koskinen [Fri, 16 Mar 2018 20:17:28 +0000 (22:17 +0200)]
drivers: macintosh: rack-meter: really fix bogus memsets

We should zero an array using sizeof instead of number of elements.

Fixes the following compiler (GCC 7.3.0) warnings:

drivers/macintosh/rack-meter.c: In function 'rackmeter_do_pause':
drivers/macintosh/rack-meter.c:157:2: warning: 'memset' used with length equal to number of elements without multiplication by element size [-Wmemset-elt-size]
drivers/macintosh/rack-meter.c:158:2: warning: 'memset' used with length equal to number of elements without multiplication by element size [-Wmemset-elt-size]
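
A self-contained illustration of the bug class (names are illustrative,
not the driver's):

  #include <string.h>

  struct sample { long cpu[4]; };
  static struct sample totals[2];

  void reset_totals(void)
  {
          /* Wrong: clears only 2 bytes (the element count). */
          /* memset(totals, 0, 2); */

          /* Right: clears the whole array. */
          memset(totals, 0, sizeof(totals));
  }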

Fixes: 4f7bef7a9f69 ("drivers: macintosh: rack-meter: fix bogus memsets")
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Aaro Koskinen <aaro.koskinen@iki.fi>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64: Fix smp_wmb barrier definition to use lwsync consistently
Nicholas Piggin [Thu, 22 Mar 2018 10:41:46 +0000 (20:41 +1000)]
powerpc/64: Fix smp_wmb barrier definition to use lwsync consistently

asm/barrier.h is not always included after asm/synch.h, which meant
it was missing __SUBARCH_HAS_LWSYNC, so in some files smp_wmb() would
be eieio when it should be lwsync. kernel/time/hrtimer.c is one case.

__SUBARCH_HAS_LWSYNC is only used in one place, so just fold it in
where it's used. Previously, with my small simulator config, there
were 377 instances of eieio in the tree. After this patch there are 55.
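
A sketch of the fold in asm/barrier.h (simplified; the exact config
test for "sub-arch has lwsync" is an assumption):

  /* Previously this depended on __SUBARCH_HAS_LWSYNC from asm/synch.h,
   * which was not always included first. */
  #if defined(CONFIG_PPC64) || defined(CONFIG_PPC_E500MC)
  #    define SMPWMB      LWSYNC
  #else
  #    define SMPWMB      eieio
  #endif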

Fixes: 46d075be585e ("powerpc: Optimise smp_wmb")
Cc: stable@vger.kernel.org # v2.6.29+
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/4xx: Fix error return code in ppc4xx_msi_probe()
Wei Yongjun [Mon, 26 Mar 2018 14:43:09 +0000 (14:43 +0000)]
powerpc/4xx: Fix error return code in ppc4xx_msi_probe()

Fix to return a negative error code from the error handling
case instead of 0, as done elsewhere in this function.
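
The general pattern of this class of fix (variable names are generic,
not necessarily the driver's):

  msi->msi_regs = ioremap(res.start, resource_size(&res));
  if (!msi->msi_regs) {
          dev_err(&dev->dev, "ioremap failed\n");
          err = -ENOMEM;          /* previously fell through with err == 0 */
          goto error_out;
  }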

Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
[mpe: Add missing ';' to make it compile]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/mm: Fix thread_pkey_regs_init()
Ram Pai [Tue, 27 Mar 2018 02:36:54 +0000 (19:36 -0700)]
powerpc/mm: Fix thread_pkey_regs_init()

thread_pkey_regs_init() initializes the pkey-related registers
instead of initializing the fields in the task structures. Fortunately
those key-related registers are reset to zero when the task
gets scheduled on the cpu. However it's good to fix this glaringly
visible error.
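
A sketch of the corrected initializer (the field and mask names are
assumptions based on the description above):

  void thread_pkey_regs_init(struct thread_struct *thread)
  {
          /* Initialize the task's saved state, not the live SPRs. */
          thread->amr   = pkey_amr_mask;
          thread->iamr  = pkey_iamr_mask;
          thread->uamor = pkey_uamor_mask;
  }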

Fixes: 06bb53b33804 ("powerpc: store and restore the pkey state across context switches")
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com>
Acked-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/kprobes: Fix call trace due to incorrect preempt count
Naveen N. Rao [Wed, 17 Jan 2018 12:22:24 +0000 (17:52 +0530)]
powerpc/kprobes: Fix call trace due to incorrect preempt count

Michael Ellerman reported the following call trace when running
ftracetest:

  BUG: using __this_cpu_write() in preemptible [00000000] code: ftracetest/6178
  caller is opt_pre_handler+0xc4/0x110
  CPU: 1 PID: 6178 Comm: ftracetest Not tainted 4.15.0-rc7-gcc6x-gb2cd1df #1
  Call Trace:
  [c0000000f9ec39c0] [c000000000ac4304] dump_stack+0xb4/0x100 (unreliable)
  [c0000000f9ec3a00] [c00000000061159c] check_preemption_disabled+0x15c/0x170
  [c0000000f9ec3a90] [c000000000217e84] opt_pre_handler+0xc4/0x110
  [c0000000f9ec3af0] [c00000000004cf68] optimized_callback+0x148/0x170
  [c0000000f9ec3b40] [c00000000004d954] optinsn_slot+0xec/0x10000
  [c0000000f9ec3e30] [c00000000004bae0] kretprobe_trampoline+0x0/0x10

This is showing up since OPTPROBES is now enabled with CONFIG_PREEMPT.

trampoline_probe_handler() considers itself to be a special kprobe
handler for kretprobes. In doing so, it expects to be called from
kprobe_handler() on a trap, and re-enables preemption before returning a
non-zero return value so as to suppress any subsequent processing of the
trap by the kprobe_handler().

However, with optprobes, we don't deal with special handlers (we ignore
the return code) and just try to re-enable preemption, causing the
above trace.

To address this, modify trampoline_probe_handler() to not be special.
The only additional processing done in kprobe_handler() is to emulate
the instruction (in this case, a 'nop'). We adjust the value of
regs->nip for the purpose and delegate the job of re-enabling
preemption and resetting current kprobe to the probe handlers
(kprobe_handler() or optimized_callback()).

Fixes: 8a2d71a3f273 ("powerpc/kprobes: Disable preemption before invoking probe handler for optprobes")
Cc: stable@vger.kernel.org # v4.15+
Reported-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Acked-by: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agomacintosh/adb: Use C99 initializers for struct adb_driver instances
Finn Thain [Thu, 29 Mar 2018 00:36:04 +0000 (11:36 +1100)]
macintosh/adb: Use C99 initializers for struct adb_driver instances

No change to object files.
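
The shape of such a conversion, on an illustrative struct (not the
real adb_driver layout):

  struct example_driver {
          char name[16];
          int (*probe)(void);
          int (*init)(void);
  };

  /* Before: positional, breaks silently if members are reordered. */
  static struct example_driver drv = { "EXAMPLE", ex_probe, ex_init };

  /* After: C99 designated initializers. */
  static struct example_driver drv2 = {
          .name  = "EXAMPLE",
          .probe = ex_probe,
          .init  = ex_init,
  };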

Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Finn Thain <fthain@telegraphics.com.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/powernv: Handle unknown OPAL errors in opal_nvram_write()
Nicholas Piggin [Mon, 26 Mar 2018 15:02:33 +0000 (01:02 +1000)]
powerpc/powernv: Handle unknown OPAL errors in opal_nvram_write()

opal_nvram_write currently just assumes success if it encounters an
error other than OPAL_BUSY or OPAL_BUSY_EVENT. Have it return -EIO
on other errors instead.
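
A sketch of the resulting error handling (opal_write_nvram() and
opal_poll_events() are the existing OPAL calls; the loop is condensed):

  rc = opal_write_nvram(__pa(buf), count, off);
  while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) {
          if (rc == OPAL_BUSY_EVENT)
                  opal_poll_events(NULL);
          rc = opal_write_nvram(__pa(buf), count, off);
  }
  if (rc != OPAL_SUCCESS)
          return -EIO;            /* previously assumed success here */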

Fixes: 628daa8d5abf ("powerpc/powernv: Add RTC and NVRAM support plus RTAS fallbacks")
Cc: stable@vger.kernel.org # v3.2+
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
Acked-by: Stewart Smith <stewart@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/pseries: Fix clearing of security feature flags
Mauricio Faria de Oliveira [Thu, 29 Mar 2018 18:32:11 +0000 (15:32 -0300)]
powerpc/pseries: Fix clearing of security feature flags

The H_CPU_BEHAV_* flags should be checked for in the 'behaviour' field
of 'struct h_cpu_char_result' -- 'character' is for H_CPU_CHAR_*
flags.
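
The shape of the fix (the security_ftr_clear()/SEC_FTR_* names are
modelled on the "Set or clear security feature flags" patch below and
should be treated as assumptions):

  struct h_cpu_char_result result;

  /* H_CPU_CHAR_* flags live in .character ... */
  if (!(result.character & H_CPU_CHAR_L1D_THREAD_PRIV))
          security_ftr_clear(SEC_FTR_L1D_THREAD_PRIV);

  /* ... while H_CPU_BEHAV_* flags live in .behaviour. */
  if (!(result.behaviour & H_CPU_BEHAV_FAVOUR_SECURITY))
          security_ftr_clear(SEC_FTR_FAVOUR_SECURITY);
  if (!(result.behaviour & H_CPU_BEHAV_L1D_FLUSH_PR))
          security_ftr_clear(SEC_FTR_L1D_FLUSH_PR);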

Found by playing around with QEMU's implementation of the hypercall:

  H_CPU_CHAR=0xf000000000000000
  H_CPU_BEHAV=0x0000000000000000

  This clears H_CPU_BEHAV_FAVOUR_SECURITY and H_CPU_BEHAV_L1D_FLUSH_PR
  so pseries_setup_rfi_flush() disables 'rfi_flush'; and it also
  clears H_CPU_CHAR_L1D_THREAD_PRIV flag. So there is no RFI flush
  mitigation at all for cpu_show_meltdown() to report; but currently
  it does:

  Original kernel:

    # cat /sys/devices/system/cpu/vulnerabilities/meltdown
    Mitigation: RFI Flush

  Patched kernel:

    # cat /sys/devices/system/cpu/vulnerabilities/meltdown
    Not affected

  H_CPU_CHAR=0x0000000000000000
  H_CPU_BEHAV=0xf000000000000000

  This sets H_CPU_BEHAV_BNDS_CHK_SPEC_BAR so cpu_show_spectre_v1() should
  report vulnerable; but currently it doesn't:

  Original kernel:

    # cat /sys/devices/system/cpu/vulnerabilities/spectre_v1
    Not affected

  Patched kernel:

    # cat /sys/devices/system/cpu/vulnerabilities/spectre_v1
    Vulnerable

Brown-paper-bag-by: Michael Ellerman <mpe@ellerman.id.au>
Fixes: f636c14790ea ("powerpc/pseries: Set or clear security feature flags")
Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/mm: Pass node id into create_section_mapping
Nicholas Piggin [Tue, 13 Feb 2018 15:08:22 +0000 (01:08 +1000)]
powerpc/mm: Pass node id into create_section_mapping

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Move __map_kernel_page_nid() inside #ifdef SPARSEMEM_VMEMMAP]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64s/radix: Allocate kernel page tables node-local if possible
Nicholas Piggin [Tue, 13 Feb 2018 15:08:24 +0000 (01:08 +1000)]
powerpc/64s/radix: Allocate kernel page tables node-local if possible

Try to allocate kernel page tables for direct mapping and vmemmap
according to the node of the memory they will map. The node is not
available for the linear map in early boot, so use range allocation
to allocate the page tables from the region they map, which is
effectively node-local.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Fix build error in radix__create_section_mapping()]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64s/radix: Split early page table mapping to its own function
Nicholas Piggin [Tue, 13 Feb 2018 15:08:23 +0000 (01:08 +1000)]
powerpc/64s/radix: Split early page table mapping to its own function

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64: Allocate per-cpu stacks node-local if possible
Nicholas Piggin [Tue, 13 Feb 2018 15:08:21 +0000 (01:08 +1000)]
powerpc/64: Allocate per-cpu stacks node-local if possible

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64: Allocate pacas per node
Nicholas Piggin [Tue, 13 Feb 2018 15:08:20 +0000 (01:08 +1000)]
powerpc/64: Allocate pacas per node

Per-node allocations are possible on 64s with radix that does
not have the bolted SLB limitation.

Hash would be able to do the same if all CPUs had the bottom of
their node-local memory bolted as well. This is left as an
exercise for the reader.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Add dummy definition of boot_cpuid for !SMP]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64: Defer paca allocation until memory topology is discovered
Nicholas Piggin [Tue, 13 Feb 2018 15:08:19 +0000 (01:08 +1000)]
powerpc/64: Defer paca allocation until memory topology is discovered

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Rename the dummy allocate_pacas() to fix 32-bit build]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/setup: Add cpu_to_phys_id array
Nicholas Piggin [Tue, 13 Feb 2018 15:08:18 +0000 (01:08 +1000)]
powerpc/setup: Add cpu_to_phys_id array

Build an array that maps logical CPU number to hardware CPU number
during firmware CPU discovery. Use that rather than setting the paca
of other CPUs directly, to begin with. A subsequent patch will not
have pacas allocated at this point.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Fix SMP=n build by adding #ifdef in arch_match_cpu_phys_id()]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64: move default SPR recording
Nicholas Piggin [Tue, 13 Feb 2018 15:08:17 +0000 (01:08 +1000)]
powerpc/64: move default SPR recording

Move this into the early setup code, and don't iterate over CPU masks.
We don't want to call into sysfs so early from setup, and a future patch
won't initialize CPU masks by the time this is called.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Fold in incremental fix from Nick for DSCR handling]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/mm/numa: move numa topology discovery earlier
Nicholas Piggin [Tue, 13 Feb 2018 15:08:16 +0000 (01:08 +1000)]
powerpc/mm/numa: move numa topology discovery earlier

Split sparsemem initialisation from basic numa topology discovery.
Move the parsing earlier in boot, before pacas are allocated.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agomm: make memblock_alloc_base_nid() non-static
Nicholas Piggin [Tue, 13 Feb 2018 15:08:15 +0000 (01:08 +1000)]
mm: make memblock_alloc_base_nid() non-static

This will be used by powerpc to allocate per-cpu stacks and other
data structures node-local where possible.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Drop stray change to memblock_alloc_range() as noticed by akpm]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64s: Allocate slb_shadow structures individually
Nicholas Piggin [Tue, 13 Feb 2018 15:08:14 +0000 (01:08 +1000)]
powerpc/64s: Allocate slb_shadow structures individually

slb_shadow structures are not needed in a radix environment.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64s: Allocate LPPACAs individually
Nicholas Piggin [Tue, 13 Feb 2018 15:08:13 +0000 (01:08 +1000)]
powerpc/64s: Allocate LPPACAs individually

We no longer allocate lppacas in an array, so this patch removes the
1kB static alignment for the structure, and enforces the PAPR
alignment requirements at allocation time. We cannot reduce the 1kB
allocation size, however, due to existing KVM hypervisors.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64: Use array of paca pointers and allocate pacas individually
Nicholas Piggin [Tue, 13 Feb 2018 15:08:12 +0000 (01:08 +1000)]
powerpc/64: Use array of paca pointers and allocate pacas individually

Change the paca array into an array of pointers to pacas. Allocate
pacas individually.

This allows flexibility in where the PACAs are allocated. Future work
will allocate them node-local. Platforms that don't have address limits
on PACAs would be able to defer PACA allocations until later in boot
rather than allocating all possible ones up front and then freeing the unused ones.

This is slightly more overhead (one additional indirection) for
cross-CPU paca references, but those aren't too common.
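
The shape of the change (the pointer-array name is an assumption; the
example reference is illustrative):

  /* Before: a static array of pacas sized for all possible CPUs. */
  /* extern struct paca_struct paca[]; */

  /* After: an array of pointers, each paca allocated individually. */
  extern struct paca_struct **paca_ptrs;

  /* Cross-CPU references gain one indirection, e.g.: */
  struct paca_struct *other = paca_ptrs[cpu];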

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64s: Do not allocate lppaca if we are not virtualized
Nicholas Piggin [Tue, 13 Feb 2018 15:08:11 +0000 (01:08 +1000)]
powerpc/64s: Do not allocate lppaca if we are not virtualized

The "lppaca" is a structure registered with the hypervisor. This is
unnecessary when running on non-virtualised platforms. One field from
the lppaca (pmcregs_in_use) is also used by the host, so move the host
part out into the paca (lppaca field is still updated in
guest mode).

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Fix non-pseries build with some #ifdefs]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/mpic: Check if cpu_possible() in mpic_physmask()
Michael Ellerman [Fri, 30 Mar 2018 12:27:25 +0000 (23:27 +1100)]
powerpc/mpic: Check if cpu_possible() in mpic_physmask()

In mpic_physmask() we loop over all CPUs up to 32, then get the hard
SMP processor id of that CPU.

Currently that's possibly walking off the end of the paca array, but
in a future patch we will change the paca array to be an array of
pointers, and in that case we will get a NULL for missing CPUs and
oops. eg:

  Unable to handle kernel paging request for data at address 0x88888888888888b8
  Faulting instruction address: 0xc00000000004e380
  Oops: Kernel access of bad area, sig: 11 [#1]
  ...
  NIP .mpic_set_affinity+0x60/0x1a0
  LR  .irq_do_set_affinity+0x48/0x100

Fix it by checking that the CPU is possible. This also fixes the code
if there are gaps in the CPU numbering, which probably never happens
on mpic systems, but who knows.
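
A sketch of the fixed loop (reconstructed from the description; treat
the details as an approximation of the mpic code):

  static inline u32 mpic_physmask(u32 cpumask)
  {
          int i;
          u32 mask = 0;

          for (i = 0; i < min(32, NR_CPUS) && cpu_possible(i); ++i, cpumask >>= 1)
                  mask |= (cpumask & 1) << get_hard_smp_processor_id(i);
          return mask;
  }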

Debugged-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agoMerge branch 'fixes' into next
Michael Ellerman [Wed, 28 Mar 2018 11:59:50 +0000 (22:59 +1100)]
Merge branch 'fixes' into next

Merge our fixes branch from the 4.16 cycle.

There were a number of important fixes merged, in particular some Power9
workarounds that we want in next for testing purposes. There's also been
some conflicting changes in the CPU features code which are best merged
and tested before going upstream.

6 years agoMerge branch 'topic/ppc-kvm' into next
Michael Ellerman [Tue, 27 Mar 2018 12:55:49 +0000 (23:55 +1100)]
Merge branch 'topic/ppc-kvm' into next

Merge the DAWR series, which touches arch code and KVM code and may need
to be merged into the kvm-ppc tree.

6 years agopowerpc: Disable DAWR in the base POWER9 CPU features
Michael Neuling [Tue, 27 Mar 2018 04:37:24 +0000 (15:37 +1100)]
powerpc: Disable DAWR in the base POWER9 CPU features

Using the DAWR on POWER9 can cause xstops, hence we need to disable
it.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: Disable DAWR on POWER9 via CPU feature quirk
Michael Neuling [Tue, 27 Mar 2018 04:37:23 +0000 (15:37 +1100)]
powerpc: Disable DAWR on POWER9 via CPU feature quirk

This disables the DAWR on all POWER9 CPUs via a CPU feature quirk.

Using the DAWR on POWER9 can cause xstops, hence we need to disable
it.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agoKVM: PPC: Book3S HV: Handle migration with POWER9 disabled DAWR
Michael Neuling [Tue, 27 Mar 2018 04:37:22 +0000 (15:37 +1100)]
KVM: PPC: Book3S HV: Handle migration with POWER9 disabled DAWR

POWER9 with the DAWR disabled causes problems for partition
migration. Either we have to fail the migration (since we lose the
DAWR) or we silently drop the DAWR and allow the migration to pass.

This patch does the latter and allows the migration to pass (at the
cost of silently losing the DAWR). This is not ideal but hopefully the
best overall solution. This approach has been acked by Paulus.

With this patch kvmppc_set_one_reg() will store the DAWR in the vcpu
but won't actually set it on POWER9 hardware.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agoKVM: PPC: Book3S HV: Return error from h_set_dabr() on POWER9
Michael Neuling [Tue, 27 Mar 2018 04:37:21 +0000 (15:37 +1100)]
KVM: PPC: Book3S HV: Return error from h_set_dabr() on POWER9

POWER7 compat mode guests can use h_set_dabr() on POWER9. POWER9
should use the DAWR, but since it's disabled there we can't.

This returns H_UNSUPPORTED on a h_set_dabr() on POWER9 where the DAWR
is disabled.

Current Linux guests ignore this error, so they will silently not get
the DAWR (sigh). The same error code is being used by POWERVM in this
case.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agoKVM: PPC: Book3S HV: Return error from h_set_mode(SET_DAWR) on POWER9
Michael Neuling [Tue, 27 Mar 2018 04:37:20 +0000 (15:37 +1100)]
KVM: PPC: Book3S HV: Return error from h_set_mode(SET_DAWR) on POWER9

Return H_P2 on a h_set_mode(SET_DAWR) on POWER9 where the DAWR is
disabled.

Current Linux guests ignore this error, so they will silently not get
the DAWR (sigh). The same error code is being used by POWERVM in this
case.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: Update xmon to use ppc_breakpoint_available()
Michael Neuling [Tue, 27 Mar 2018 04:37:19 +0000 (15:37 +1100)]
powerpc: Update xmon to use ppc_breakpoint_available()

The 'bd' command will now print an error and not set the breakpoint on
P9.

Signed-off-by: Michael Neuling <mikey@neuling.org>
[mpe: Unsplit quoted string]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: Update ptrace to use ppc_breakpoint_available()
Michael Neuling [Tue, 27 Mar 2018 04:37:18 +0000 (15:37 +1100)]
powerpc: Update ptrace to use ppc_breakpoint_available()

This updates the ptrace code to use ppc_breakpoint_available().

We now advertise zero breakpoints via PPC_PTRACE_GETHWDBGINFO when
the DAWR is missing (ie. POWER9). This results in GDB falling back to
software emulation of the breakpoint (which is slow).

For the features advertised by PPC_PTRACE_GETHWDBGINFO, we keep
advertising DAWR, because if we don't, GDB assumes 1 breakpoint
irrespective of the number of breakpoints advertised. GDB then fails
later when trying to set this one breakpoint.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: Add ppc_breakpoint_available()
Michael Neuling [Tue, 27 Mar 2018 04:37:17 +0000 (15:37 +1100)]
powerpc: Add ppc_breakpoint_available()

Add ppc_breakpoint_available() to determine if a breakpoint is
available currently via the DAWR or DABR.
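
A plausible sketch of the helper (the exact CPU feature tests are
assumptions; the intent is "DAWR present, or DABR on pre-ISA-2.07
CPUs"):

  bool ppc_breakpoint_available(void)
  {
          if (cpu_has_feature(CPU_FTR_DAWR))
                  return true;    /* DAWR present and usable */
          if (cpu_has_feature(CPU_FTR_ARCH_207S))
                  return false;   /* POWER9 with the DAWR disabled */
          return true;            /* older CPUs provide the DABR */
  }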

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/eeh: Add eeh_state_active() helper
Sam Bobroff [Mon, 19 Mar 2018 02:49:23 +0000 (13:49 +1100)]
powerpc/eeh: Add eeh_state_active() helper

Checking for a "fully active" device state requires testing two flag
bits, which is open-coded in several places, so add a function to do
it.
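
A sketch of the helper (flag names as used by the EEH code; the exact
definition is an assumption):

  static inline bool eeh_state_active(int state)
  {
          return (state & (EEH_STATE_MMIO_ACTIVE | EEH_STATE_DMA_ACTIVE))
                  == (EEH_STATE_MMIO_ACTIVE | EEH_STATE_DMA_ACTIVE);
  }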

Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/eeh: Factor out common code eeh_reset_device()
Sam Bobroff [Wed, 21 Mar 2018 02:06:40 +0000 (13:06 +1100)]
powerpc/eeh: Factor out common code eeh_reset_device()

The caller will always pass NULL for 'rmv_data' when
'eeh_aware_driver' is true, so the first two calls to
eeh_pe_dev_traverse() can be combined without changing behaviour, as
can the two arms of the final 'if' block.

This should not change behaviour.

Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/eeh: Remove always-true tests in eeh_reset_device()
Sam Bobroff [Mon, 19 Mar 2018 02:49:04 +0000 (13:49 +1100)]
powerpc/eeh: Remove always-true tests in eeh_reset_device()

eeh_reset_device() tests the value of 'bus' more than once, but the
only caller, eeh_handle_normal_event(), does this test itself and will
never pass NULL.

So, remove the dead tests.

This should not change behaviour.

Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/eeh: Clarify arguments to eeh_reset_device()
Sam Bobroff [Mon, 19 Mar 2018 02:48:55 +0000 (13:48 +1100)]
powerpc/eeh: Clarify arguments to eeh_reset_device()

It is currently difficult to understand the behaviour of
eeh_reset_device() due to the way its parameters are used. In
particular, when 'bus' is NULL, its value is still necessary, so the
same value is looked up again locally under a different name
('frozen_bus'), but behaviour is changed.

To clarify this, add a new parameter 'driver_eeh_aware', and have the
caller set it when it would have passed NULL for 'bus' and always pass
a value for 'bus'. Then change any test that was on 'bus' to one on
'!driver_eeh_aware' and replace uses of 'frozen_bus' with 'bus'.

Also update the function's comment.

This should not change behaviour.

Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/eeh: Rename frozen_bus to bus in eeh_handle_normal_event()
Sam Bobroff [Mon, 19 Mar 2018 02:47:02 +0000 (13:47 +1100)]
powerpc/eeh: Rename frozen_bus to bus in eeh_handle_normal_event()

The name "frozen_bus" is misleading: it's not necessarily frozen, it's
just the PE's PCI bus.

Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/eeh: Remove misleading test in eeh_handle_normal_event()
Sam Bobroff [Mon, 19 Mar 2018 02:46:51 +0000 (13:46 +1100)]
powerpc/eeh: Remove misleading test in eeh_handle_normal_event()

Remove a test that checks if "frozen_bus" is NULL, because it cannot
have changed since it was tested at the start of the function and so
must be true here.

Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/eeh: Fix misleading comment in __eeh_addr_cache_get_device()
Sam Bobroff [Mon, 19 Mar 2018 02:46:40 +0000 (13:46 +1100)]
powerpc/eeh: Fix misleading comment in __eeh_addr_cache_get_device()

Commit "0ba178888b05 powerpc/eeh: Remove reference to PCI device"
removed a call to pci_dev_get() from __eeh_addr_cache_get_device() but
did not update the comment to match.

Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/eeh: Manage EEH_PE_RECOVERING inside eeh_handle_normal_event()
Sam Bobroff [Mon, 19 Mar 2018 02:46:30 +0000 (13:46 +1100)]
powerpc/eeh: Manage EEH_PE_RECOVERING inside eeh_handle_normal_event()

Currently the EEH_PE_RECOVERING flag for a PE is managed by both the
caller and callee of eeh_handle_normal_event() (among other places not
considered here). This is complicated by the fact that the PE may
or may not have been invalidated by the call.

So move the callee's handling into eeh_handle_normal_event(), which
clarifies it and allows the return type to be changed to void (because
it no longer needs to indicate that the PE has been invalidated).

This should not change behaviour except in eeh_event_handler() where
it was previously possible to cause eeh_pe_state_clear() to be called
on an invalid PE, which is now avoided.

Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/eeh: Remove eeh_handle_event()
Sam Bobroff [Mon, 19 Mar 2018 02:46:20 +0000 (13:46 +1100)]
powerpc/eeh: Remove eeh_handle_event()

The function eeh_handle_event(pe) does nothing other than switch
between calling eeh_handle_normal_event(pe) and
eeh_handle_special_event(). However it is only called in two places,
one where pe can't be NULL and the other where it must be NULL (see
eeh_event_handler()) so it does nothing but obscure the flow of
control.

So, remove it.

Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/powernv/npu: Do not try invalidating 32bit table when 64bit table is enabled
Alexey Kardashevskiy [Tue, 13 Feb 2018 05:51:35 +0000 (16:51 +1100)]
powerpc/powernv/npu: Do not try invalidating 32bit table when 64bit table is enabled

GPUs and the corresponding NVLink bridges get different PEs as they
have separate translation validation entries (TVEs). We put these PEs
to the same IOMMU group so they cannot be passed through separately.
So the iommu_table_group_ops::set_window/unset_window hooks for GPUs
also set the tables on the NPU PEs, which means that the iommu_table's
list of attached PEs (iommu_table_group_link) has both GPU and NPU PEs
linked.
This list is used for TCE cache invalidation.

The problem is that the NPU PE has just a single TVE and can be
programmed to point to a 32bit or 64bit window, while the GPU PE has
two (like any other PCI device). So we end up having a 32bit
iommu_table struct linked to both PEs even though only the 64bit TCE
table cache can be invalidated on the NPU. And a relatively recent
skiboot detects this and prints errors.

This changes the GPU's iommu_table_group_ops::set_window/unset_window
to make sure that the NPU PE is only linked to the table actually used
by the hardware. If there are two tables used by an IOMMU group, the
NPU PE will use the last programmed one, which with the current use
scenarios is expected to be a 64bit one.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/mm: Fix typo in comments
Alexey Kardashevskiy [Thu, 1 Feb 2018 05:07:25 +0000 (16:07 +1100)]
powerpc/mm: Fix typo in comments

Fixes: 912cc87a6 ("powerpc/mm/radix: Add LPID based tlb flush helpers")
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/lpar/debug: Initialize flags before printing debug message
Alexey Kardashevskiy [Tue, 9 Jan 2018 05:52:14 +0000 (16:52 +1100)]
powerpc/lpar/debug: Initialize flags before printing debug message

With DEBUG enabled, there is a compile error:
"error: ‘flags’ is used uninitialized in this function".

This moves the pr_devel() a little further down, to where @flags is
initialized.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/init: Do not advertise radix during client-architecture-support
Alexey Kardashevskiy [Tue, 9 Jan 2018 05:45:20 +0000 (16:45 +1100)]
powerpc/init: Do not advertise radix during client-architecture-support

Currently the pseries kernel advertises radix MMU support even if
the actual support is disabled via the CONFIG_PPC_RADIX_MMU option.

This adds a check for CONFIG_PPC_RADIX_MMU to avoid advertising radix
to the hypervisor.

Suggested-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/mm: Fix section mismatch warning in stop_machine_change_mapping()
Mauricio Faria de Oliveira [Fri, 9 Mar 2018 20:45:58 +0000 (17:45 -0300)]
powerpc/mm: Fix section mismatch warning in stop_machine_change_mapping()

Fix the warning messages for stop_machine_change_mapping(), and a number
of other affected functions in its call chain.

All modified functions are under CONFIG_MEMORY_HOTPLUG, so __meminit
is okay (keeps them / does not discard them).

Boot-tested on powernv/power9/radix-mmu and pseries/power8/hash-mmu.

    $ make -j$(nproc) CONFIG_DEBUG_SECTION_MISMATCH=y vmlinux
    ...
      MODPOST vmlinux.o
    WARNING: vmlinux.o(.text+0x6b130): Section mismatch in reference from the function stop_machine_change_mapping() to the function .meminit.text:create_physical_mapping()
    The function stop_machine_change_mapping() references
    the function __meminit create_physical_mapping().
    This is often because stop_machine_change_mapping lacks a __meminit
    annotation or the annotation of create_physical_mapping is wrong.

    WARNING: vmlinux.o(.text+0x6b13c): Section mismatch in reference from the function stop_machine_change_mapping() to the function .meminit.text:create_physical_mapping()
    The function stop_machine_change_mapping() references
    the function __meminit create_physical_mapping().
    This is often because stop_machine_change_mapping lacks a __meminit
    annotation or the annotation of create_physical_mapping is wrong.
    ...

Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Acked-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64s: Wire up cpu_show_spectre_v2()
Michael Ellerman [Tue, 27 Mar 2018 12:01:53 +0000 (23:01 +1100)]
powerpc/64s: Wire up cpu_show_spectre_v2()

Add a definition for cpu_show_spectre_v2() to override the generic
version. This has several permutations; though in practice some may
not occur, we cater for any combination.

The most verbose is:

  Mitigation: Indirect branch serialisation (kernel only), Indirect
  branch cache disabled, ori31 speculation barrier enabled

We don't treat the ori31 speculation barrier as a mitigation on its
own, because it has to be *used* by code in order to be a mitigation
and we don't know if userspace is doing that. So if that's all we see
we say:

  Vulnerable, ori31 speculation barrier enabled

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64s: Wire up cpu_show_spectre_v1()
Michael Ellerman [Tue, 27 Mar 2018 12:01:52 +0000 (23:01 +1100)]
powerpc/64s: Wire up cpu_show_spectre_v1()

Add a definition for cpu_show_spectre_v1() to override the generic
version. Currently this just prints "Not affected" or "Vulnerable"
based on the firmware flag.

Although the kernel does have array_index_nospec() in a few places, we
haven't yet audited all the powerpc code to see where it's necessary,
so for now we don't list that as a mitigation.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/pseries: Use the security flags in pseries_setup_rfi_flush()
Michael Ellerman [Tue, 27 Mar 2018 12:01:51 +0000 (23:01 +1100)]
powerpc/pseries: Use the security flags in pseries_setup_rfi_flush()

Now that we have the security flags we can simplify the code in
pseries_setup_rfi_flush() because the security flags have pessimistic
defaults.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/powernv: Use the security flags in pnv_setup_rfi_flush()
Michael Ellerman [Tue, 27 Mar 2018 12:01:50 +0000 (23:01 +1100)]
powerpc/powernv: Use the security flags in pnv_setup_rfi_flush()

Now that we have the security flags we can significantly simplify the
code in pnv_setup_rfi_flush(), because we can use the flags instead of
checking device tree properties and because the security flags have
pessimistic defaults.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64s: Enhance the information in cpu_show_meltdown()
Michael Ellerman [Tue, 27 Mar 2018 12:01:49 +0000 (23:01 +1100)]
powerpc/64s: Enhance the information in cpu_show_meltdown()

Now that we have the security feature flags we can make the
information displayed in the "meltdown" file more informative.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64s: Move cpu_show_meltdown()
Michael Ellerman [Tue, 27 Mar 2018 12:01:48 +0000 (23:01 +1100)]
powerpc/64s: Move cpu_show_meltdown()

This landed in setup_64.c for no good reason other than we had nowhere
else to put it. Now that we have a security-related file, that is a
better place for it, so move it.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/powernv: Set or clear security feature flags
Michael Ellerman [Tue, 27 Mar 2018 12:01:47 +0000 (23:01 +1100)]
powerpc/powernv: Set or clear security feature flags

Now that we have feature flags for security-related things, set or
clear them based on what we see in the device tree provided by
firmware.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/pseries: Set or clear security feature flags
Michael Ellerman [Tue, 27 Mar 2018 12:01:46 +0000 (23:01 +1100)]
powerpc/pseries: Set or clear security feature flags

Now that we have feature flags for security-related things, set or
clear them based on what we receive from the hypercall.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: Add security feature flags for Spectre/Meltdown
Michael Ellerman [Tue, 27 Mar 2018 12:01:44 +0000 (23:01 +1100)]
powerpc: Add security feature flags for Spectre/Meltdown

This commit adds security feature flags to reflect the settings we
receive from firmware regarding Spectre/Meltdown mitigations.

The feature names reflect the names we are given by firmware on bare
metal machines. See the hostboot source for details.

Arguably these could be firmware features, but that would require them
to be read early in boot so they're available prior to asm feature
patching, even though we don't actually want to use them for
patching. We may
also want to dynamically update them in future, which would be
incompatible with the way firmware features work (at the moment at
least). So for now just make them separate flags.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/pseries: Add new H_GET_CPU_CHARACTERISTICS flags
Michael Ellerman [Tue, 27 Mar 2018 12:01:45 +0000 (23:01 +1100)]
powerpc/pseries: Add new H_GET_CPU_CHARACTERISTICS flags

Add some additional values which have been defined for the
H_GET_CPU_CHARACTERISTICS hypercall.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/rfi-flush: Call setup_rfi_flush() after LPM migration
Michael Ellerman [Wed, 14 Mar 2018 22:40:42 +0000 (19:40 -0300)]
powerpc/rfi-flush: Call setup_rfi_flush() after LPM migration

We might have migrated to a machine that uses a different flush type,
or doesn't need flushing at all.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/rfi-flush: Differentiate enabled and patched flush types
Mauricio Faria de Oliveira [Wed, 14 Mar 2018 22:40:41 +0000 (19:40 -0300)]
powerpc/rfi-flush: Differentiate enabled and patched flush types

Currently the rfi-flush messages print 'Using <type> flush' for all
enabled_flush_types, but that is not necessarily true: the fallback
flush is now always enabled on pseries, but the fixup function
overwrites its nop/branch slot with other flush types, if available.

So, replace the 'Using <type> flush' messages with '<type> flush is
available'.

Also, print the patched flush types in the fixup function, so users
can know what is (not) being used (e.g., the slower, fallback flush,
or no flush type at all if flush is disabled via the debugfs switch).

Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/rfi-flush: Always enable fallback flush on pseries
Michael Ellerman [Wed, 14 Mar 2018 22:40:40 +0000 (19:40 -0300)]
powerpc/rfi-flush: Always enable fallback flush on pseries

This ensures the fallback flush area is always allocated on pseries,
so in case a LPAR is migrated from a patched to an unpatched system,
it is possible to enable the fallback flush in the target system.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/rfi-flush: Make it possible to call setup_rfi_flush() again
Michael Ellerman [Wed, 14 Mar 2018 22:40:39 +0000 (19:40 -0300)]
powerpc/rfi-flush: Make it possible to call setup_rfi_flush() again

For PowerVM migration we want to be able to call setup_rfi_flush()
again after we've migrated the partition.

To support that we need to check that we're not trying to allocate the
fallback flush area after memblock has gone away (i.e., boot-time only).

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/rfi-flush: Move the logic to avoid a redo into the debugfs code
Michael Ellerman [Wed, 14 Mar 2018 22:40:38 +0000 (19:40 -0300)]
powerpc/rfi-flush: Move the logic to avoid a redo into the debugfs code

rfi_flush_enable() includes a check to see if we're already
enabled (or disabled), and in that case does nothing.

But that means calling setup_rfi_flush() a second time doesn't actually
work, which is a bit confusing.

Move that check into the debugfs code, where it really belongs.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/perf: Add blacklisted events for Power9 DD2.2
Madhavan Srinivasan [Sun, 4 Mar 2018 11:56:28 +0000 (17:26 +0530)]
powerpc/perf: Add blacklisted events for Power9 DD2.2

These events either do not count, or do not count correctly, so to
prevent user confusion block them from being counted at all.

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
[mpe: Change log]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/perf: Add blacklisted events for Power9 DD2.1
Madhavan Srinivasan [Sun, 4 Mar 2018 11:56:27 +0000 (17:26 +0530)]
powerpc/perf: Add blacklisted events for Power9 DD2.1

These events either do not count, or do not count correctly, so to
prevent user confusion block them from being counted at all.

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
[mpe: Change log]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/perf: Infrastructure to support addition of blacklisted events
Madhavan Srinivasan [Sun, 4 Mar 2018 11:56:26 +0000 (17:26 +0530)]
powerpc/perf: Infrastructure to support addition of blacklisted events

Introduce code to support addition of blacklisted events for a
processor version. Blacklisted events are events that are known to not
count correctly on that CPU revision, and so should be prevented from
being counted so as to avoid user confusion.

A pointer to the event list and an 'int' holding the number of events
are added to 'struct power_pmu', along with a generic function to loop
through the list and validate a given event. This generic function,
'is_event_blacklisted', is called in power_pmu_event_init() to detect
and reject blacklisted events early.
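
A sketch of the lookup (field names mirror the description above and
are assumptions):

  static bool is_event_blacklisted(u64 ev)
  {
          int i;

          for (i = 0; i < ppmu->n_blacklist_ev; i++)
                  if (ppmu->blacklist_ev[i] == ev)
                          return true;

          return false;
  }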

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/perf: Prevent kernel address leak via perf_get_data_addr()
Madhavan Srinivasan [Wed, 21 Mar 2018 11:40:26 +0000 (17:10 +0530)]
powerpc/perf: Prevent kernel address leak via perf_get_data_addr()

Sampled Data Address Register (SDAR) is a 64-bit register that
contains the effective address of the storage operand of an
instruction that was being executed, possibly out-of-order, at or
around the time that the Performance Monitor alert occurred.

In certain scenarios the SDAR can contain a kernel address even for
userspace-only sampling. Add checks to prevent it.
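
The shape of the added check (a sketch; is_kernel_addr(),
perf_paranoid_kernel() and capable() are existing kernel helpers):

  if (is_kernel_addr(mfspr(SPRN_SDAR)) &&
      perf_paranoid_kernel() && !capable(CAP_SYS_ADMIN))
          *addrp = 0;     /* don't hand kernel addresses to userspace */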

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/perf: Prevent kernel address leak to userspace via BHRB buffer
Madhavan Srinivasan [Wed, 21 Mar 2018 11:40:25 +0000 (17:10 +0530)]
powerpc/perf: Prevent kernel address leak to userspace via BHRB buffer

The current Branch History Rolling Buffer (BHRB) code does not check
for any privilege levels before updating the data from BHRB. This
could leak kernel addresses to userspace even when profiling only with
userspace privileges. Add proper checks to prevent it.
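
The analogous check in the BHRB reader, sketched (variable names are
illustrative):

  addr = val & BHRB_EA;
  if (is_kernel_addr(addr) &&
      perf_paranoid_kernel() && !capable(CAP_SYS_ADMIN))
          continue;       /* skip kernel entries for unprivileged users */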

Acked-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/perf: Fix kernel address leak via sampling registers
Michael Ellerman [Wed, 21 Mar 2018 11:40:24 +0000 (17:10 +0530)]
powerpc/perf: Fix kernel address leak via sampling registers

Current code in power_pmu_disable() does not clear the sampling
registers like Sampling Instruction Address Register (SIAR) and
Sampling Data Address Register (SDAR) after disabling the PMU. Since
these are userspace-readable and could contain kernel addresses, add
code to explicitly clear the content of these registers.

Also add a "context synchronizing instruction" to enforce no further
updates to these registers as suggested by Power ISA v3.0B. From
section 9.4, on page 1108:

  "If an mtspr instruction is executed that changes the value of a
  Performance Monitor register other than SIAR, SDAR, and SIER, the
  change is not guaranteed to have taken effect until after a
  subsequent context synchronizing instruction has been executed (see
  Chapter 11. "Synchronization Requirements for Context Alterations"
  on page 1133)."
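
The corresponding sketch (mtspr()/isync() per the description; the
placement inside power_pmu_disable() is condensed):

  /* Clear possibly sensitive sampling registers after freezing
   * the counters.
   */
  mtspr(SPRN_SIAR, 0);
  mtspr(SPRN_SDAR, 0);

  /* Context synchronizing instruction, per ISA v3.0B section 9.4,
   * so no further updates to these registers occur.
   */
  isync();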

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
[mpe: Massage change log and add ISA reference]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64: Call H_REGISTER_PROC_TBL when running as a HPT guest on POWER9
Paul Mackerras [Thu, 16 Feb 2017 05:03:39 +0000 (16:03 +1100)]
powerpc/64: Call H_REGISTER_PROC_TBL when running as a HPT guest on POWER9

On POWER9, since commit cc3d2940133d ("powerpc/64: Enable use of radix
MMU under hypervisor on POWER9", 2017-01-30), we set both the radix and
HPT bits in the client-architecture-support (CAS) vector, which tells
the hypervisor that we can do either radix or HPT.  According to PAPR,
if we use this combination we are promising to do a H_REGISTER_PROC_TBL
hcall later on to let the hypervisor know whether we are doing radix
or HPT.  We currently do this call if we are doing radix but not if
we are doing HPT.  If the hypervisor is able to support both radix
and HPT guests, it would be entitled to defer allocation of the HPT
until the H_REGISTER_PROC_TBL call, and to fail any attempts to create
HPTEs until the H_REGISTER_PROC_TBL call.  Thus we need to do a
H_REGISTER_PROC_TBL call when we are doing HPT; otherwise we may
crash at boot time.

This adds the code to call H_REGISTER_PROC_TBL in this case, before
we attempt to create any HPT entries using H_ENTER.

Fixes: cc3d2940133d ("powerpc/64: Enable use of radix MMU under hypervisor on POWER9")
Cc: stable@vger.kernel.org # v4.11+
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Reviewed-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64s: Fix i-side SLB miss bad address handler saving nonvolatile GPRs
Nicholas Piggin [Fri, 23 Mar 2018 05:53:38 +0000 (15:53 +1000)]
powerpc/64s: Fix i-side SLB miss bad address handler saving nonvolatile GPRs

The SLB bad address handler's trap number fixup does not preserve the
low bit that indicates nonvolatile GPRs have not been saved. This
leads save_nvgprs() to skip saving them, and subsequent functions and
return from interrupt will think they are saved.

This causes kernel branch-to-garbage debugging to not have correct
registers, and can also cause userspace to have its registers clobbered
after a segfault.

Fixes: f0f558b131db ("powerpc/mm: Preserve CFAR value on SLB miss caused by access to bogus address")
Cc: stable@vger.kernel.org # v4.9+
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agoMerge branch 'topic/ppc-kvm' into next
Michael Ellerman [Fri, 23 Mar 2018 21:43:18 +0000 (08:43 +1100)]
Merge branch 'topic/ppc-kvm' into next

This brings in two series from Paul, one of which touches KVM code and
may need to be merged into the kvm-ppc tree to resolve conflicts.

6 years agoKVM: PPC: Book3S HV: Work around TEXASR bug in fake suspend state
Paul Mackerras [Wed, 21 Mar 2018 10:32:03 +0000 (21:32 +1100)]
KVM: PPC: Book3S HV: Work around TEXASR bug in fake suspend state

This works around a hardware bug in "Nimbus" POWER9 DD2.2 processors,
where the contents of the TEXASR can get corrupted while a thread is
in fake suspend state.  The workaround is for the instruction emulation
code to use the value saved at the most recent guest exit in real
suspend mode.  We achieve this by simply not saving the TEXASR into
the vcpu struct on an exit in fake suspend state.  We also have to
take care to set the orig_texasr field only on guest exit in real
suspend state.

This also means that on guest entry in fake suspend state, TEXASR
will be restored to the value it had on the last exit in real suspend
state, effectively counteracting any hardware-caused corruption.  This
works because TEXASR may not be written in suspend state.

With this, the guest might see the wrong values in TEXASR if it reads
it while in suspend state, but will see the correct value in
non-transactional state (e.g. after a treclaim), and treclaim will
work correctly.

With this workaround, the code will actually run slightly faster, and
will operate correctly on systems without the TEXASR bug (since TEXASR
may not be written in suspend state, and is only changed by failure
recording, which will have already been done before we get into fake
suspend state).  Therefore these changes are not made subject to a CPU
feature bit.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agoKVM: PPC: Book3S HV: Work around XER[SO] bug in fake suspend mode
Suraj Jitindar Singh [Wed, 21 Mar 2018 10:32:02 +0000 (21:32 +1100)]
KVM: PPC: Book3S HV: Work around XER[SO] bug in fake suspend mode

This works around a hardware bug in "Nimbus" POWER9 DD2.2 processors,
where a treclaim performed in fake suspend mode can cause subsequent
reads from the XER register to return inconsistent values for the SO
(summary overflow) bit.  The inconsistent SO bit state can potentially
be observed on any thread in the core.  We have to do the treclaim
because that is the only way to get the thread out of suspend state
(fake or real) and into non-transactional state.

The workaround for the bug is to force the core into SMT4 mode before
doing the treclaim.  This patch adds the code to do that, conditional
on the CPU_FTR_P9_TM_XER_SO_BUG feature bit.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agoKVM: PPC: Book3S HV: Work around transactional memory bugs in POWER9
Paul Mackerras [Wed, 21 Mar 2018 10:32:01 +0000 (21:32 +1100)]
KVM: PPC: Book3S HV: Work around transactional memory bugs in POWER9

POWER9 has hardware bugs relating to transactional memory and thread
reconfiguration (changes to hardware SMT mode).  Specifically, the core
does not have enough storage to store a complete checkpoint of all the
architected state for all four threads.  The DD2.2 version of POWER9
includes hardware modifications designed to allow hypervisor software
to implement workarounds for these problems.  This patch implements
those workarounds in KVM code so that KVM guests see a full, working
transactional memory implementation.

The problems center around the use of TM suspended state, where the
CPU has a checkpointed state but execution is not transactional.  The
workaround is to implement a "fake suspend" state, which looks to the
guest like suspended state but the CPU does not store a checkpoint.
In this state, any instruction that would cause a transition to
transactional state (rfid, rfebb, mtmsrd, tresume) or would use the
checkpointed state (treclaim) causes a "soft patch" interrupt (vector
0x1500) to the hypervisor so that it can be emulated.  The trechkpt
instruction also causes a soft patch interrupt.

On POWER9 DD2.2, we avoid returning to the guest in any state which
would require a checkpoint to be present.  The trechkpt in the guest
entry path which would normally create that checkpoint is replaced by
either a transition to fake suspend state, if the guest is in suspend
state, or a rollback to the pre-transactional state if the guest is in
transactional state.  Fake suspend state is indicated by a flag in the
PACA plus a new bit in the PSSCR.  The new PSSCR bit is write-only and
reads back as 0.

On exit from the guest, if the guest is in fake suspend state, we still
do the treclaim instruction as we would in real suspend state, in order
to get into non-transactional state, but we do not save the resulting
register state since there was no checkpoint.

Emulation of the instructions that cause a softpatch interrupt is
handled in two paths.  If the guest is in real suspend mode, we call
kvmhv_p9_tm_emulation_early() to handle the cases where the guest is
transitioning to transactional state.  This is called before we do the
treclaim in the guest exit path; because we haven't done treclaim, we
can get back to the guest with the transaction still active.  If the
instruction is a case that kvmhv_p9_tm_emulation_early() doesn't
handle, or if the guest is in fake suspend state, then we proceed to
do the complete guest exit path and subsequently call
kvmhv_p9_tm_emulation() in host context with the MMU on.  This handles
all the cases including the cases that generate program interrupts
(illegal instruction or TM Bad Thing) and facility unavailable
interrupts.
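
As a rough sketch of that two-path dispatch (the vcpu field and return
convention are assumptions; only the two function names come from this
patch):

  #include <stdbool.h>

  struct vcpu { bool fake_suspend; /* ... */ };

  /* Stubs standing in for the real emulation routines. */
  static bool kvmhv_p9_tm_emulation_early(struct vcpu *v)
  { (void)v; return false; }
  static void kvmhv_p9_tm_emulation(struct vcpu *v) { (void)v; }

  static void handle_softpatch(struct vcpu *vcpu)
  {
      /* Fast path: real suspend, before treclaim, so the
       * transaction can stay active if emulation succeeds. */
      if (!vcpu->fake_suspend && kvmhv_p9_tm_emulation_early(vcpu))
          return;

      /* Slow path: complete the guest exit, then emulate in host
       * context with the MMU on. */
      kvmhv_p9_tm_emulation(vcpu);
  }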

The emulation is reasonably straightforward and is mostly concerned
with checking for exception conditions and updating the state of
registers such as MSR and CR0.  The treclaim emulation takes care to
ensure that the TEXASR register gets updated as if it were the guest
treclaim instruction that had done failure recording, not the treclaim
done in hypervisor state in the guest exit path.

With this, the KVM_CAP_PPC_HTM capability returns true (1) even if
transactional memory is not available to host userspace.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/powernv: Provide a way to force a core into SMT4 mode
Paul Mackerras [Wed, 21 Mar 2018 10:32:00 +0000 (21:32 +1100)]
powerpc/powernv: Provide a way to force a core into SMT4 mode

POWER9 processors up to and including "Nimbus" v2.2 have hardware
bugs relating to transactional memory and thread reconfiguration.
One of these bugs has a workaround which is to get the core into
SMT4 state temporarily.  This workaround is only needed when
running bare-metal.

This patch provides a function which gets the core into SMT4 mode
by preventing threads from going to a stop state, and waking up
those which are already in a stop state.  Once at least 3 threads
are not in a stop state, the core will be in SMT4 and we can
continue.

To do this, we add a "dont_stop" flag to the paca to tell the
thread not to go into a stop state.  If this flag is set,
power9_idle_stop() just returns immediately with a return value
of 0.  The pnv_power9_force_smt4_catch() function does the following:

1. Set the dont_stop flag for each thread in the core, except
   ourselves (in fact we use an atomic_inc() in case more than
   one thread is calling this function concurrently).
2. See how many threads are awake, indicated by their
   requested_psscr field in the paca being 0.  If this is at
   least 3, the core is already in SMT4: skip the remaining steps.
3. Send a doorbell interrupt to each thread that was seen as
   being in a stop state in step 2.
4. Until at least 3 threads are awake, scan the threads to which
   we sent a doorbell interrupt and check if they are awake now
   (a sketch of this catch loop follows the property list below).

This relies on the following properties:

- Once dont_stop is non-zero, requested_psscr can't go from zero to
  non-zero, except transiently (and without the thread doing stop).
- requested_psscr being zero guarantees that the thread isn't in
  a state-losing stop state where thread reconfiguration could occur.
- Doing stop with a PSSCR value of 0 won't be a state-losing stop
  and thus won't allow thread reconfiguration.
- Once threads_per_core/2 + 1 (i.e. 3) threads are awake, the core
  must be in SMT4 mode, since SMT modes are powers of 2.
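
As a rough model of steps 1-4 and the matching release, in userspace
C11 with hypothetical helpers (the real code lives under
arch/powerpc/platforms/powernv):

  #include <stdatomic.h>

  #define THREADS_PER_CORE 4

  struct thread_paca {
      atomic_int dont_stop;                  /* step 1 flag (counted) */
      _Atomic unsigned long requested_psscr; /* 0 => thread is awake */
  };

  static struct thread_paca paca[THREADS_PER_CORE];

  static void send_doorbell(int t) { (void)t; /* msgsnd stand-in */ }

  static int count_awake(void)
  {
      int awake = 0;

      for (int t = 0; t < THREADS_PER_CORE; t++)
          if (atomic_load(&paca[t].requested_psscr) == 0)
              awake++;
      return awake;
  }

  static void force_smt4_catch(int me)
  {
      /* Step 1: forbid the other threads from entering stop. */
      for (int t = 0; t < THREADS_PER_CORE; t++)
          if (t != me)
              atomic_fetch_add(&paca[t].dont_stop, 1);

      /* Steps 2 and 3: wake any thread that looks stopped. */
      if (count_awake() < 3)
          for (int t = 0; t < THREADS_PER_CORE; t++)
              if (atomic_load(&paca[t].requested_psscr) != 0)
                  send_doorbell(t);

      /* Step 4: wait until at least 3 threads are awake (SMT4). */
      while (count_awake() < 3)
          ;
  }

  static void force_smt4_release(int me)
  {
      /* Undo step 1 so the other threads may stop again. */
      for (int t = 0; t < THREADS_PER_CORE; t++)
          if (t != me)
              atomic_fetch_sub(&paca[t].dont_stop, 1);
  }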

This does add a sync to power9_idle_stop(), which is necessary to
provide the correct ordering between setting requested_psscr and
checking dont_stop.  The overhead of the sync should be unnoticeable
compared to the latency of going into and out of a stop state.

Because some objected to incurring this extra latency on systems where
the XER[SO] bug is not relevant, I have put the test in
power9_idle_stop inside a feature section.  This means that
pnv_power9_force_smt4_catch() WILL NOT WORK correctly on systems
without the CPU_FTR_P9_TM_XER_SO_BUG feature bit set, and will
probably hang the system.

In order to cater for uses where the caller has an operation that
has to be done while the core is in SMT4, the core continues to be
kept in SMT4 after pnv_power9_force_smt4_catch() function returns,
until the pnv_power9_force_smt4_release() function is called.
It undoes the effect of step 1 above and allows the other threads
to go into a stop state.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: Add CPU feature bits for TM bug workarounds on POWER9 v2.2
Paul Mackerras [Wed, 21 Mar 2018 10:31:59 +0000 (21:31 +1100)]
powerpc: Add CPU feature bits for TM bug workarounds on POWER9 v2.2

This adds a CPU feature bit which is set for POWER9 "Nimbus" DD2.2
processors which will be used to enable the hypervisor to assist
hardware with the handling of checkpointed register values while the
CPU is in suspend state, in order to work around hardware bugs.  The
hardware assistance for these workarounds introduced a new hardware
bug relating to the XER[SO] bit.  We add a separate feature bit for
this bug in case future chips fix it while still requiring the
hypervisor assistance with suspend state.

When the dt_cpu_ftrs subsystem is in use, the hypervisor assistance can
be enabled using a "tm-suspend-hypervisor-assist" node in the device
tree, and a "tm-suspend-xer-so-bug" node enables the workarounds for
the XER[SO] bug.  In the absence of such nodes, a quirk enables both
for POWER9 "Nimbus" DD2.2 processors.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: Free up CPU feature bits on 64-bit machines
Paul Mackerras [Mon, 19 Mar 2018 21:46:13 +0000 (08:46 +1100)]
powerpc: Free up CPU feature bits on 64-bit machines

This moves all the CPU feature bits that are only used on 32-bit
machines to the top 20 bits of the CPU feature word and arranges
for them to be defined only in 32-bit builds.  The features that
are common to 32-bit and 64-bit machines are moved to bits 0-11
of the CPU feature word.  This means that for 64-bit platforms,
bits 44-63 can now be used for new features that only exist on
64-bit machines.  (These bit numbers are counting from the right,
i.e. the LSB is bit 0.)

Because CPU_FTR_L3_DISABLE_NAP moved from the low 16 bits to the high
16 bits, we have to adjust some assembly code.  Also, CPU_FTR_EMB_HV
moved from the high 16 bits to the low 16 bits.

Note that CPU_FTR_REAL_LE only applies to 64-bit chips, because only
64-bit chips (POWER6, 7, 8, 9) have a true little-endian mode that is
a CPU execution mode as opposed to being a page attribute.

With this we now have 20 free CPU feature bits on 64-bit machines.
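
A hedged sketch of the resulting layout (example macro names only, not
the actual cputable.h definitions):

  /* Bits 0-11: features common to 32-bit and 64-bit machines. */
  #define CPU_FTR_EXAMPLE_COMMON  (1UL << 2)

  #ifdef CONFIG_PPC64
  /* Bits 44-63 are now free for 64-bit-only features. */
  #define CPU_FTR_EXAMPLE_64ONLY  (1UL << 44)
  #else
  /* The top 20 bits of the 32-bit word (bits 12-31) hold features
   * that are defined only in 32-bit builds. */
  #define CPU_FTR_EXAMPLE_32ONLY  (1UL << 12)
  #endif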

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: Book E: Remove unused CPU_FTR_L2CSR bit
Paul Mackerras [Mon, 19 Mar 2018 21:46:12 +0000 (08:46 +1100)]
powerpc: Book E: Remove unused CPU_FTR_L2CSR bit

The CPU_FTR_L2CSR bit is never tested anywhere, so let's reclaim the
bit.

The last usage was removed in 86d63363defc ("powerpc/e500mc: Remove
dead L2 flushing code in idle_e500.S") (Jun 2015).

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: Use feature bit for RTC presence rather than timebase presence
Paul Mackerras [Mon, 19 Mar 2018 21:46:11 +0000 (08:46 +1100)]
powerpc: Use feature bit for RTC presence rather than timebase presence

All PowerPC CPUs other than the original PPC601 have a timebase
register rather than the "real-time clock" (RTC) register that the
PPC601 (and the original POWER and POWER2 CPUs) had.  Currently
we have a CPU feature bit to indicate the presence of the timebase,
but it makes more sense to use a bit to indicate the unusual
situation rather than the common situation.  This therefore defines
a CPU_FTR_USE_RTC bit in place of the CPU_FTR_USE_TB bit, and
arranges for it to be set on PPC601 systems.
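
The effect, as a minimal sketch (the helpers are stand-ins for the
kernel's time setup code; the bit value is illustrative):

  #include <stdbool.h>

  #define CPU_FTR_USE_RTC (1UL << 1)     /* illustrative bit value */

  static unsigned long cur_cpu_features; /* stand-in for cputable */

  static bool cpu_has_feature(unsigned long ftr)
  {
      return cur_cpu_features & ftr;
  }

  static void use_rtc_registers(void) { /* PPC601: read RTCU/RTCL */ }
  static void use_timebase(void)      { /* mftb everywhere else */ }

  static void setup_clocksource(void)
  {
      if (cpu_has_feature(CPU_FTR_USE_RTC)) /* flag the rare case */
          use_rtc_registers();
      else
          use_timebase();
  }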

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/mm: Fixup tlbie vs store ordering issue on POWER9
Aneesh Kumar K.V [Fri, 23 Mar 2018 04:56:27 +0000 (10:26 +0530)]
powerpc/mm: Fixup tlbie vs store ordering issue on POWER9

On POWER9, under some circumstances, a broadcast TLB invalidation
might complete before all previous stores have drained, potentially
allowing stale stores to become visible after the invalidation.
This works around it by doubling up those TLB invalidations, which
was verified by HW to be sufficient to close the risk window.
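
The shape of the workaround, as a hedged sketch (do_tlbie() stands in
for the real tlbie sequence in the radix/hash flush code):

  #define CPU_FTR_P9_TLBIE_BUG (1UL << 58) /* illustrative value */

  static int cpu_has_feature(unsigned long f) { (void)f; return 1; }
  static void do_tlbie(unsigned long va, unsigned long pid)
  { (void)va; (void)pid; /* issue the invalidation */ }

  static void fixup_tlbie(unsigned long va, unsigned long pid)
  {
      do_tlbie(va, pid);            /* the normal invalidation */
      if (cpu_has_feature(CPU_FTR_P9_TLBIE_BUG))
          do_tlbie(va, pid);        /* doubled up to close the window */
  }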

This will be documented in a yet-to-be-published errata.

Fixes: 1a472c9dba6b ("powerpc/mm/radix: Add tlbflush routines")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
[mpe: Enable the feature in the DT CPU features code for all Power9,
      rename the feature to CPU_FTR_P9_TLBIE_BUG per benh.]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/mm/radix: Move the functions that do the actual tlbie closer
Aneesh Kumar K.V [Fri, 23 Mar 2018 04:56:26 +0000 (10:26 +0530)]
powerpc/mm/radix: Move the functions that do the actual tlbie closer

No functional change; just code movement to ease later changes.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/mm/radix: Remove unused code
Aneesh Kumar K.V [Fri, 23 Mar 2018 04:56:25 +0000 (10:26 +0530)]
powerpc/mm/radix: Remove unused code

These functions are not used in the code. Remove them.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/mm: Workaround Nest MMU bug with TLB invalidations
Benjamin Herrenschmidt [Thu, 22 Mar 2018 22:29:06 +0000 (09:29 +1100)]
powerpc/mm: Workaround Nest MMU bug with TLB invalidations

On POWER9 the Nest MMU may fail to invalidate some translations when
doing a tlbie "by PID" or "by LPID" that is targeted at the TLB only
and not the page walk cache.

This works around it by forcing such invalidations to escalate to
RIC=2 (full invalidation of TLB *and* PWC) when a coprocessor is in
use for the context.
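
A hedged sketch of the escalation (types and names are illustrative;
the real logic is in the radix TLB flush code):

  #include <stdatomic.h>

  enum ric { RIC_FLUSH_TLB = 0, RIC_FLUSH_PWC = 1, RIC_FLUSH_ALL = 2 };

  struct mm_ctx {
      unsigned long pid;
      atomic_int copros;  /* counter added by the companion patch */
  };

  static void tlbie_pid(unsigned long pid, enum ric ric)
  { (void)pid; (void)ric; /* broadcast tlbie by PID */ }

  static void flush_tlb_by_pid(struct mm_ctx *ctx, enum ric ric)
  {
      /* The Nest MMU may miss a TLB-only flush: do TLB + PWC. */
      if (ric == RIC_FLUSH_TLB && atomic_load(&ctx->copros) > 0)
          ric = RIC_FLUSH_ALL;
      tlbie_pid(ctx->pid, ric);
  }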

Fixes: 03b8abedf4f4 ("cxl: Enable global TLBIs for cxl contexts")
Cc: stable@vger.kernel.org # v4.15+
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Balbir Singh <bsingharora@gmail.com>
[balbirs: fixed spelling and coding style to quiesce checkpatch.pl]
Tested-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/mm: Add tracking of the number of coprocessors using a context
Benjamin Herrenschmidt [Thu, 22 Mar 2018 22:29:05 +0000 (09:29 +1100)]
powerpc/mm: Add tracking of the number of coprocessors using a context

Currently, when using coprocessors (which use the Nest MMU), we
simply increment the active_cpu count to force all TLB invalidations
to become broadcast.

Unfortunately, due to an erratum in POWER9, we will need to know
more specifically that coprocessors are in use.

This maintains a separate copros counter in the MMU context for
that purpose.
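
A minimal sketch of the counting helpers (shapes assumed rather than
quoted from mmu_context.h):

  #include <stdatomic.h>

  struct mm_ctx {
      atomic_int copros;      /* coprocessors using this context */
      atomic_int active_cpus; /* forces broadcast invalidations */
  };

  static void mm_context_add_copro(struct mm_ctx *ctx)
  {
      atomic_fetch_add(&ctx->copros, 1);
      atomic_fetch_add(&ctx->active_cpus, 1); /* keep tlbies broadcast */
  }

  static void mm_context_remove_copro(struct mm_ctx *ctx)
  {
      atomic_fetch_sub(&ctx->copros, 1);
      atomic_fetch_sub(&ctx->active_cpus, 1);
  }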

NB. The commit mentioned in the fixes tag below is not at fault for
the bug we're fixing in this commit and the next, but this fix applies
on top of the infrastructure it introduced.

Fixes: 03b8abedf4f4 ("cxl: Enable global TLBIs for cxl contexts")
Cc: stable@vger.kernel.org # v4.15+
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Tested-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64s: Fix lost pending interrupt due to race causing lost update to irq_happened
Nicholas Piggin [Wed, 21 Mar 2018 02:22:28 +0000 (12:22 +1000)]
powerpc/64s: Fix lost pending interrupt due to race causing lost update to irq_happened

force_external_irq_replay() can be called in the do_IRQ path with
interrupts hard enabled and soft disabled if may_hard_irq_enable() set
MSR[EE]=1. It updates local_paca->irq_happened with a load, modify,
store sequence. If a maskable interrupt hits during this sequence, it
will go to the masked handler to be marked pending in irq_happened.
This update will be lost when the interrupt returns and the store
instruction executes. This can result in unpredictable latencies,
timeouts, lockups, etc.

Fix this by ensuring hard interrupts are disabled before modifying
irq_happened.
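
A sketch of the fixed sequence (the PACA bit values and helper names
are assumptions modelled on arch/powerpc/kernel/irq.c, not quoted from
it):

  #include <stdint.h>

  #define PACA_IRQ_HARD_DIS 0x01 /* illustrative bit values */
  #define PACA_IRQ_EE       0x02

  struct paca { volatile uint8_t irq_happened; };
  static struct paca the_paca;
  static struct paca *local_paca = &the_paca;

  static void hard_irq_disable(void) { /* clear MSR[EE] */ }

  static void force_external_irq_replay(void)
  {
      hard_irq_disable(); /* no maskable interrupt can intervene now */
      local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
      local_paca->irq_happened |= PACA_IRQ_EE; /* queue the replay */
  }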

This could cause any maskable asynchronous interrupt to get lost, but
it was noticed on a P9 SMP system doing RDMA NVMe target over 100GbE,
i.e. a very high external interrupt rate and a high IPI rate. The hang was
bisected down to enabling doorbell interrupts for IPIs. These provided
an interrupt type that could run at high rates in the do_IRQ path,
stressing the race.

Fixes: 1d607bb3bd60 ("powerpc/irq: Add mechanism to force a replay of interrupts")
Cc: stable@vger.kernel.org # v4.8+
Reported-by: Carol L. Soto <clsoto@us.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: dts: replace 'linux,stdout-path' with 'stdout-path'
Rob Herring [Wed, 28 Feb 2018 22:44:06 +0000 (16:44 -0600)]
powerpc: dts: replace 'linux,stdout-path' with 'stdout-path'

'linux,stdout-path' has been deprecated for some time in favor of
'stdout-path'. Now dtc will warn on occurrences of 'linux,stdout-path'.
Search and replace all of the occurrences with 'stdout-path'.

Signed-off-by: Rob Herring <robh@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agoselftests/powerpc: Add process creation benchmark
Nicholas Piggin [Tue, 6 Mar 2018 13:24:58 +0000 (23:24 +1000)]
selftests/powerpc: Add process creation benchmark

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Add SPDX, and fixup formatting]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: Use sizeof(*foo) rather than sizeof(struct foo)
Markus Elfring [Thu, 19 Jan 2017 16:15:30 +0000 (17:15 +0100)]
powerpc: Use sizeof(*foo) rather than sizeof(struct foo)

It's slightly less error prone to use sizeof(*foo) rather than
specifying the type.
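
For example (a generic C illustration of the idiom):

  #include <stdlib.h>

  struct foo { int a, b; };

  int main(void)
  {
      struct foo *p;

      p = malloc(sizeof(*p));         /* preferred: follows p's type */
      free(p);

      p = malloc(sizeof(struct foo)); /* error prone if the type of
                                         p ever changes */
      free(p);
      return 0;
  }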

Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
[mpe: Consolidate into one patch, rewrite change log]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: Remove unused flush_dcache_phys_range()
Matt Brown [Thu, 20 Jul 2017 06:25:14 +0000 (16:25 +1000)]
powerpc: Remove unused flush_dcache_phys_range()

The flush_dcache_phys_range() function is no longer used in the
kernel. The last usage was removed in c40785ad305b ("powerpc/dart: Use
a cachable DART").

This patch removes the function and declaration.

Signed-off-by: Matt Brown <matthew.brown.dev@gmail.com>
[mpe: Munge change log, include commit that removed last user]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>