platform/kernel/linux-rpi.git
6 years ago powerpc/misc: merge reloc_offset() and add_reloc_offset()
Christophe Leroy [Tue, 17 Apr 2018 11:23:10 +0000 (13:23 +0200)]
powerpc/misc: merge reloc_offset() and add_reloc_offset()

reloc_offset() is the same as add_reloc_offset(0)
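In C terms, the equivalence being exploited is simply the following
(a sketch only; the actual helpers live in assembly in
arch/powerpc/kernel/misc.S):

  unsigned long reloc_offset(void)
  {
      /* reloc_offset() is just add_reloc_offset() with a zero input */
      return add_reloc_offset(0);
  }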

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/64: optimises from64to32()
Christophe Leroy [Tue, 10 Apr 2018 06:34:35 +0000 (08:34 +0200)]
powerpc/64: optimises from64to32()

The current implementation of from64to32() gives a poor result:

0000000000000270 <.from64to32>:
 270: 38 00 ff ff  li      r0,-1
 274: 78 69 00 22  rldicl  r9,r3,32,32
 278: 78 00 00 20  clrldi  r0,r0,32
 27c: 7c 60 00 38  and     r0,r3,r0
 280: 7c 09 02 14  add     r0,r9,r0
 284: 78 09 00 22  rldicl  r9,r0,32,32
 288: 7c 00 4a 14  add     r0,r0,r9
 28c: 78 03 00 20  clrldi  r3,r0,32
 290: 4e 80 00 20  blr

This patch modifies from64to32() to operate in the same
spirit as csum_fold()

It swaps the two 32-bit halves of sum then it adds it with the
unswapped sum. If there is a carry from adding the two 32-bit halves,
it will carry from the lower half into the upper half, giving us the
correct sum in the upper half.
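A minimal C sketch of that approach (assuming the kernel's ror64()
helper from <linux/bitops.h>; illustrative rather than the exact diff):

  static inline __u32 from64to32(__u64 x)
  {
      /* add the 32-bit-rotated copy; any carry lands in the top half */
      return (__u32)((x + ror64(x, 32)) >> 32);
  }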

The resulting code is:

0000000000000260 <.from64to32>:
 260: 78 60 00 02  rotldi  r0,r3,32
 264: 7c 60 1a 14  add     r3,r0,r3
 268: 78 63 00 22  rldicl  r3,r3,32,32
 26c: 4e 80 00 20  blr

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/mm: Remove stale_map[] handling on non SMP processors
Christophe Leroy [Wed, 21 Mar 2018 14:07:53 +0000 (15:07 +0100)]
powerpc/mm: Remove stale_map[] handling on non SMP processors

stale_map[] bits are only set in steal_context_smp() so
on UP processors this map is useless. Only manage it for SMP
processors.
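A sketch of the resulting shape of the code (illustrative, not the
exact diff):

  #ifdef CONFIG_SMP
      /* stale_map[] bits are only ever set in steal_context_smp() */
      if (test_bit(id, stale_map[cpu]))
          __clear_bit(id, stale_map[cpu]);
  #endif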

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/mm: constify LAST_CONTEXT in mmu_context_nohash
Christophe Leroy [Wed, 21 Mar 2018 14:07:51 +0000 (15:07 +0100)]
powerpc/mm: constify LAST_CONTEXT in mmu_context_nohash

last_context is 16 on the 8xx, 65535 on the 47x and 255 on other ones.

The kernel is exclusively built for the 8xx, for the 47x or for
another processor so the last context can be defined as a constant
depending on the processor.
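Illustrative constants matching the description above:

  #if defined(CONFIG_PPC_8xx)
  #define LAST_CONTEXT		16
  #elif defined(CONFIG_PPC_47x)
  #define LAST_CONTEXT		65535
  #else
  #define LAST_CONTEXT		255
  #endif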

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
[mpe: Reformat old comment]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/mm: Avoid unnecessary test and reduce code size
Christophe Leroy [Wed, 21 Mar 2018 14:07:49 +0000 (15:07 +0100)]
powerpc/mm: Avoid unnecessary test and reduce code size

no_selective_tlbil, and hence the choice between steal_all_contexts()
and steal_context_up(), depends on the subarch and won't change at
runtime. Only the 8xx uses steal_all_contexts(), and CONFIG_PPC_8xx
is exclusive of other processors.

This patch replaces the test of the no_selective_tlbil global
variable with a test of the CONFIG_PPC_8xx selection. It avoids the
runtime test and removes unnecessary code.
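A sketch of the replacement (using IS_ENABLED() so the compiler can
discard the dead branch; illustrative only):

  if (IS_ENABLED(CONFIG_PPC_8xx))
      id = steal_all_contexts();
  else
      id = steal_context_up(id);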

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/mm: constify FIRST_CONTEXT in mmu_context_nohash
Christophe Leroy [Wed, 21 Mar 2018 14:07:47 +0000 (15:07 +0100)]
powerpc/mm: constify FIRST_CONTEXT in mmu_context_nohash

First context is now 1 for all supported platforms, so it
can be made a constant.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/dma: remove unnecessary BUG()
Christophe Leroy [Wed, 28 Feb 2018 18:21:45 +0000 (19:21 +0100)]
powerpc/dma: remove unnecessary BUG()

Direction is already checked in all calling functions in
include/linux/dma-mapping.h and also in the called function
__dma_sync(), so there is really no need to check it once more here.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/sstep: Fix emulate_step test if VSX not present
Ravi Bangoria [Mon, 21 May 2018 04:21:08 +0000 (09:51 +0530)]
powerpc/sstep: Fix emulate_step test if VSX not present

emulate_step() tests are failing if VSX is not supported or disabled.

  emulate_step_test: lxvd2x         : FAIL
  emulate_step_test: stxvd2x        : FAIL

If !CPU_FTR_VSX, emulate_step() failure is expected and the testcase
should PASS with a valid justification. After the patch:

  emulate_step_test: lxvd2x         : PASS (!CPU_FTR_VSX)
  emulate_step_test: stxvd2x        : PASS (!CPU_FTR_VSX)
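A sketch of the test-side check (show_result() as used by the
emulate_step tests; exact wording illustrative):

  if (!cpu_has_feature(CPU_FTR_VSX)) {
      show_result("lxvd2x", "PASS (!CPU_FTR_VSX)");
      return;
  }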

Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/sstep: Fix kernel crash if VSX is not present
Ravi Bangoria [Mon, 21 May 2018 04:21:07 +0000 (09:51 +0530)]
powerpc/sstep: Fix kernel crash if VSX is not present

emulate_step() is not checking the runtime VSX feature flag before
emulating an instruction. This causes a kernel crash when the kernel
is compiled with CONFIG_VSX=y but runs on a machine where VSX is not
supported or is disabled, e.g. while running the emulate_step tests
on a P6 machine:

  Oops: Exception in kernel mode, sig: 4 [#1]
  NIP [c000000000095c24] .load_vsrn+0x28/0x54
  LR [c000000000094bdc] .emulate_loadstore+0x167c/0x17b0
  Call Trace:
    0x40fe240c7ae147ae (unreliable)
    .emulate_loadstore+0x167c/0x17b0
    .emulate_step+0x25c/0x5bc
    .test_lxvd2x_stxvd2x+0x64/0x154
    .test_emulate_step+0x38/0x4c
    .do_one_initcall+0x5c/0x2c0
    .kernel_init_freeable+0x314/0x4cc
    .kernel_init+0x24/0x160
    .ret_from_kernel_thread+0x58/0xb4

With fix:
  emulate_step_test: lxvd2x         : FAIL
  emulate_step_test: stxvd2x        : FAIL
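A sketch of the kind of guard added (illustrative; the point is to
refuse to emulate rather than touch VSX state on a CPU without it):

  case LOAD_VSX:
  case STORE_VSX:
      if (!cpu_has_feature(CPU_FTR_VSX))
          return 0;	/* decline to emulate, no crash */
      break;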

Reported-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/sstep: Introduce GETTYPE macro
Ravi Bangoria [Mon, 21 May 2018 04:21:06 +0000 (09:51 +0530)]
powerpc/sstep: Introduce GETTYPE macro

Replace 'op->type & INSTR_TYPE_MASK' expression with GETTYPE(op->type)
macro.
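The macro is simply the expression described above:

  #define GETTYPE(t)	((t) & INSTR_TYPE_MASK)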

Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago selftests/powerpc: Add perf breakpoint test
Michael Neuling [Mon, 28 May 2018 23:22:38 +0000 (09:22 +1000)]
selftests/powerpc: Add perf breakpoint test

This tests perf hardware breakpoints (ie PERF_TYPE_BREAKPOINT) on
powerpc.
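A minimal sketch of such a test's event setup (variable names
hypothetical; error handling omitted):

  struct perf_event_attr attr = {
      .type		= PERF_TYPE_BREAKPOINT,
      .size		= sizeof(attr),
      .bp_type	= HW_BREAKPOINT_RW,	/* fire on read and write */
      .bp_addr	= (__u64)(unsigned long)&target,
      .bp_len		= sizeof(target),
      .exclude_kernel	= 1,
  };
  int fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);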

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/64s: Enhance the information in cpu_show_spectre_v1()
Michal Suchanek [Mon, 28 May 2018 13:19:14 +0000 (15:19 +0200)]
powerpc/64s: Enhance the information in cpu_show_spectre_v1()

We now have barrier_nospec as mitigation so print it in
cpu_show_spectre_v1() when enabled.

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/64: Use barrier_nospec in syscall entry
Michael Ellerman [Tue, 24 Apr 2018 04:15:59 +0000 (14:15 +1000)]
powerpc/64: Use barrier_nospec in syscall entry

Our syscall entry is done in assembly so patch in an explicit
barrier_nospec.

Based on a patch by Michal Suchanek.

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc: Use barrier_nospec in copy_from_user()
Michael Ellerman [Tue, 24 Apr 2018 04:15:58 +0000 (14:15 +1000)]
powerpc: Use barrier_nospec in copy_from_user()

Based on the x86 commit doing the same.

See commit 304ec1b05031 ("x86/uaccess: Use __uaccess_begin_nospec()
and uaccess_try_nospec") and b3bbfb3fb5d2 ("x86: Introduce
__uaccess_begin_nospec() and uaccess_try_nospec") for more detail.

In all cases we are ordering the load from the potentially
user-controlled pointer vs a previous branch based on an access_ok()
check or similar.
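A sketch of that ordering (close to the generic powerpc pattern, but
illustrative rather than the exact diff):

  static inline unsigned long copy_from_user(void *to,
          const void __user *from, unsigned long n)
  {
      if (likely(access_ok(VERIFY_READ, from, n))) {
          /* order the user load vs the preceding access_ok() branch */
          barrier_nospec();
          return __copy_tofrom_user((__force void __user *)to, from, n);
      }
      memset(to, 0, n);
      return n;
  }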

Based on a patch from Michal Suchanek.

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/64s: Enable barrier_nospec based on firmware settings
Michal Suchanek [Tue, 24 Apr 2018 04:15:57 +0000 (14:15 +1000)]
powerpc/64s: Enable barrier_nospec based on firmware settings

Check what firmware told us and enable/disable the barrier_nospec as
appropriate.

We err on the side of enabling the barrier, as it's a no-op on older
systems, see the comment for more detail.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/64s: Patch barrier_nospec in modules
Michal Suchanek [Tue, 24 Apr 2018 04:15:56 +0000 (14:15 +1000)]
powerpc/64s: Patch barrier_nospec in modules

Note that, unlike RFI, which is patched only in the kernel, the
nospec state of a module reflects the settings at the time the module
was loaded.

Iterating over all modules and re-patching every time the settings
change is not implemented.

Based on lwsync patching.

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/64s: Add support for ori barrier_nospec patching
Michal Suchanek [Tue, 24 Apr 2018 04:15:55 +0000 (14:15 +1000)]
powerpc/64s: Add support for ori barrier_nospec patching

Based on the RFI patching. This is required to be able to disable the
speculation barrier.

Only one barrier type is supported and it does nothing when the
firmware does not enable it. Re-patching modules is also not
supported, so the only meaningful thing that can be done is patching
out the speculation barrier at boot when the user says it is not
wanted.

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/64s: Add barrier_nospec
Michal Suchanek [Tue, 24 Apr 2018 04:15:54 +0000 (14:15 +1000)]
powerpc/64s: Add barrier_nospec

A no-op form of ori (or immediate of 0 into r31 and the result stored
in r31) has been re-tasked as a speculation barrier. The instruction
only acts as a barrier on newer machines with appropriate firmware
support. On older CPUs it remains a harmless no-op.

Implement barrier_nospec using this instruction.
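Conceptually the barrier reduces to the following (a sketch only; the
real kernel routes this through a feature section so it can be
patched in or out at boot):

  /* no-op ori encoding re-tasked as a speculation barrier */
  #define barrier_nospec()	asm volatile("ori 31,31,0" ::: "memory")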

mpe: The semantics of the instruction are believed to be that it
prevents execution of subsequent instructions until preceding branches
have been fully resolved and are no longer executing speculatively.
There is no further documentation available at this time.

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/stacktrace: Update copyright
Michael Ellerman [Wed, 2 May 2018 13:07:29 +0000 (23:07 +1000)]
powerpc/stacktrace: Update copyright

This now has new code in it written by Nick and me, so update the
copyright, and switch to an SPDX tag.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
6 years ago powerpc/64s: Wire up arch_trigger_cpumask_backtrace()
Michael Ellerman [Wed, 2 May 2018 13:07:28 +0000 (23:07 +1000)]
powerpc/64s: Wire up arch_trigger_cpumask_backtrace()

This allows eg. the RCU stall detector, or the soft/hardlockup
detectors to trigger a backtrace on all CPUs.

We implement this by sending a "safe" NMI, which will actually only
send an IPI. Unfortunately the generic code prints "NMI", so that's a
little confusing but we can probably live with it.

If one of the CPUs doesn't respond to the IPI, we then print some
info from its paca and do a backtrace based on its saved_r1.

Example output:

  INFO: rcu_sched detected stalls on CPUs/tasks:
   2-...0: (0 ticks this GP) idle=1be/1/4611686018427387904 softirq=1055/1055 fqs=25735
   (detected by 4, t=58847 jiffies, g=58, c=57, q=1258)
  Sending NMI from CPU 4 to CPUs 2:
  CPU 2 didn't respond to backtrace IPI, inspecting paca.
  irq_soft_mask: 0x01 in_mce: 0 in_nmi: 0 current: 3623 (bash)
  Back trace of paca->saved_r1 (0xc0000000e1c83ba0) (possibly stale):
  Call Trace:
  [c0000000e1c83ba0] [0000000000000014] 0x14 (unreliable)
  [c0000000e1c83bc0] [c000000000765798] lkdtm_do_action+0x48/0x80
  [c0000000e1c83bf0] [c000000000765a40] direct_entry+0x110/0x1b0
  [c0000000e1c83c90] [c00000000058e650] full_proxy_write+0x90/0xe0
  [c0000000e1c83ce0] [c0000000003aae3c] __vfs_write+0x6c/0x1f0
  [c0000000e1c83d80] [c0000000003ab214] vfs_write+0xd4/0x240
  [c0000000e1c83dd0] [c0000000003ab5cc] ksys_write+0x6c/0x110
  [c0000000e1c83e30] [c00000000000b860] system_call+0x58/0x6c

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
6 years ago powerpc/nmi: Add an API for sending "safe" NMIs
Michael Ellerman [Wed, 2 May 2018 13:07:27 +0000 (23:07 +1000)]
powerpc/nmi: Add an API for sending "safe" NMIs

Currently the options we have for sending NMIs are not necessarily
safe, that is they can potentially interrupt a CPU in a
non-recoverable region of code, meaning the kernel must then panic().

But we'd like to use smp_send_nmi_ipi() to do cross-CPU calls in
situations where we don't want to risk a panic(), because it doesn't
have the requirement that interrupts must be enabled like
smp_call_function().

So add an API for the caller to indicate that it wants to use the NMI
infrastructure, but doesn't want to do anything "unsafe".

Currently that is implemented by not actually calling cause_nmi_ipi(),
instead falling back to an IPI. In future we can pass the safe
parameter down to cause_nmi_ipi() and the individual backends can
potentially take it into account before deciding what to do.
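The API shape (illustrative signature; fn runs on the target CPU and
delay_us bounds the wait):

  int smp_send_safe_nmi_ipi(int cpu, void (*fn)(struct pt_regs *),
  			  u64 delay_us);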

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
6 years ago powerpc/64: Save stack pointer when we hard disable interrupts
Michael Ellerman [Wed, 2 May 2018 13:07:26 +0000 (23:07 +1000)]
powerpc/64: Save stack pointer when we hard disable interrupts

A CPU that gets stuck with interrupts hard disabled can be difficult
to debug, as on some platforms we have no way to interrupt the CPU to
find out what it's doing.

A stop-gap is to have the CPU save its stack pointer (r1) in its paca
when it hard disables interrupts. That way if we can't interrupt it,
we can at least trace the stack based on where it last disabled
interrupts.

In some cases that will be total junk, but the stack trace code
should handle that. In the simple case of a CPU that disables
interrupts and then gets stuck in a loop, the stack trace should be
informative.

We could clear the saved stack pointer when we enable interrupts, but
that loses information which could be useful if we have nothing else
to go on.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
6 years ago powerpc: Check address limit on user-mode return (TIF_FSCHECK)
Michael Ellerman [Mon, 14 May 2018 13:03:16 +0000 (23:03 +1000)]
powerpc: Check address limit on user-mode return (TIF_FSCHECK)

set_fs() sets the addr_limit, which is used in access_ok() to
determine if an address is a user or kernel address.

Some code paths use set_fs() to temporarily elevate the addr_limit so
that kernel code can read/write kernel memory as if it were user
memory. That is fine as long as the code can't ever return to
userspace with the addr_limit still elevated.

If that did happen, then userspace can read/write kernel memory as if
it were user memory, eg. just with write(2). In case it's not clear,
that is very bad. It has also happened in the past due to bugs.

Commit 5ea0727b163c ("x86/syscalls: Check address limit on user-mode
return") added a mechanism to check the addr_limit value before
returning to userspace. Any call to set_fs() sets a thread flag,
TIF_FSCHECK, and if we see that on the return to userspace we go out
of line to check that the addr_limit value is not elevated.
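A sketch of both halves of the mechanism (simplified; details differ
in the real patch):

  #define set_fs(val)						\
  do {								\
      current->thread.addr_limit = (val);			\
      /* re-check the limit on return to userspace */		\
      set_thread_flag(TIF_FSCHECK);				\
  } while (0)

  /* in the return-to-user path, e.g. do_notify_resume(): */
  if (thread_info_flags & _TIF_FSCHECK) {
      clear_thread_flag(TIF_FSCHECK);
      addr_limit_user_check();	/* WARN and kill if elevated */
  }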

For further info see the above commit, as well as:
  https://lwn.net/Articles/722267/
  https://bugs.chromium.org/p/project-zero/issues/detail?id=990

Verified to work on 64-bit Book3S using a POC that objdumps the system
call handler, and a modified lkdtm_CORRUPT_USER_DS() that doesn't kill
the caller.

Before:
  $ sudo ./test-tif-fscheck
  ...
  0000000000000000 <.data>:
         0:       e1 f7 8a 79     rldicl. r10,r12,30,63
         4:       80 03 82 40     bne     0x384
         8:       00 40 8a 71     andi.   r10,r12,16384
         c:       78 0b 2a 7c     mr      r10,r1
        10:       10 fd 21 38     addi    r1,r1,-752
        14:       08 00 c2 41     beq-    0x1c
        18:       58 09 2d e8     ld      r1,2392(r13)
        1c:       00 00 41 f9     std     r10,0(r1)
        20:       70 01 61 f9     std     r11,368(r1)
        24:       78 01 81 f9     std     r12,376(r1)
        28:       70 00 01 f8     std     r0,112(r1)
        2c:       78 00 41 f9     std     r10,120(r1)
        30:       20 00 82 41     beq     0x50
        34:       a6 42 4c 7d     mftb    r10

After:

  $ sudo ./test-tif-fscheck
  Killed

And in dmesg:
  Invalid address limit on user-mode return
  WARNING: CPU: 1 PID: 3689 at ../include/linux/syscalls.h:260 do_notify_resume+0x140/0x170
  ...
  NIP [c00000000001ee50] do_notify_resume+0x140/0x170
  LR [c00000000001ee4c] do_notify_resume+0x13c/0x170
  Call Trace:
    do_notify_resume+0x13c/0x170 (unreliable)
    ret_from_except_lite+0x70/0x74

Performance overhead is essentially zero in the usual case, because
the bit is checked as part of the existing _TIF_USER_WORK_MASK check.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc: Rename thread_struct.fs to addr_limit
Michael Ellerman [Mon, 14 May 2018 13:03:15 +0000 (23:03 +1000)]
powerpc: Rename thread_struct.fs to addr_limit

It's called 'fs' for historical reasons, it's named after the x86 'FS'
register. But we don't have to use that name for the member of
thread_struct, and in fact arch/x86 doesn't even call it 'fs' anymore.

So rename it to 'addr_limit', which better reflects what it's used
for, and is also the name used on other arches.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/ptrace: Use copy_{from, to}_user() rather than open-coding
Al Viro [Tue, 29 May 2018 12:57:38 +0000 (22:57 +1000)]
powerpc/ptrace: Use copy_{from, to}_user() rather than open-coding

In PPC_PTRACE_GETHWDBGINFO and PPC_PTRACE_SETHWDEBUG we do an
access_ok() check and then __copy_{from,to}_user().

Instead we should just use copy_{from,to}_user() which does all that
for us and is less error prone.
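The shape of the change (variable names as in ptrace.c, but the
snippet is illustrative):

  /* before: */
  if (!access_ok(VERIFY_WRITE, datavp, sizeof(dbginfo)))
      return -EFAULT;
  __copy_to_user(datavp, &dbginfo, sizeof(dbginfo));

  /* after: access check and copy in one call */
  if (copy_to_user(datavp, &dbginfo, sizeof(dbginfo)))
      return -EFAULT;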

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Samuel Mendoza-Jonas <sam@mendozajonas.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/eeh: Refactor report functions
Sam Bobroff [Fri, 25 May 2018 03:11:40 +0000 (13:11 +1000)]
powerpc/eeh: Refactor report functions

The EEH report functions now share a fair bit of code around the start
and end of each function.

So factor out as much as possible, and move the traversal into a
custom function. This also allows accurate debug to be generated more
easily.

Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
[mpe: Format with clang-format]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/eeh: Cleaner handling of EEH_DEV_NO_HANDLER
Sam Bobroff [Fri, 25 May 2018 03:11:39 +0000 (13:11 +1000)]
powerpc/eeh: Cleaner handling of EEH_DEV_NO_HANDLER

If a device without a driver is recovered via EEH, the flag
EEH_DEV_NO_HANDLER is incorrectly left set on the device after
recovery, because the test in eeh_report_resume() for the existence
of a bound driver is done before the flag is cleared. If a driver is
later bound, and EEH is experienced again, some of the driver's EEH
handlers are not called.

To correct this, clear the flag unconditionally after EEH processing
is complete.

Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/eeh: Introduce eeh_set_irq_state()
Sam Bobroff [Fri, 25 May 2018 03:11:38 +0000 (13:11 +1000)]
powerpc/eeh: Introduce eeh_set_irq_state()

To ease future refactoring, extract calls to eeh_enable_irq() and
eeh_disable_irq() from the various report functions. This makes the
report functions' initial sequences more similar, as well as making
the IRQ changes visible when reading eeh_handle_normal_event().

Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/eeh: Introduce eeh_set_channel_state()
Sam Bobroff [Fri, 25 May 2018 03:11:37 +0000 (13:11 +1000)]
powerpc/eeh: Introduce eeh_set_channel_state()

To ease future refactoring, extract setting of the channel state
from the report functions out into their own functions. This increases
the amount of code that is identical across all of the report
functions.

Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/eeh: Introduce eeh_edev_actionable()
Sam Bobroff [Fri, 25 May 2018 03:11:36 +0000 (13:11 +1000)]
powerpc/eeh: Introduce eeh_edev_actionable()

The same test is done in every EEH report function, so factor it out.

Since eeh_dev_removed() needs to be moved higher up in the file,
simplify it a little while we're at it.

Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/eeh: Introduce eeh_for_each_pe()
Sam Bobroff [Fri, 25 May 2018 03:11:35 +0000 (13:11 +1000)]
powerpc/eeh: Introduce eeh_for_each_pe()

Add a for_each-style macro for iterating through PEs without the
boilerplate required by a traversal function. eeh_pe_next() is now
exported, as it is now used directly in place.
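The macro follows the usual for_each style (sketch):

  #define eeh_for_each_pe(root, pe)	\
  	for ((pe) = (root); (pe); (pe) = eeh_pe_next((pe), (root)))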

Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/eeh: Clean up pci_ers_result handling
Sam Bobroff [Fri, 25 May 2018 03:11:34 +0000 (13:11 +1000)]
powerpc/eeh: Clean up pci_ers_result handling

As EEH event handling progresses, a cumulative result of type
pci_ers_result is built up by (some of) the eeh_report_*() functions
using either:
  if (rc == PCI_ERS_RESULT_NEED_RESET) *res = rc;
  if (*res == PCI_ERS_RESULT_NONE) *res = rc;

or:

  if ((*res == PCI_ERS_RESULT_NONE) ||
      (*res == PCI_ERS_RESULT_RECOVERED)) *res = rc;
  if (*res == PCI_ERS_RESULT_DISCONNECT &&
      rc == PCI_ERS_RESULT_NEED_RESET) *res = rc;

(Where *res is the accumulator.)

However, the intent is not immediately clear and the result in some
situations is order dependent.

Address this by assigning a priority to each result value, and always
merging to the highest priority. This renders the intent clear, and
provides a stable value for all orderings.
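A sketch of the merge helper, assuming an eeh_result_priority()
function that maps each pci_ers_result to its priority (the exact
ordering used by the patch is not reproduced here):

  static enum pci_ers_result pci_ers_merge_result(enum pci_ers_result old,
  						enum pci_ers_result new)
  {
  	/* always keep the highest-priority result seen so far */
  	if (eeh_result_priority(new) > eeh_result_priority(old))
  		old = new;
  	return old;
  }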

Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
[mpe: Minor formatting (clang-format)]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/eeh: Add message when PE processing at parent
Sam Bobroff [Fri, 25 May 2018 03:11:33 +0000 (13:11 +1000)]
powerpc/eeh: Add message when PE processing at parent

To aid debugging, add a message to show when EEH processing for a PE
will be done at the device's parent, rather than directly at the
device.

Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/eeh: Strengthen types of eeh traversal functions
Sam Bobroff [Fri, 25 May 2018 03:11:32 +0000 (13:11 +1000)]
powerpc/eeh: Strengthen types of eeh traversal functions

The traversal functions eeh_pe_traverse() and eeh_pe_dev_traverse()
both provide their first argument as void * but every single user casts
it to the expected type.

Change the type of the first parameter from void * to the appropriate
type, and clean up all uses.

Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/eeh: Remove unused eeh_pcid_name()
Sam Bobroff [Fri, 25 May 2018 03:11:31 +0000 (13:11 +1000)]
powerpc/eeh: Remove unused eeh_pcid_name()

Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/eeh: Fix use-after-release of EEH driver
Sam Bobroff [Fri, 25 May 2018 03:11:30 +0000 (13:11 +1000)]
powerpc/eeh: Fix use-after-release of EEH driver

Correct two cases where eeh_pcid_get() is used to reference the driver's
module but the reference is dropped before the driver pointer is used.

In eeh_rmv_device() also refactor a little, so that only two calls to
eeh_pcid_put() are needed rather than three, and the reference isn't
taken at all when it isn't needed.

Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/eeh: Add final message for successful recovery
Sam Bobroff [Fri, 25 May 2018 03:11:28 +0000 (13:11 +1000)]
powerpc/eeh: Add final message for successful recovery

Add a single log line at the end of successful EEH recovery, so that
it's clear that event processing has finished.

Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/perf: Unregister thread-imc if core-imc not supported
Anju T Sudhakar [Tue, 22 May 2018 09:12:37 +0000 (14:42 +0530)]
powerpc/perf: Unregister thread-imc if core-imc not supported

Since thread-imc internally uses the core-imc hardware infrastructure
and depends on it, having thread-imc in the kernel in the absence of
core-imc is of no use. This patch disables thread-imc if core-imc is
not registered.

Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
Reviewed-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/perf: Return appropriate value for unknown domain
Anju T Sudhakar [Tue, 22 May 2018 09:12:36 +0000 (14:42 +0530)]
powerpc/perf: Return appropriate value for unknown domain

Return proper error code for unknown domain during IMC initialization.

Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
Reviewed-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/perf: Replace the direct return with goto statement
Anju T Sudhakar [Tue, 22 May 2018 09:12:35 +0000 (14:42 +0530)]
powerpc/perf: Replace the direct return with goto statement

Replace the direct return statement in imc_mem_init() with goto, to adhere
to the kernel coding style.

Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
Reviewed-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/perf: Rearrange memory freeing in imc init
Anju T Sudhakar [Tue, 22 May 2018 09:12:34 +0000 (14:42 +0530)]
powerpc/perf: Rearrange memory freeing in imc init

When any of the IMC (In-Memory Collection counter) devices fail to
initialize, imc_common_mem_free() frees a set of memory regions. In
doing so, the pmu_ptr pointer is also freed. But pmu_ptr is then used
in a subsequent function (imc_common_cpuhp_mem_free()), which is
wrong. This patch reorders the code to avoid such an access.

Also free the memory which is dynamically allocated during imc
initialization, wherever required.

Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
Reviewed-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/xics: Add missing of_node_put() in error path
YueHaibing [Wed, 25 Apr 2018 11:27:07 +0000 (19:27 +0800)]
powerpc/xics: Add missing of_node_put() in error path

The device node obtained with of_find_compatible_node() should be
released by calling of_node_put().  But it was not released when
of_get_property() failed.
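The general shape of this class of fix (function, compatible string
and property names hypothetical; only the of_node_put() placement is
the point):

  static int __init example_init(void)
  {
  	struct device_node *np;
  	const __be32 *prop;

  	np = of_find_compatible_node(NULL, NULL, "ibm,ppc-xics");
  	if (!np)
  		return -ENODEV;

  	prop = of_get_property(np, "reg", NULL);
  	of_node_put(np);	/* drop the ref on success and on failure */
  	if (!prop)
  		return -ENODEV;

  	return 0;
  }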

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
[mpe: Invert the sense of the if so we only need one return path]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc: cpm_gpio: Remove owner assignment from platform_driver
Fabio Estevam [Sat, 5 May 2018 03:01:25 +0000 (00:01 -0300)]
powerpc: cpm_gpio: Remove owner assignment from platform_driver

Structure platform_driver does not need to set the owner field, as this
will be populated by the driver core.

Generated by scripts/coccinelle/api/platform_no_drv_owner.cocci.

Signed-off-by: Fabio Estevam <fabio.estevam@nxp.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/xive: Remove (almost) unused macros
Russell Currey [Fri, 11 May 2018 08:03:13 +0000 (18:03 +1000)]
powerpc/xive: Remove (almost) unused macros

The GETFIELD and SETFIELD macros in xive-regs.h aren't used except for
a single instance of GETFIELD, so replace that and remove them.

These macros are also defined in vas.h, so either those should be
eventually replaced or the macros moved into bitops.h.

Signed-off-by: Russell Currey <ruscur@russell.cc>
[mpe: Rewrite the assignment to 'he' to avoid ffs() etc.]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago hvc_opal: don't set tb_ticks_per_usec in udbg_init_opal_common()
Stewart Smith [Thu, 29 Mar 2018 06:02:46 +0000 (17:02 +1100)]
hvc_opal: don't set tb_ticks_per_usec in udbg_init_opal_common()

time_init() will set up tb_ticks_per_usec based on reality.
time_init() is called *after* udbg_init_opal_common() during boot.

from arch/powerpc/kernel/time.c:
  unsigned long tb_ticks_per_usec = 100; /* sane default */

Currently, all powernv systems have a timebase frequency of 512MHz
(512000000/1000000 == 0x200) - although there's nothing written
down anywhere that I can find saying that we couldn't make that
different based on the requirements in the ISA.

So, we've been (accidentally) thwacking the (currently) correct
(for powernv at least) value for tb_ticks_per_usec earlier than
we otherwise would have.

The "sane default" seems to be adequate for our purposes between
udbg_init_opal_common() and time_init() being called, and if it isn't,
then we should probably be setting it somewhere that isn't hvc_opal.c!

Signed-off-by: Stewart Smith <stewart@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc: remove unused to_tm() helper
Arnd Bergmann [Mon, 23 Apr 2018 08:36:42 +0000 (10:36 +0200)]
powerpc: remove unused to_tm() helper

to_tm() is now completely unused, the only reference being in the
_dump_time() helper that is also unused. This removes both, leaving
the rest of the powerpc RTC code y2038 safe as far as the hardware
supports.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc: use time64_t in update_persistent_clock
Arnd Bergmann [Mon, 23 Apr 2018 08:36:41 +0000 (10:36 +0200)]
powerpc: use time64_t in update_persistent_clock

update_persistent_clock() is deprecated because it suffers from overflow
in 2038 on 32-bit architectures. This changes powerpc to use the
update_persistent_clock64() replacement, and to pass down 64-bit
timestamps consistently.

This is now simpler, as we no longer have to worry about the offset
numbers in tm_year and tm_mon that are different between the Linux
conventions and RTAS.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc: use time64_t in read_persistent_clock
Arnd Bergmann [Mon, 23 Apr 2018 08:36:40 +0000 (10:36 +0200)]
powerpc: use time64_t in read_persistent_clock

Looking through the remaining users of the deprecated mktime()
function, I found the powerpc rtc handlers, which use it in
place of rtc_tm_to_time64().

To clean this up, I'm changing over the read_persistent_clock()
function to the read_persistent_clock64() variant, and change
all the platform specific handlers along with it.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc: rtas: clean up time handling
Arnd Bergmann [Mon, 23 Apr 2018 08:36:39 +0000 (10:36 +0200)]
powerpc: rtas: clean up time handling

The to_tm() helper function operates on a signed integer for the time,
so it will suffer from overflow in 2038, even on 64-bit kernels.

Rather than fix that function, this replaces its use in the rtas
procfs implementation with the standard rtc_time64_to_tm() helper
that is very similar but is not affected by the overflow.

In order to actually support long times, the parser function gets
changed to 64-bit user input and output as well. Note that the tm_mon
and tm_year representation is slightly different, so we have to manually
add an offset here.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc: always enable RTC_LIB
Arnd Bergmann [Mon, 23 Apr 2018 08:36:38 +0000 (10:36 +0200)]
powerpc: always enable RTC_LIB

In order to use the rtc_tm_to_time64() and rtc_time64_to_tm()
helper functions in later patches, we have to ensure that
CONFIG_RTC_LIB is always built-in.

Note that this symbol only controls a couple of helper functions,
not the actual RTC subsystem, which remains optional and is
enabled with CONFIG_RTC_CLASS.
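In Kconfig terms this is a select on the arch symbol (sketch):

  config PPC
  	select RTC_LIB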

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/pasemi: Set PCI_SCAN_ALL_PCI_DEVS
Olof Johansson [Wed, 6 Dec 2017 11:03:52 +0000 (12:03 +0100)]
powerpc/pasemi: Set PCI_SCAN_ALL_PCI_DEVS

Needed on Amiga X1000 with SB600.

Reported-by: Christian Zigotzky <chzigotzky@xenosoft.de>
Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/mm/hash: hard disable irq in the SLB insert path
Aneesh Kumar K.V [Fri, 1 Jun 2018 08:24:02 +0000 (13:54 +0530)]
powerpc/mm/hash: hard disable irq in the SLB insert path

When inserting SLB entries for an EA above 512TB, we need to hard
disable irqs. This makes sure we don't take a PMU interrupt that can
possibly touch a user space address via a stack dump while the SLB
insert is in progress.

Also add a comment explaining why we don't need context synchronizing isync
with slbmte.

Fixes: f384796c4 ("powerpc/mm: Add support for handling > 512TB address in SLB miss")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/mm/hugetlb: Update hugetlb related locks
Aneesh Kumar K.V [Fri, 1 Jun 2018 08:24:24 +0000 (13:54 +0530)]
powerpc/mm/hugetlb: Update hugetlb related locks

With split pmd page table lock enabled, we don't use
mm->page_table_lock when updating pmd entries. This patch updates the
hugetlb path to use the right lock when inserting huge page directory
entries into the page table.

For example, if we are using hugepd and inserting a hugepd entry at
the pmd level, we use pmd_lockptr, which depending on config can be a
split pmd lock.

For updating the huge page directory entries themselves we use
mm->page_table_lock. We have a helper huge_pte_lockptr() for that.

Fixes: 675d99529 ("powerpc/book3s64: Enable split pmd ptlock")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/mm/hash: Add missing isync prior to kernel stack SLB switch
Aneesh Kumar K.V [Wed, 30 May 2018 13:18:04 +0000 (18:48 +0530)]
powerpc/mm/hash: Add missing isync prior to kernel stack SLB switch

Currently we do not have an isync, or any other context synchronizing
instruction prior to the slbie/slbmte in _switch() that updates the
SLB entry for the kernel stack.

However that is not correct as outlined in the ISA.

From Power ISA Version 3.0B, Book III, Chapter 11, page 1133:

  "Changing the contents of ... the contents of SLB entries ... can
   have the side effect of altering the context in which data
   addresses and instruction addresses are interpreted, and in which
   instructions are executed and data accesses are performed.
   ...
   These side effects need not occur in program order, and therefore
   may require explicit synchronization by software.
   ...
   The synchronizing instruction before the context-altering
   instruction ensures that all instructions up to and including that
   synchronizing instruction are fetched and executed in the context
   that existed before the alteration."

And page 1136:

  "For data accesses, the context synchronizing instruction before the
   slbie, slbieg, slbia, slbmte, tlbie, or tlbiel instruction ensures
   that all preceding instructions that access data storage have
   completed to a point at which they have reported all exceptions
   they will cause."

We're not aware of any bugs caused by this, but it should be fixed
regardless.

Add the missing isync when updating kernel stack SLB entry.

Cc: stable@vger.kernel.org
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
[mpe: Flesh out change log with more ISA text & explanation]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/64s: Fix compiler store ordering to SLB shadow area
Nicholas Piggin [Wed, 30 May 2018 10:31:22 +0000 (20:31 +1000)]
powerpc/64s: Fix compiler store ordering to SLB shadow area

The stores to update the SLB shadow area must be made as they appear
in the C code, so that the hypervisor does not see an entry with
mismatched vsid and esid. Use WRITE_ONCE for this.

GCC has been observed to elide the first store to esid in the update,
which means that if the hypervisor interrupts the guest after storing
to vsid, it could see an entry with old esid and new vsid, which may
possibly result in memory corruption.
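A sketch of the update sequence with WRITE_ONCE (field and helper
names as in the shadow-SLB code, but illustrative):

  /* clear esid first so the entry is invalid while vsid changes */
  WRITE_ONCE(p->save_area[index].esid, 0);
  WRITE_ONCE(p->save_area[index].vsid,
  	   cpu_to_be64(mk_vsid_data(ea, ssize, flags)));
  WRITE_ONCE(p->save_area[index].esid,
  	   cpu_to_be64(mk_esid_data(ea, ssize, index)));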

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/64s/radix: flush remote CPUs out of single-threaded mm_cpumask
Nicholas Piggin [Fri, 1 Jun 2018 10:01:21 +0000 (20:01 +1000)]
powerpc/64s/radix: flush remote CPUs out of single-threaded mm_cpumask

When a single-threaded process has a non-local mm_cpumask, try to use
that point to flush the TLBs out of other CPUs in the cpumask.

An IPI is used for clearing remote CPUs for a few reasons:
- An IPI can end lazy TLB use of the mm, which is required to prevent
  TLB entries being created on the remote CPU. The alternative is to
  drop lazy TLB switching completely, which costs 7.5% in a context
  switch ping-pong test between a process and a kernel idle thread.
- An IPI can have remote CPUs flush the entire PID, but the local CPU
  can flush a specific VA. tlbie would require over-flushing of the
  local CPU (where the process is running).
- A single threaded process that is migrated to a different CPU is
  likely to have a relatively small mm_cpumask, so IPI is reasonable.

No other thread can concurrently switch to this mm, because it must
have been given a reference to mm_users by the current thread before it
can use_mm. mm_users can be asynchronously incremented (by
mm_activate or mmget_not_zero), but those users must use remote mm
access and can't use_mm or access user address space. Existing code
makes the this assumption already, for example sparc64 has reset
mm_cpumask using this condition since the start of history, see
arch/sparc/kernel/smp_64.c.

This reduces tlbies for a kernel compile workload from 0.90M to 0.12M.
tlbiels are increased significantly, due to the PID flushing used when
cleaning up remote CPUs and to the increased local flushes (a PID
flush takes 128 tlbiels vs 1 tlbie).

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/64s/radix: optimise pte_update
Nicholas Piggin [Fri, 1 Jun 2018 10:01:20 +0000 (20:01 +1000)]
powerpc/64s/radix: optimise pte_update

Implementing pte_update with pte_xchg (which uses cmpxchg) is
inefficient. A single larx/stcx. works fine, no need for the less
efficient cmpxchg sequence.

Then remove the memory barriers from the operation. There is a
requirement for TLB flushing to load mm_cpumask after the store
that reduces pte permissions, which is moved into the TLB flush
code.
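A sketch of a single larx/stcx. update loop (close to the shape of
the radix __radix_pte_update() helper, but illustrative):

  __asm__ __volatile__(
  "1:	ldarx	%0,0,%3		# pte_update\n"
  "	andc	%1,%0,%5\n"
  "	or	%1,%1,%4\n"
  "	stdcx.	%1,0,%3\n"
  "	bne-	1b"
  : "=&r" (old_pte), "=&r" (new_pte), "+m" (*ptep)
  : "r" (ptep), "r" (set), "r" (clr)
  : "cc");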

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/64s/radix: avoid ptesync after set_pte and ptep_set_access_flags
Nicholas Piggin [Fri, 1 Jun 2018 10:01:19 +0000 (20:01 +1000)]
powerpc/64s/radix: avoid ptesync after set_pte and ptep_set_access_flags

The ISA suggests ptesync after setting a pte, to prevent a table walk
initiated by a subsequent access from missing that store and causing a
spurious fault. This is an architectural allowance that allows an
implementation's page table walker to be incoherent with the store
queue.

However there is no correctness problem in taking a spurious fault in
userspace -- the kernel copes with these at any time, so the updated
pte will be found eventually. Spurious kernel faults on vmap memory
must be avoided, so a ptesync is put into flush_cache_vmap.

On POWER9 so far I have not found a measurable window where this can
result in more minor faults, so as an optimisation, remove the costly
ptesync from pte updates. If an implementation benefits from ptesync,
it would be better to add it back in update_mmu_cache, so it's not
done for things like fork(2).

fork --fork --exec benchmark improved 5.2% (12400->13100).

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/64s/radix: prefetch user address in update_mmu_cache
Nicholas Piggin [Fri, 1 Jun 2018 10:01:18 +0000 (20:01 +1000)]
powerpc/64s/radix: prefetch user address in update_mmu_cache

Prefetch the faulting address in update_mmu_cache to give the page
table walker perhaps 100 cycles head start as locks are dropped and
the interrupt completed.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/64s/radix: make ptep_get_and_clear_full non-atomic for the full case
Nicholas Piggin [Fri, 1 Jun 2018 10:01:17 +0000 (20:01 +1000)]
powerpc/64s/radix: make ptep_get_and_clear_full non-atomic for the full case

This matches other architectures: when we know there will be no
further accesses to the address (e.g., for teardown), page table
entries can be cleared non-atomically.

The comments about NMMU are bogus: all MMU notifiers (including NMMU)
are released at this point, with their TLBs flushed. An NMMU access at
this point would be a bug.
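A sketch of the "full" fast path (illustrative):

  if (full) {
  	/* no concurrent access possible: plain read and store */
  	old_pte = pte_val(*ptep);
  	*ptep = __pte(0);
  } else {
  	old_pte = radix__pte_update(mm, addr, ptep, ~0ul, 0, 0);
  }
  return __pte(old_pte);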

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/64s/radix: do not flush TLB on spurious fault
Nicholas Piggin [Fri, 1 Jun 2018 10:01:16 +0000 (20:01 +1000)]
powerpc/64s/radix: do not flush TLB on spurious fault

In the case of a spurious fault (which can happen due to a race with
another thread that changes the page table), the default Linux mm code
calls flush_tlb_page for that address. This is not required because
the pte will be re-fetched. Hash does not wire this up to a hardware
TLB flush for this reason. This patch avoids the flush for radix.

From Power ISA v3.0B, p.1090:

    Setting a Reference or Change Bit or Upgrading Access Authority
    (PTE Subject to Atomic Hardware Updates)

    If the only change being made to a valid PTE that is subject to
    atomic hardware updates is to set the Reference or Change bit to 1
    or to add access authorities, a simpler sequence suffices because
    the translation hardware will refetch the PTE if an access is
    attempted for which the only problems were reference and/or change
    bits needing to be set or insufficient access authority.

The nest MMU on POWER9 does not re-fetch the PTE after such an access
attempt before faulting, so address spaces with a coprocessor
attached will continue to flush in these cases.
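The radix override can be sketched as follows (illustrative; it keeps
the flush only for address spaces with a coprocessor attached):

  #define flush_tlb_fix_spurious_fault(vma, address)		\
  do {								\
  	if (atomic_read(&(vma)->vm_mm->context.copros) > 0)	\
  		flush_tlb_page((vma), (address));		\
  } while (0)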

This reduces tlbies for a kernel compile workload from 0.95M to 0.90M.

fork --fork --exec benchmark improved 0.5% (12300->12400).

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/64s/radix: do not flush TLB when relaxing access
Nicholas Piggin [Fri, 1 Jun 2018 10:01:15 +0000 (20:01 +1000)]
powerpc/64s/radix: do not flush TLB when relaxing access

Radix flushes the TLB when updating ptes to increase permissiveness
of protection (increase access authority). Book3S does not require
TLB flushing in this case, and it is not done on hash. This patch
avoids the flush for radix.

From Power ISA v3.0B, p.1090:

    Setting a Reference or Change Bit or Upgrading Access Authority
    (PTE Subject to Atomic Hardware Updates)

    If the only change being made to a valid PTE that is subject to
    atomic hardware updates is to set the Reference or Change bit to 1
    or to add access authorities, a simpler sequence suffices because
    the translation hardware will refetch the PTE if an access is
    attempted for which the only problems were reference and/or change
    bits needing to be set or insufficient access authority.

The nest MMU on POWER9 does not re-fetch the PTE after such an access
attempt before faulting, so address spaces with a coprocessor
attached will continue to flush in these cases.

This reduces tlbies for a kernel compile workload from 1.28M to 0.95M,
and tlbiels from 20.17M to 19.68M.

fork --fork --exec benchmark improved 2.77% (12000->12300).

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/mm/radix: Change pte relax sequence to handle nest MMU hang
Aneesh Kumar K.V [Tue, 29 May 2018 14:28:41 +0000 (19:58 +0530)]
powerpc/mm/radix: Change pte relax sequence to handle nest MMU hang

When relaxing access (read -> read_write update), pte needs to be marked invalid
to handle a nest MMU bug. We also need to do a tlb flush after the pte is
marked invalid before updating the pte with new access bits.

We also move the tlb flush to the platform specific
__ptep_set_access_flags. This will help us get rid of unnecessary tlb
flushes on BOOK3S 64 later. We don't do that in this patch. This also
helps in avoiding multiple tlbies with a coprocessor attached.
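A sketch of the relax sequence with a coprocessor attached (helper
names illustrative):

  if (atomic_read(&mm->context.copros) > 0) {
  	/* mark invalid, flush, then install the new access bits */
  	__radix_pte_update(ptep, _PAGE_PRESENT, _PAGE_INVALID);
  	radix__flush_tlb_page_psize(mm, address, psize);
  	__radix_pte_update(ptep, _PAGE_INVALID, set);
  } else {
  	__radix_pte_update(ptep, 0, set);
  }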

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/mm: Change function prototype
Aneesh Kumar K.V [Tue, 29 May 2018 14:28:40 +0000 (19:58 +0530)]
powerpc/mm: Change function prototype

In a later patch, we use the vma and psize to do the tlb flush. Do
the prototype update in a separate patch to make the review easy.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/mm/radix: Move function from radix.h to pgtable-radix.c
Aneesh Kumar K.V [Tue, 29 May 2018 14:28:39 +0000 (19:58 +0530)]
powerpc/mm/radix: Move function from radix.h to pgtable-radix.c

In a later patch we will update them in a way that requires them to
be moved to pgtable-radix.c. Keeping the function in radix.h results
in a compile warning as below.

./arch/powerpc/include/asm/book3s/64/radix.h: In function 'radix__ptep_set_access_flags':
./arch/powerpc/include/asm/book3s/64/radix.h:196:28: error: dereferencing pointer to incomplete type 'struct vm_area_struct'
  struct mm_struct *mm = vma->vm_mm;
                            ^~
./arch/powerpc/include/asm/book3s/64/radix.h:204:6: error: implicit declaration of function 'atomic_read'; did you mean '__atomic_load'? [-Werror=implicit-function-declaration]
      atomic_read(&mm->context.copros) > 0) {
      ^~~~~~~~~~~
      __atomic_load
./arch/powerpc/include/asm/book3s/64/radix.h:204:21: error: dereferencing pointer to incomplete type 'struct mm_struct'
      atomic_read(&mm->context.copros) > 0) {

Instead of fixing the header dependencies, we move the function to
pgtable-radix.c. Also, the function is now too large to be a static
inline. Doing the move in a separate patch helps the review.

No functional change in this patch. Only code movement.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/mm/hugetlb: Update huge_ptep_set_access_flags to call __ptep_set_access_flags...
Aneesh Kumar K.V [Tue, 29 May 2018 14:28:38 +0000 (19:58 +0530)]
powerpc/mm/hugetlb: Update huge_ptep_set_access_flags to call __ptep_set_access_flags directly

In a later patch, we want to update __ptep_set_access_flags to take a
page size argument. This makes ptep_set_access_flags only work with
mmu_virtual_psize. To simplify the code, make
huge_ptep_set_access_flags directly call __ptep_set_access_flags so
that we can compute the hugetlb page size in the hugetlb function.

Now that ptep_set_access_flags won't be called for hugetlb remove
the is_vm_hugetlb_page() check and add the assert of pte lock
unconditionally.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago ocxl: Document new OCXL IOCTLs
Alastair D'Silva [Fri, 11 May 2018 06:13:03 +0000 (16:13 +1000)]
ocxl: Document new OCXL IOCTLs

Signed-off-by: Alastair D'Silva <alastair@d-silva.org>
Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Acked-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago ocxl: Add an IOCTL so userspace knows what OCXL features are available
Alastair D'Silva [Fri, 11 May 2018 06:13:02 +0000 (16:13 +1000)]
ocxl: Add an IOCTL so userspace knows what OCXL features are available

In order for a userspace AFU driver to call the POWER9 specific
OCXL_IOCTL_ENABLE_P9_WAIT, it needs to verify that it can actually
make that call.

Signed-off-by: Alastair D'Silva <alastair@d-silva.org>
Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Acked-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago ocxl: Expose the thread_id needed for wait on POWER9
Alastair D'Silva [Fri, 11 May 2018 06:13:01 +0000 (16:13 +1000)]
ocxl: Expose the thread_id needed for wait on POWER9

In order to successfully issue as_notify, an AFU needs to know the TID
to notify, which in turn means that this information should be
available in userspace so it can be communicated to the AFU.

Signed-off-by: Alastair D'Silva <alastair@d-silva.org>
Acked-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago ocxl: Rename pnv_ocxl_spa_remove_pe to clarify its action
Alastair D'Silva [Fri, 11 May 2018 06:13:00 +0000 (16:13 +1000)]
ocxl: Rename pnv_ocxl_spa_remove_pe to clarify its action

The function removes the process element from the NPU cache.

Signed-off-by: Alastair D'Silva <alastair@d-silva.org>
Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Acked-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc: use task_pid_nr() for TID allocation
Alastair D'Silva [Fri, 11 May 2018 06:12:59 +0000 (16:12 +1000)]
powerpc: use task_pid_nr() for TID allocation

The current implementation of TID allocation, using a global IDR, may
result in an errant process starving the system of available TIDs.
Instead, use task_pid_nr(), as mentioned by the original author. The
scenario described that prevented its use is not applicable, as
set_thread_tidr can only be called after the task struct has been
populated.

In the unlikely event that 2 threads share the TID and are waiting,
all potential outcomes have been determined safe.
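With that, the allocation itself reduces to something like the
following (a sketch; the cast and SPR write mirror the described
approach but are not guaranteed to match the exact diff):

  /* derive the TIDR from the task's pid instead of a global IDR */
  t->thread.tidr = (u16)task_pid_nr(t);
  mtspr(SPRN_TIDR, t->thread.tidr);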

Signed-off-by: Alastair D'Silva <alastair@d-silva.org>
Reviewed-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc: Use TIDR CPU feature to control TIDR allocation
Alastair D'Silva [Fri, 11 May 2018 06:12:58 +0000 (16:12 +1000)]
powerpc: Use TIDR CPU feature to control TIDR allocation

Switch the use of TIDR to be based on its CPU feature, rather than
assuming it is available based on architecture.

Signed-off-by: Alastair D'Silva <alastair@d-silva.org>
Reviewed-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc: Add TIDR CPU feature for POWER9
Alastair D'Silva [Fri, 11 May 2018 06:12:57 +0000 (16:12 +1000)]
powerpc: Add TIDR CPU feature for POWER9

This patch adds a CPU feature bit to show whether the CPU has
the TIDR register available, enabling as_notify/wait in userspace.

Signed-off-by: Alastair D'Silva <alastair@d-silva.org>
Reviewed-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/powernv: process all OPAL event interrupts with kopald
Nicholas Piggin [Thu, 10 May 2018 17:20:05 +0000 (03:20 +1000)]
powerpc/powernv: process all OPAL event interrupts with kopald

Using irq_work for processing OPAL event interrupts is not necessary.
irq_work is typically used to schedule work from NMI context, a
softirq may be more appropriate. However OPAL events are not
particularly performance or latency critical, so they can all be
invoked by kopald.

This patch removes the irq_work queueing, and instead wakes up
kopald when there is an event to be processed. kopald processes
interrupts individually, enabling irqs and calling cond_resched
between each one to minimise latencies.

Event handlers themselves should still use threaded handlers,
workqueues, etc. as necessary to avoid high interrupts-off latencies
within any single interrupt.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/powernv: call OPAL_QUIESCE before OPAL_SIGNAL_SYSTEM_RESET
Nicholas Piggin [Thu, 10 May 2018 12:21:48 +0000 (22:21 +1000)]
powerpc/powernv: call OPAL_QUIESCE before OPAL_SIGNAL_SYSTEM_RESET

Although it is often possible to recover a CPU that was interrupted
from OPAL with a system reset NMI, it's undesirable to interrupt them
for a few reasons. Firstly because dump/debug code itself needs to
call firmware, so it could hang on a lock or possibly corrupt a
per-cpu data structure if it or another CPU was interrupted from
OPAL. Secondly, the kexec crash dump code will not return from
interrupt to unwind the OPAL call.

Call OPAL_QUIESCE with QUIESCE_HOLD before sending an NMI IPI to
another CPU, and wait for it to leave firmware (or time out) to
avoid this problem in normal conditions. Firmware bugs may still
result in a timeout and interrupting OPAL, but that is the best
option (it stops the CPU, and possibly allows firmware to be
debugged).

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/64: change softe to irqmask in show_regs and xmon
Nicholas Piggin [Thu, 10 May 2018 01:04:24 +0000 (11:04 +1000)]
powerpc/64: change softe to irqmask in show_regs and xmon

When the soft enabled flag was changed to a soft disable mask, xmon
and register dump code was not updated to reflect that, which is
confusing ('SOFTE: 1' previously meant interrupts were soft enabled,
currently it means the opposite, the general interrupt type has been
disabled).

Fix this by using the name irqmask, and printing it in hex.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Acked-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/pmu/fsl: fix is_nmi test for irq mask change
Nicholas Piggin [Thu, 10 May 2018 01:04:23 +0000 (11:04 +1000)]
powerpc/pmu/fsl: fix is_nmi test for irq mask change

When the soft enabled flag was changed to an irq disabled mask, this
test missed being converted (although the equivalent book3s test was
converted).

The PMU drivers consider it an NMI when they take a PMI while general
interrupts are disabled. This change restores that behaviour.

Fixes: 01417c6cc7 ("powerpc/64: Change soft_enabled from flag to bitmask")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc/time: account broadcast timer event interrupts separately
Nicholas Piggin [Fri, 4 May 2018 17:19:35 +0000 (03:19 +1000)]
powerpc/time: account broadcast timer event interrupts separately

These are not local timer interrupts but IPIs. It's good to be able
to see how timer offloading is behaving, so split these out into
their own category.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years ago powerpc: move a stray NMI IPI case under NMI_IPI ifdef
Nicholas Piggin [Fri, 4 May 2018 17:19:34 +0000 (03:19 +1000)]
powerpc: move a stray NMI IPI case under NMI_IPI ifdef

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: move timer broadcast code under GENERIC_CLOCKEVENTS_BROADCAST ifdef
Nicholas Piggin [Fri, 4 May 2018 17:19:33 +0000 (03:19 +1000)]
powerpc: move timer broadcast code under GENERIC_CLOCKEVENTS_BROADCAST ifdef

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: allow soft-NMI watchdog to cover timer interrupts with large decrementers
Nicholas Piggin [Fri, 4 May 2018 17:19:32 +0000 (03:19 +1000)]
powerpc: allow soft-NMI watchdog to cover timer interrupts with large decrementers

Large decrementers (e.g., POWER9) can take a very long time to wrap,
so when the timer interrupt handler sets the decrementer to max, so as
to avoid taking another decrementer interrupt when hard enabling
interrupts before running timers, it effectively disables the soft-NMI
coverage for timer interrupts.

Fix this by using the traditional 31-bit value instead, which wraps
after a few seconds. The masked interrupt code does the same thing,
and in normal operation neither of these paths would ever wrap even
the 31-bit value.

Note: the SMP watchdog should catch timer interrupt lockups, but it
is preferable for the local soft-NMI to catch them, mainly to avoid
the IPI.
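
A sketch of the change in the timer interrupt path (the literal here
illustrates the 31-bit maximum):

  /* Program a 31-bit max rather than decrementer_max, so the
   * decrementer wraps and re-fires within seconds if this handler
   * gets stuck, keeping the soft-NMI watchdog effective. */
  set_dec(0x7fffffff);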

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: generic clockevents broadcast receiver call tick_receive_broadcast
Nicholas Piggin [Fri, 4 May 2018 17:19:31 +0000 (03:19 +1000)]
powerpc: generic clockevents broadcast receiver call tick_receive_broadcast

The broadcast tick recipient can call tick_receive_broadcast rather
than re-running the full timer interrupt.

It does not have to check for the next event time, because the sender
already determined the timer has expired. It does not have to test
irq_work_pending, because that's a direct decrementer interrupt and
does not go through the clock events subsystem. And it does not have
to read PURR because that was removed with the previous patch.

This results in no code size change, but both the decrementer and
broadcast path lengths are reduced.
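
A sketch of the receiver (handler name assumed):

  static void timer_broadcast_interrupt(void)
  {
          /* The sender already determined expiry; hand straight off
           * to the clockevents core rather than re-running the full
           * timer interrupt. */
          tick_receive_broadcast();
  }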

Cc: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Cc: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/pseries: lparcfg calculate PURR on demand
Nicholas Piggin [Fri, 4 May 2018 17:19:30 +0000 (03:19 +1000)]
powerpc/pseries: lparcfg calculate PURR on demand

For SPLPAR, lparcfg provides a sum of PURR registers for all CPUs.
Currently this is done by reading PURR in context switch and timer
interrupt, and storing that into a per-CPU variable. These are summed
to provide the value.

This does not work with all timer schemes (e.g., NO_HZ_FULL), and it
is sub-optimal for performance because it reads the PURR register on
every context switch, although that has been difficult to distinguish
from noise in the context_switch microbenchmark.

This patch implements the sum by calling a function on each CPU, to
read and add PURR values of each CPU.
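
A sketch of the on-demand sum (helper names assumed):

  static void cpu_get_purr(void *arg)
  {
          atomic64_t *sum = arg;

          atomic64_add(mfspr(SPRN_PURR), sum);
  }

  static u64 lparcfg_sum_purr(void)
  {
          atomic64_t purr = ATOMIC64_INIT(0);

          on_each_cpu(cpu_get_purr, &purr, 1);    /* wait for completion */
          return atomic64_read(&purr);
  }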

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64: remove start_tb and accum_tb from thread_struct
Nicholas Piggin [Fri, 4 May 2018 17:19:29 +0000 (03:19 +1000)]
powerpc/64: remove start_tb and accum_tb from thread_struct

These fields are only written to.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64s: micro-optimise __hard_irq_enable() for mtmsrd L=1 support
Nicholas Piggin [Fri, 4 May 2018 17:19:28 +0000 (03:19 +1000)]
powerpc/64s: micro-optimise __hard_irq_enable() for mtmsrd L=1 support

Book3S minimum supported ISA version now requires mtmsrd L=1. This
instruction does not require bits other than RI and EE to be supplied,
so __hard_irq_enable() and __hard_irq_disable() do not have to read
the kernel_msr from the paca.

Interrupt entry code already relies on L=1 support.
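
With L=1 guaranteed, the helpers reduce to something like (sketch):

  #define __hard_irq_enable()     __mtmsrd(MSR_EE | MSR_RI, 1)
  #define __hard_irq_disable()    __mtmsrd(MSR_RI, 1)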

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/pseries: put cede MSR[EE] check under IRQ_SOFT_MASK_DEBUG
Nicholas Piggin [Fri, 4 May 2018 17:19:26 +0000 (03:19 +1000)]
powerpc/pseries: put cede MSR[EE] check under IRQ_SOFT_MASK_DEBUG

This check does not catch IRQ soft mask bugs, but this option is
slightly more suitable than TRACE_IRQFLAGS.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64: irq_work avoid interrupt when called with hardware irqs enabled
Nicholas Piggin [Fri, 4 May 2018 17:19:25 +0000 (03:19 +1000)]
powerpc/64: irq_work avoid interrupt when called with hardware irqs enabled

irq_work_raise should not cause a decrementer exception unless it is
called from NMI context. Doing so often just results in an immediate
masked decrementer interrupt:

   <...>-550    90d...    4us : update_curr_rt <-dequeue_task_rt
   <...>-550    90d...    5us : dbs_update_util_handler <-update_curr_rt
   <...>-550    90d...    6us : arch_irq_work_raise <-irq_work_queue
   <...>-550    90d...    7us : soft_nmi_interrupt <-soft_nmi_common
   <...>-550    90d...    7us : printk_nmi_enter <-soft_nmi_interrupt
   <...>-550    90d.Z.    8us : rcu_nmi_enter <-soft_nmi_interrupt
   <...>-550    90d.Z.    9us : rcu_nmi_exit <-soft_nmi_interrupt
   <...>-550    90d...    9us : printk_nmi_exit <-soft_nmi_interrupt
   <...>-550    90d...   10us : cpuacct_charge <-update_curr_rt

The soft_nmi_interrupt here is the call into the watchdog, due to the
decrementer interrupt firing with irqs soft-disabled. This is
harmless, but sub-optimal.

When it is called neither from NMI context nor with interrupts
enabled, mark the decrementer pending in the irq_happened mask
directly, rather than having the masked decrementer interrupt handler
do it. This will be replayed at the next local_irq_enable(). See the
comment for details.
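
A sketch of the resulting logic (simplified; the in-tree comment has
the full reasoning):

  void arch_irq_work_raise(void)
  {
          preempt_disable();
          set_irq_work_pending_flag();
          if (in_nmi() || !irqs_disabled()) {
                  /* NMI or irqs-on: raise the real decrementer exception */
                  set_dec(1);
          } else {
                  /* Soft-disabled: mark pending for replay at irq enable */
                  hard_irq_disable();
                  local_paca->irq_happened |= PACA_IRQ_DEC;
          }
          preempt_enable();
  }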

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/powernv/ioda2: Remove redundant free of TCE pages
Alexey Kardashevskiy [Wed, 30 May 2018 09:22:50 +0000 (19:22 +1000)]
powerpc/powernv/ioda2: Remove redundant free of TCE pages

When IODA2 creates a PE, it creates an IOMMU table with it_ops::free
set to pnv_ioda2_table_free() which calls pnv_pci_ioda2_table_free_pages().

Since iommu_tce_table_put() calls it_ops::free when the last reference
to the table is released, an explicit call to
pnv_pci_ioda2_table_free_pages() is not needed, so let's remove it.

This should fix a double free in the case of PCI hotplug, as
pnv_pci_ioda2_table_free_pages() resets neither iommu_table::it_base
nor ::it_size.

This was not exposed by SRIOV as it uses a different code path via
pnv_pcibios_sriov_disable().

IODA1 does not initialize it_ops::free so it does not have this issue.
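
The fix is essentially a deletion; dropping the reference is enough
(sketch):

  /* it_ops::free runs on the last reference put, so no explicit
   * pnv_pci_ioda2_table_free_pages() beforehand */
  iommu_tce_table_put(tbl);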

Fixes: c5f7700bbd2e ("powerpc/powernv: Dynamically release PE")
Cc: stable@vger.kernel.org # v4.8+
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/xmon: use match_string() helper
Yisheng Xie [Thu, 31 May 2018 11:11:25 +0000 (19:11 +0800)]
powerpc/xmon: use match_string() helper

match_string() returns the index of a matching string in an array,
and can be used instead of the open-coded variant.
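
For example (array contents illustrative):

  static const char * const cmds[] = { "start", "stop", "status" };
  int i;

  i = match_string(cmds, ARRAY_SIZE(cmds), str);
  if (i < 0)
          return -EINVAL;         /* no match */
  /* i is the index of the matching entry */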

Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Yisheng Xie <xieyisheng1@huawei.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc: Fix build by disabling attribute-alias warning for SYSCALL_DEFINEx
Christophe Leroy [Tue, 29 May 2018 16:06:41 +0000 (16:06 +0000)]
powerpc: Fix build by disabling attribute-alias warning for SYSCALL_DEFINEx

GCC 8.1 emits warnings such as the following. As arch/powerpc code is
built with -Werror, this breaks the build with GCC 8.1.

  In file included from arch/powerpc/kernel/pci_64.c:23:
  ./include/linux/syscalls.h:233:18: error: 'sys_pciconfig_iobase' alias
  between functions of incompatible types 'long int(long int, long
  unsigned int, long unsigned int)' and 'long int(long int, long int,
  long int)' [-Werror=attribute-alias]
    asmlinkage long sys##name(__MAP(x,__SC_DECL,__VA_ARGS__)) \
                    ^~~
  ./include/linux/syscalls.h:222:2: note: in expansion of macro '__SYSCALL_DEFINEx'
    __SYSCALL_DEFINEx(x, sname, __VA_ARGS__)

This patch inhibits those warnings.
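
One way to inhibit such a warning around the offending expansion is
GCC's diagnostic pragmas (illustrative of the approach, not
necessarily the exact mechanism of the patch):

  #pragma GCC diagnostic push
  #pragma GCC diagnostic ignored "-Wattribute-alias"
  /* SYSCALL_DEFINEx()-based definitions that trigger the warning */
  #pragma GCC diagnostic pop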

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
[mpe: Trim change log]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/64: Fix strncpy() related build failures with GCC 8.1
Christophe Leroy [Tue, 29 May 2018 06:03:53 +0000 (06:03 +0000)]
powerpc/64: Fix strncpy() related build failures with GCC 8.1

GCC 8.1 warns about possible string truncation:

  arch/powerpc/kernel/nvram_64.c:1042:2: error: 'strncpy' specified
  bound 12 equals destination size [-Werror=stringop-truncation]
    strncpy(new_part->header.name, name, 12);

  arch/powerpc/platforms/ps3/repository.c:106:2: error: 'strncpy'
  output truncated before terminating nul copying 8 bytes from a
  string of the same length [-Werror=stringop-truncation]
    strncpy((char *)&n, text, 8);

Fix it by using memcpy(). To make that safe we need to ensure the
destination is pre-zeroed. Use kzalloc() in the nvram code and
initialise the u64 to zero in the ps3 code.
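
The nvram side then looks roughly like (sketch):

  new_part = kzalloc(sizeof(*new_part), GFP_KERNEL);
  if (!new_part)
          return -ENOMEM;

  /* The zeroed buffer supplies any padding, so no terminating NUL
   * needs to fit in the 12-byte field */
  memcpy(new_part->header.name, name, strnlen(name, 12));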

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
[mpe: Use kzalloc() in the nvram code, flesh out change log]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agoMerge branch 'topic/pkey' into next
Michael Ellerman [Sun, 3 Jun 2018 10:35:27 +0000 (20:35 +1000)]
Merge branch 'topic/pkey' into next

This is a branch with a mixture of mm, x86 and powerpc commits all
relating to some minor cross-arch pkeys consolidation. The x86/mm
changes have been reviewed by Ingo & Dave Hansen and the tree has been
in linux-next for some weeks without issue.

6 years agoMerge branch 'fixes' into next
Michael Ellerman [Sun, 3 Jun 2018 10:32:02 +0000 (20:32 +1000)]
Merge branch 'fixes' into next

We ended up with an ugly conflict between fixes and next in ftrace.h
involving multiple nested ifdefs, and the automatic resolution is
wrong. So merge fixes into next so we can fix it up.

6 years agoMerge branch 'topic/kbuild' into next
Michael Ellerman [Sun, 3 Jun 2018 10:24:15 +0000 (20:24 +1000)]
Merge branch 'topic/kbuild' into next

Merge in some commits we're sharing with the kbuild tree.

6 years agoMerge branch 'topic/ppc-kvm' into next
Michael Ellerman [Sun, 3 Jun 2018 10:23:54 +0000 (20:23 +1000)]
Merge branch 'topic/ppc-kvm' into next

Merge in some commits we're sharing with the kvm-ppc tree.

6 years agopowerpc/mm: Fix kernel crash on page table free
Aneesh Kumar K.V [Wed, 30 May 2018 12:32:25 +0000 (18:02 +0530)]
powerpc/mm: Fix kernel crash on page table free

Fix the below crash on 64-bit Book3E: pgtable_page_dtor() expects a
struct page * argument.

Also call the destructor correctly on non-book3s platforms. This frees
up the split PTL locks correctly if we had allocated them before.

Call Trace:
  .kmem_cache_free+0x9c/0x44c (unreliable)
  .ptlock_free+0x1c/0x30
  .tlb_remove_table+0xdc/0x224
  .free_pgd_range+0x298/0x500
  .shift_arg_pages+0x10c/0x1e0
  .setup_arg_pages+0x200/0x25c
  .load_elf_binary+0x450/0x16c8
  .search_binary_handler.part.11+0x9c/0x248
  .do_execveat_common.isra.13+0x868/0xc18
  .run_init_process+0x34/0x4c
  .try_to_run_init_process+0x1c/0x68
  .kernel_init+0xdc/0x130
  .ret_from_kernel_thread+0x58/0x7c
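
The dtor call then takes the struct page, roughly (sketch):

  struct page *page = virt_to_page(table);

  pgtable_page_dtor(page);      /* frees the split PTL lock if present */
  __free_page(page);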

Fixes: 702346768 ("powerpc/mm/nohash: Remove pte fragment dependency from nohash")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/prom: Fix %u/%llx usage since prom_printf() change
Mathieu Malaterre [Tue, 29 May 2018 19:20:01 +0000 (21:20 +0200)]
powerpc/prom: Fix %u/%llx usage since prom_printf() change

In commit eae5f709a4d7 ("powerpc: Add __printf verification to
prom_printf") __printf attribute was added to prom_printf(), which
means GCC started warning about type/format mismatches. As part of
that commit we changed some "%lx" formats to "%llx" where the type is
actually unsigned long long.

Unfortunately prom_printf() doesn't know how to print "%llx"; it just
prints a literal "lx", eg:

  reserved memory map:
    lx - lx
    lx - lx

prom_printf() also doesn't know how to print "%u" (only "%lu"); it
just prints a literal "u", eg:

  Max number of cores passed to firmware: u (NR_CPUS = 2048)

Instead of:

  Max number of cores passed to firmware: 2048 (NR_CPUS = 2048)

This commit adds support for the missing formatters.
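
The '%u' case, for instance, is a small addition to the format loop in
prom_init.c (sketch; the decimal helper name follows the existing
'%lu' handling):

  case 'u':
          v = va_arg(args, unsigned int);
          prom_print_dec(v);
          break;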

Fixes: eae5f709a4d7 ("powerpc: Add __printf verification to prom_printf")
Reported-by: Michael Ellerman <mpe@ellerman.id.au>
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Mathieu Malaterre <malat@debian.org>
Tested-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agocxl: Configure PSL to not use APC virtual machines
Vaibhav Jain [Tue, 17 Apr 2018 05:11:02 +0000 (10:41 +0530)]
cxl: Configure PSL to not use APC virtual machines

APC virtual machines aren't used on POWER9 chips and are already
disabled in the on-chip CAPP. They also need to be disabled on the PSL
via the 'PSL Data Send Control Register', by setting bit 47. This
forces the PSL to send commands to the CAPP with queue.id == 0.
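
A sketch of the register update (variable name illustrative; PPC_BIT()
uses big-endian bit numbering, so bit 47 is 1ull << (63 - 47)):

  psl_data_send_ctl |= PPC_BIT(47);     /* disable APC virtual machines */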

Fixes: 5632874311db ("cxl: Add support for POWER9 DD2")
Cc: stable@vger.kernel.org # v4.15+
Signed-off-by: Vaibhav Jain <vaibhav@linux.vnet.ibm.com>
Acked-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Reviewed-by: Alastair D'Silva <alastair@d-silva.org>
Reviewed-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agocxl: Disable prefault_mode in Radix mode
Vaibhav Jain [Fri, 18 May 2018 09:42:23 +0000 (15:12 +0530)]
cxl: Disable prefault_mode in Radix mode

Currently we see a kernel oops reported on POWER9 while attaching a
context to an AFU, with radix mode and the sysfs attr 'prefault_mode'
set to anything other than 'none'. The backtrace of the oops is of
this form:

  Unable to handle kernel paging request for data at address 0x00000080
  Faulting instruction address: 0xc00800000bcf3b20
  cpu 0x1: Vector: 300 (Data Access) at [c00000037f003800]
      pc: c00800000bcf3b20: cxl_load_segment+0x178/0x290 [cxl]
      lr: c00800000bcf39f0: cxl_load_segment+0x48/0x290 [cxl]
      sp: c00000037f003a80
     msr: 9000000000009033
     dar: 80
   dsisr: 40000000
    current = 0xc00000037f280000
    paca    = 0xc0000003ffffe600   softe: 3        irq_happened: 0x01
      pid   = 3529, comm = afp_no_int
  <snip>
  cxl_prefault+0xfc/0x248 [cxl]
  process_element_entry_psl9+0xd8/0x1a0 [cxl]
  cxl_attach_dedicated_process_psl9+0x44/0x130 [cxl]
  native_attach_process+0xc0/0x130 [cxl]
  afu_ioctl+0x3f4/0x5e0 [cxl]
  do_vfs_ioctl+0xdc/0x890
  ksys_ioctl+0x68/0xf0
  sys_ioctl+0x40/0xa0
  system_call+0x58/0x6c

The issue arises because on POWER8 the AFU attr 'prefault_mode' was
used to improve initial storage fault performance by prefaulting
process segments. However, on POWER9 with radix mode we don't have
storage segments that we can prefault, and prefaulting process pages
would be too costly and fine-grained.

Hence, since the prefaulting mechanism doesn't make sense in radix
mode, this patch updates prefault_mode_store() to not allow any value
apart from CXL_PREFAULT_NONE when radix mode is enabled.
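
A sketch of the guard in prefault_mode_store() (simplified):

  /* With radix there are no segments to prefault */
  if (radix_enabled() && !sysfs_streq(buf, "none"))
          return -EINVAL;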

Fixes: f24be42aab37 ("cxl: Add psl9 specific code")
Cc: stable@vger.kernel.org # v4.12+
Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com>
Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Acked-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
6 years agopowerpc/kbuild: Use flags variables rather than overriding LD/CC/AS
Nicholas Piggin [Wed, 30 May 2018 12:19:21 +0000 (22:19 +1000)]
powerpc/kbuild: Use flags variables rather than overriding LD/CC/AS

The powerpc toolchain can compile combinations of 32/64-bit and
big/little-endian, so it's convenient to consider, e.g.,

  `CC -m64 -mbig-endian`

to be the C compiler for the purpose of invoking it to build target
artifacts, so overriding the CC variable to include these flags works
for this purpose.

Unfortunately that is not compatible with the way the proposed new
Kconfig macro language will work.

After the previous patches in this series, these flags can instead be
passed in carefully via the flags variables.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>