platform/kernel/linux-rpi.git
2 years ago  powerpc/bpf: Rename PPC_BL_ABS() to PPC_BL()
Naveen N. Rao [Mon, 14 Feb 2022 10:41:44 +0000 (16:11 +0530)]
powerpc/bpf: Rename PPC_BL_ABS() to PPC_BL()

PPC_BL_ABS() is just doing a relative branch with link. The name
suggests that it is for branching to an absolute address, which is
incorrect. Rename the macro to a more appropriate PPC_BL().

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/f0e57b6c7a6ee40dba645535b70da46f46e8af5e.1644834730.git.naveen.n.rao@linux.vnet.ibm.com
2 years ago  powerpc64/bpf: Optimize instruction sequence used for function calls
Naveen N. Rao [Mon, 14 Feb 2022 10:41:43 +0000 (16:11 +0530)]
powerpc64/bpf: Optimize instruction sequence used for function calls

When calling BPF helpers, we load the function address to call into a
register. This can result in up to 5 instructions. Optimize this by
instead using the kernel TOC in r2 and adjusting the offset to the BPF
helper. This works since all BPF helpers are part of kernel text, and
all BPF programs/functions utilize the kernel TOC.

Furthermore:
- load the actual function entry address in ELF v1, rather than loading
  it through the function descriptor address.
- load the Local Entry Point (LEP) in ELF v2, skipping TOC setup.
- consolidate code across ELF ABI v1 and v2 by using r12 on both.
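
A rough sketch of the resulting call sequence, using the instruction-emission
macros from ppc-opcode.h (simplified; the real code in bpf_jit_comp64.c also
range-checks the offset and handles the ELF v1/v2 differences noted above, and
kernel_toc_addr() stands in for however the kernel TOC value is obtained):

  /* reladdr is assumed to fit in 32 bits since helpers live in kernel text */
  long reladdr = (long)func_addr - kernel_toc_addr();

  EMIT(PPC_RAW_ADDIS(_R12, _R2, PPC_HA(reladdr)));  /* r12 = kernel TOC + hi(offset) */
  EMIT(PPC_RAW_ADDI(_R12, _R12, PPC_LO(reladdr)));  /* r12 += lo(offset)             */
  EMIT(PPC_RAW_MTCTR(_R12));
  EMIT(PPC_RAW_BCTRL());                            /* call through CTR              */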

Reported-by: Anton Blanchard <anton@ozlabs.org>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1233c7544e60dcb021c52b1f840b0f21a87b33ed.1644834730.git.naveen.n.rao@linux.vnet.ibm.com
2 years ago  powerpc64/bpf elfv1: Do not load TOC before calling functions
Naveen N. Rao [Mon, 14 Feb 2022 10:41:42 +0000 (16:11 +0530)]
powerpc64/bpf elfv1: Do not load TOC before calling functions

BPF helpers always reside in the core kernel and all BPF programs use
the kernel TOC. As such, there is no need to load the TOC before calling
helpers or other BPF functions. Drop the code that does so.

Add a check to ensure we don't proceed if this assumption ever changes
in the future.
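
The check could take roughly the following shape (a sketch of the idea, not
necessarily the exact code added by the patch): refuse to emit the call if the
target is ever outside core kernel text, since the TOC is no longer reloaded
around it.

  if (WARN_ON_ONCE(!core_kernel_text(func_addr)))
      return -EINVAL;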

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/a3cd3da4d24d95d845cd10382b1af083600c9074.1644834730.git.naveen.n.rao@linux.vnet.ibm.com
2 years ago  powerpc64/bpf elfv2: Setup kernel TOC in r2 on entry
Naveen N. Rao [Mon, 14 Feb 2022 10:41:41 +0000 (16:11 +0530)]
powerpc64/bpf elfv2: Setup kernel TOC in r2 on entry

In preparation for using the kernel TOC, load it in r2 on entry. With
ELF v1, the kernel TOC is already set up by our caller.

We adjust the number of instructions to skip on a tail call accordingly.
We get rid of the #ifdef in bpf_jit_emit_tail_call() since
FUNCTION_DESCR_SIZE is itself under an #ifdef.

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/18a05a4ceec14a8617c9dd4b7128d0afa83fd14e.1644834730.git.naveen.n.rao@linux.vnet.ibm.com
2 years ago  powerpc64: Set PPC64_ELF_ABI_v[1|2] macros to 1
Naveen N. Rao [Mon, 14 Feb 2022 10:41:40 +0000 (16:11 +0530)]
powerpc64: Set PPC64_ELF_ABI_v[1|2] macros to 1

Set macros to 1 so that they can be used with __is_defined().
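
For example (a sketch, not a specific call site in the patch; the helper names
are hypothetical), defining the macro to 1 lets C code use it in ordinary
conditionals via __is_defined() from <linux/kconfig.h>:

  if (__is_defined(PPC64_ELF_ABI_v2))
      use_local_entry_point();      /* hypothetical ELFv2 path */
  else
      use_function_descriptor();    /* hypothetical ELFv1 path */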

Suggested-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/abad4868416ddfd42893f99c0cad8e5faf998095.1644834730.git.naveen.n.rao@linux.vnet.ibm.com
2 years ago  powerpc64/bpf: Use r12 for constant blinding
Naveen N. Rao [Mon, 14 Feb 2022 10:41:39 +0000 (16:11 +0530)]
powerpc64/bpf: Use r12 for constant blinding

In preparation for preserving the kernel TOC in r2, switch BPF_REG_AX
from r2 to r12. r12 is not used by the BPF JIT except during external
helper/BPF calls, or with BPF_NOSPEC. These sequences aren't emitted
when BPF_REG_AX is used for constant blinding and other purposes.

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/e109f98617eacb4512c17a48525e94eda42889e6.1644834730.git.naveen.n.rao@linux.vnet.ibm.com
2 years ago  powerpc64/bpf: Do not save/restore LR on each call to bpf_stf_barrier()
Naveen N. Rao [Mon, 14 Feb 2022 10:41:38 +0000 (16:11 +0530)]
powerpc64/bpf: Do not save/restore LR on each call to bpf_stf_barrier()

Instead of saving and restoring LR before each invocation of
bpf_stf_barrier(), set the SEEN_FUNC flag so that we save/restore LR in
the prologue/epilogue.

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/4446f25478d82a2a4ac9dab2ebdfd88ddf923eb7.1644834730.git.naveen.n.rao@linux.vnet.ibm.com
2 years ago  powerpc/bpf: Handle large branch ranges with BPF_EXIT
Naveen N. Rao [Mon, 14 Feb 2022 10:41:37 +0000 (16:11 +0530)]
powerpc/bpf: Handle large branch ranges with BPF_EXIT

In some scenarios, it is possible that the program epilogue is outside
the branch range for a BPF_EXIT instruction. Instead of rejecting such
programs, emit the epilogue as an alternate exit point from the program.
Track its location so that subsequent exits can take either of the two
paths.

Reported-by: Jordan Niethe <jniethe5@gmail.com>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/33aa2e92645a92712be23b18035a2c6dcb92ff8d.1644834730.git.naveen.n.rao@linux.vnet.ibm.com
2 years ago  powerpc/bpf: Emit a single branch instruction for known short branch ranges
Naveen N. Rao [Mon, 14 Feb 2022 10:41:36 +0000 (16:11 +0530)]
powerpc/bpf: Emit a single branch instruction for known short branch ranges

PPC_BCC() emits two instructions to accommodate scenarios where we need
to branch outside the range of a conditional branch. PPC_BCC_SHORT()
emits a single branch instruction and can be used when the branch is
known to be within a conditional branch range.

Convert some of the uses of PPC_BCC() in the powerpc BPF JIT over to
PPC_BCC_SHORT() where we know the branch range.
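
The difference between the two, roughly (a pseudo-assembly sketch, not the
exact macro expansions): conditional branches have a 16-bit displacement
(about +/- 32KB), so PPC_BCC() inverts the condition to hop over an
unconditional branch with the larger +/- 32MB reach, while PPC_BCC_SHORT()
emits just the conditional branch.

  PPC_BCC(cond, dest):         bc  !cond, +8     # skip the next instruction
                               b   dest          # +/- 32MB reach
  PPC_BCC_SHORT(cond, dest):   bc  cond, dest    # +/- 32KB reach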

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/edbca01377d1d5f472868bf6d8962b0a0d85b96f.1644834730.git.naveen.n.rao@linux.vnet.ibm.com
2 years ago  powerpc/bpf: Skip branch range validation during first pass
Naveen N. Rao [Mon, 14 Feb 2022 10:41:35 +0000 (16:11 +0530)]
powerpc/bpf: Skip branch range validation during first pass

During the first pass, addrs[] is still being populated. So, all
branches to following instructions will appear to be going to the start
of the JIT program. Ignore branch range validation for such instructions
and assume them to be in range. Branch range validation will happen
during the second pass, after addrs[] is set up properly.
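
In other words (a simplified sketch of the idea, assuming the usual JIT
convention that 'image' is NULL on the sizing pass), the range check is only
enforced once the target addresses are final:

  /* pass 1: image == NULL, addrs[] not final yet -> accept the branch   */
  /* pass 2: image != NULL, addrs[] populated     -> validate the range  */
  if (image && !is_offset_in_cond_branch_range(offset))
      return -ERANGE;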

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/bc517413d11636e20dbfc88503dad14bcbe391e2.1644834730.git.naveen.n.rao@linux.vnet.ibm.com
2 years ago  powerpc/code-patching: Pre-map patch area
Michael Ellerman [Wed, 23 Feb 2022 01:58:21 +0000 (12:58 +1100)]
powerpc/code-patching: Pre-map patch area

Paul reported a warning with DEBUG_ATOMIC_SLEEP=y:

  BUG: sleeping function called from invalid context at include/linux/sched/mm.h:256
  in_atomic(): 0, irqs_disabled(): 1, non_block: 0, pid: 1, name: swapper/0
  preempt_count: 0, expected: 0
  ...
  Call Trace:
    dump_stack_lvl+0xa0/0xec (unreliable)
    __might_resched+0x2f4/0x310
    kmem_cache_alloc+0x220/0x4b0
    __pud_alloc+0x74/0x1d0
    hash__map_kernel_page+0x2cc/0x390
    do_patch_instruction+0x134/0x4a0
    arch_jump_label_transform+0x64/0x78
    __jump_label_update+0x148/0x180
    static_key_enable_cpuslocked+0xd0/0x120
    static_key_enable+0x30/0x50
    check_kvm_guest+0x60/0x88
    pSeries_smp_probe+0x54/0xb0
    smp_prepare_cpus+0x3e0/0x430
    kernel_init_freeable+0x20c/0x43c
    kernel_init+0x30/0x1a0
    ret_from_kernel_thread+0x5c/0x64

Peter pointed out that this is because do_patch_instruction() has
disabled interrupts, but then map_patch_area() calls map_kernel_page()
then hash__map_kernel_page() which does a sleeping memory allocation.

We only see the warning in KVM guests with SMT enabled, which is not
particularly common, or on other platforms if CONFIG_KPROBES is
disabled, also not common. The reason we don't see it in most
configurations is that another path that happens to have interrupts
enabled has allocated the required page tables for us, eg. there's a
path in kprobes init that does that. That's just pure luck though.

As Christophe suggested, the simplest solution is to do a dummy
map/unmap when we initialise the patching, so that any required page
table levels are pre-allocated before the first call to
do_patch_instruction(). This works because the unmap doesn't free any
page tables that were allocated by the map, it just clears the PTE,
leaving the page table levels there for the next map.
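
A simplified sketch of that dummy map/unmap (assuming the map_patch_area() /
unmap_patch_area() helpers in arch/powerpc/lib/code-patching.c; the exact hook
and error handling differ in the real patch):

  /* run at init time, where sleeping allocations are still allowed */
  addr = (unsigned long)area->addr;              /* per-cpu text_poke_area      */
  err = map_patch_area(empty_zero_page, addr);   /* may allocate PUD/PMD/PTE    */
  if (err)
      return err;
  unmap_patch_area(addr);                        /* clears the PTE, keeps levels */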

Reported-by: Paul Menzel <pmenzel@molgen.mpg.de>
Debugged-by: Peter Zijlstra <peterz@infradead.org>
Suggested-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220223015821.473097-1-mpe@ellerman.id.au
2 years ago  powerpc/64s: Don't use DSISR for SLB faults
Michael Ellerman [Tue, 22 Feb 2022 11:34:49 +0000 (22:34 +1100)]
powerpc/64s: Don't use DSISR for SLB faults

Since commit 46ddcb3950a2 ("powerpc/mm: Show if a bad page fault on data
is read or write.") we use page_fault_is_write(regs->dsisr) in
__bad_page_fault() to determine if the fault is for a read or write, and
change the message printed accordingly.

But SLB faults, aka Data Segment Interrupts, don't set DSISR (Data
Storage Interrupt Status Register) to a useful value. All ISA versions
from v2.03 through v3.1 specify that the Data Segment Interrupt sets
DSISR "to an undefined value". As far as I can see there's no mention of
SLB faults setting DSISR in any Book IV content either.

This manifests as accesses that should be a read being incorrectly
reported as writes, for example, using the xmon "dump" command:

  0:mon> d 0x5deadbeef0000000
  5deadbeef0000000
  [359526.415354][    C6] BUG: Unable to handle kernel data access on write at 0x5deadbeef0000000
  [359526.415611][    C6] Faulting instruction address: 0xc00000000010a300
  cpu 0x6: Vector: 380 (Data SLB Access) at [c00000000ffbf400]
      pc: c00000000010a300: mread+0x90/0x190

If we disassemble the PC, we see a load instruction:

  0:mon> di c00000000010a300
  c00000000010a300 89490000      lbz     r10,0(r9)

We can also see in exceptions-64s.S that the data_access_slb block
doesn't set IDSISR=1, which means it doesn't load DSISR into pt_regs. So
the value we're using to determine if the fault is a read/write is some
stale value in pt_regs from a previous page fault.

Rework the printing logic to separate the SLB fault case out, and only
print read/write in the cases where we can determine it.

The result looks like eg:

  0:mon> d 0x5deadbeef0000000
  5deadbeef0000000
  [  721.779525][    C6] BUG: Unable to handle kernel data access at 0x5deadbeef0000000
  [  721.779697][    C6] Faulting instruction address: 0xc00000000014cbe0
  cpu 0x6: Vector: 380 (Data SLB Access) at [c00000000ffbf390]

  0:mon> d 0
  0000000000000000
  [  742.793242][    C6] BUG: Kernel NULL pointer dereference at 0x00000000
  [  742.793316][    C6] Faulting instruction address: 0xc00000000014cbe0
  cpu 0x6: Vector: 380 (Data SLB Access) at [c00000000ffbf390]

Fixes: 46ddcb3950a2 ("powerpc/mm: Show if a bad page fault on data is read or write.")
Reported-by: Nageswara R Sastry <rnsastry@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Link: https://lore.kernel.org/r/20220222113449.319193-1-mpe@ellerman.id.au
2 years ago  powerpc/sysdev: fix incorrect use to determine if list is empty
Jakob Koschel [Mon, 28 Feb 2022 14:24:33 +0000 (15:24 +0100)]
powerpc/sysdev: fix incorrect use to determine if list is empty

'gtm' will *always* be set by list_for_each_entry().
It is incorrect to assume that the iterator value will be NULL if the
list is empty.

Instead of checking the pointer, check whether the list is empty.
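
An illustration of the pattern (hypothetical names, not the exact fsl_gtm.c
code):

  struct gtm *gtm = NULL;

  list_for_each_entry(gtm, &gtms, list_node)
      do_something(gtm);            /* hypothetical per-entry work */

  if (!gtm)                         /* WRONG: never true, even when 'gtms' is empty */
      return -ENODEV;

  if (list_empty(&gtms))            /* RIGHT: ask the list itself */
      return -ENODEV;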

Fixes: 83ff9dcf375c ("powerpc/sysdev: implement FSL GTM support")
Signed-off-by: Jakob Koschel <jakobkoschel@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220228142434.576226-1-jakobkoschel@gmail.com
2 years ago  powerpc/pseries/vas: Add VAS migration handler
Haren Myneni [Wed, 2 Mar 2022 08:51:58 +0000 (00:51 -0800)]
powerpc/pseries/vas: Add VAS migration handler

Since the VAS windows belong to the VAS hardware resource, the
hypervisor expects the partition to close them on the source partition
and reopen them after the partition has migrated to the destination
machine.

This handler is called before pseries_suspend() to close these
windows and is invoked again after migration. All active windows
of both default and QoS types are closed and marked inactive, and
are reopened after migration by this handler. During the migration,
user space receives a paste instruction failure if it issues
copy/paste on these inactive windows.

The current migration implementation does not freeze user space,
and applications can continue to open VAS windows while migration
is in progress. So when the migration_in_progress flag is set, the
VAS open window API returns -EBUSY.

Signed-off-by: Haren Myneni <haren@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/05e45ff4f8babd2490ccb7ae923884f4aa21a7e5.camel@linux.ibm.com
2 years ago  powerpc/pseries/vas: Modify reconfig open/close functions for migration
Haren Myneni [Mon, 28 Feb 2022 07:52:08 +0000 (23:52 -0800)]
powerpc/pseries/vas: Modify reconfig open/close functions for migration

VAS is a hardware engine that resides on the chip. So when the
partition migrates, all VAS windows on the source system have to be
closed and reopened on the destination after migration.

The kernel has to consider both DLPAR CPU and migration events to
take action on VAS windows. So use the VAS_WIN_NO_CRED_CLOSE and
VAS_WIN_MIGRATE_CLOSE status bits; windows are reopened after
migration only once both status bits are cleared.

This patch makes changes to the current reconfig_open/close_windows
functions to support migration:
- Set VAS_WIN_MIGRATE_CLOSE in the window status on close and
  reopen windows with the same status during resume.
- Continue to close all windows even if the deallocate HCALL fails
  (should not happen), since there is no way to stop migration with
  the current LPM implementation.
- If a DLPAR CPU event happens while migration is in progress, set
  VAS_WIN_NO_CRED_CLOSE in the window status. The window is closed
  on the first event (migration or DLPAR) and reopened only on the
  last event (migration or DLPAR).

Signed-off-by: Haren Myneni <haren@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/0aad580387cb58379496b4cbbd7c5596e9ea70be.camel@linux.ibm.com
2 years ago  powerpc/pseries/vas: Define global hv_cop_caps struct
Haren Myneni [Mon, 28 Feb 2022 07:51:28 +0000 (23:51 -0800)]
powerpc/pseries/vas: Define global hv_cop_caps struct

The coprocessor capabilities struct is used to get default and
QoS capabilities from the hypervisor during init, DLPAR events and
migration. So instead of allocating this struct for each event,
define a global struct and reuse it, which allows the migration code
to avoid adding an error path.

Also disable the copy/paste feature flag if any capabilities HCALL
fails.

Signed-off-by: Haren Myneni <haren@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Acked-by: Nathan Lynch <nathanl@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/57da6a270fcb9308cd57be7c88037029343080f7.camel@linux.ibm.com
2 years ago  powerpc/pseries/vas: Add 'update_total_credits' entry for QoS capabilities
Haren Myneni [Tue, 1 Mar 2022 01:16:10 +0000 (17:16 -0800)]
powerpc/pseries/vas: Add 'update_total_credits' entry for QoS capabilities

pseries supports two types of credits - default (uses the normal
priority FIFO) and quality of service (QoS, uses the high priority
FIFO). The user decides the number of QoS credits and sets this value
with the HMC interface. The total credits for QoS capabilities can be
changed dynamically with the HMC interface, which invokes drmgr to
communicate to the kernel.

This patch creates an 'update_total_credits' entry for QoS capabilities
so that the drmgr command can write the new target QoS credits in sysfs.
Instead of using this value directly, the kernel gets the new QoS
capabilities from the hypervisor whenever update_total_credits is
updated, to make sure it stays in sync with the QoS target credits in
the hypervisor.

Signed-off-by: Haren Myneni <haren@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/b01ef31a0f964686d00243e7de7f09c73c07e69e.camel@linux.ibm.com
2 years ago  powerpc/pseries/vas: sysfs interface to export capabilities
Haren Myneni [Tue, 1 Mar 2022 01:15:36 +0000 (17:15 -0800)]
powerpc/pseries/vas: sysfs interface to export capabilities

The hypervisor provides the available VAS GZIP capabilities such
as default or QoS window type and the target available credits in
each type. This patch creates sysfs entries and exports the target,
used and the available credits for each feature.

This interface can be used by the user space to determine the credits
usage or to set the target credits in the case of QoS type (for DLPAR).

/sys/devices/vas/vas0/gzip/default_capabilities (default GZIP capabilities)
nr_total_credits /* Total credits available. Can be changed with DLPAR operation */
nr_used_credits  /* Used credits */

/sys/devices/vas/vas0/gzip/qos_capabilities (QoS GZIP capabilities)
nr_total_credits
nr_used_credits

Signed-off-by: Haren Myneni <haren@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/702d8b626ebfac2b52f4995eebeafe1c9a6fcb75.camel@linux.ibm.com
2 years ago  powerpc/pseries/vas: Reopen windows with DLPAR core add
Haren Myneni [Tue, 1 Mar 2022 01:15:04 +0000 (17:15 -0800)]
powerpc/pseries/vas: Reopen windows with DLPAR core add

VAS windows can be closed in the hypervisor due to lost credits
when a core is removed, and the kernel then gets faults for NX
requests on these inactive windows: the OS gets page faults and a
paste failure is returned to user space. If the lost credits
become available later with a core add, reopen these windows and
set them active. Later, when the OS sees page faults on these
active windows, it creates a mapping on the new paste address.
Then user space can continue to use these windows and send HW
compression requests to NX successfully.

Signed-off-by: Haren Myneni <haren@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/d9f360e21355e6826142c81146acfa9b60bc7ecc.camel@linux.ibm.com
2 years ago  powerpc/pseries/vas: Close windows with DLPAR core removal
Haren Myneni [Tue, 1 Mar 2022 01:14:28 +0000 (17:14 -0800)]
powerpc/pseries/vas: Close windows with DLPAR core removal

The hypervisor assigns vas credits (windows) for each LPAR based
on the number of cores configured in that system. The OS is
expected to release credits when cores are removed, and may
allocate more when cores are added. So there is a possibility of
using excessive credits (windows) in the LPAR and the hypervisor
expects the system to close the excessive windows so that NX load
can be equally distributed across all LPARs in the system.

When the OS closes the excessive windows in the hypervisor,
it sets the window status to inactive and invalidates the window's
virtual address mapping. User space receives a paste instruction
failure if any NX requests are issued on the inactive window.
Then user space can use the remaining open windows or retry NX
requests until this window is active again.

This patch also adds the notifier for core removal/add to close
windows in the hypervisor if the system lost credits (core
removal) and reopen windows in the hypervisor when the previously
lost credits are available.

Signed-off-by: Haren Myneni <haren@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/108928f9c00a48cc6a722315d482d07cf66acf5a.camel@linux.ibm.com
2 years ago  powerpc/vas: Map paste address only if window is active
Haren Myneni [Tue, 1 Mar 2022 01:13:54 +0000 (17:13 -0800)]
powerpc/vas: Map paste address only if window is active

The paste address mapping is done with mmap() after the window is
opened with ioctl. The partition has to close VAS windows in the
hypervisor if it loses credits due to DLPAR core removal. The
kernel marks these windows inactive until the previously lost
credits are available again. If the window becomes inactive due to
DLPAR after this mmap(), the paste instruction returns failure
until the OS reopens the window.

A DLPAR core removal event can also happen before user space issues
mmap(), leaving the corresponding window inactive. So if the window
is not active, return mmap() failure with -EACCES and expect user
space to reissue mmap() when the window is active, or to open a new
window when a credit is available.

Signed-off-by: Haren Myneni <haren@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/bbb203c26b324534e25658cb1dbbcb5160a2f93a.camel@linux.ibm.com
2 years ago  powerpc/vas: Return paste instruction failure if no active window
Haren Myneni [Tue, 1 Mar 2022 01:13:15 +0000 (17:13 -0800)]
powerpc/vas: Return paste instruction failure if no active window

The VAS window may not be active if the system loses credits, and
the NX generates a page fault when it receives a request on the
unmapped paste address.

The kernel handles the fault by remapping the new paste address if
the window is active again. Otherwise it returns a paste instruction
failure, provided the instruction that caused the fault was a paste.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Haren Myneni <haren@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/492b9aefd593061d51dda67ee4d2fc449c000dce.camel@linux.ibm.com
2 years ago  powerpc/vas: Add paste address mmap fault handler
Haren Myneni [Tue, 1 Mar 2022 01:12:41 +0000 (17:12 -0800)]
powerpc/vas: Add paste address mmap fault handler

User space opens VAS windows and issues NX requests by pasting a
CRB to the corresponding mmap()ed paste address. When the system
loses credits due to core removal, the kernel has to close the window
in the hypervisor and make the window inactive by unmapping this paste
address. The OS also has to handle NX request page faults if user
space keeps issuing NX requests.

This handler maps the new paste address with the same VMA when the
window is active again (due to core add with DLPAR). Otherwise it
returns a paste failure.
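
A sketch of the handler's shape (simplified; names such as 'win_is_active'
and 'paste_pfn' are placeholders, not the actual pseries VAS code):

  static vm_fault_t vas_mmap_fault(struct vm_fault *vmf)
  {
      struct vm_area_struct *vma = vmf->vma;
      struct vas_window *win = vma->vm_file->private_data;

      if (win_is_active(win))                 /* reopened after DLPAR core add */
          return vmf_insert_pfn(vma, vma->vm_start, paste_pfn);

      return VM_FAULT_SIGBUS;                 /* user space sees the paste fail */
  }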

Signed-off-by: Haren Myneni <haren@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/3956e1c1fdfde69127055ff1c0256c7d71104030.camel@linux.ibm.com
2 years ago  powerpc/pseries/vas: Save PID in pseries_vas_window struct
Haren Myneni [Tue, 1 Mar 2022 01:12:04 +0000 (17:12 -0800)]
powerpc/pseries/vas: Save PID in pseries_vas_window struct

The kernel sets the VAS window with the PID when it is opened in
the hypervisor. During DLPAR operations, windows can be closed and
reopened in the hypervisor when the credit is available. So save
this PID in the pseries_vas_window struct when the window is first
opened and reuse it later during DLPAR operations.

Signed-off-by: Haren Myneni <haren@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/a57cbe6d292fe49ad55a0b49c5679d6a24d8fe73.camel@linux.ibm.com
2 years ago  powerpc/pseries/vas: Use common names in VAS capability structure
Haren Myneni [Tue, 1 Mar 2022 01:11:28 +0000 (17:11 -0800)]
powerpc/pseries/vas: Use common names in VAS capability structure

nr_total/nr_used_credits provide credit usage to user space
via sysfs, and the same interface can be used on PowerNV in the
future. Rename them so that the names are applicable on both
pseries and PowerNV.

Signed-off-by: Haren Myneni <haren@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/f4313e9f198ee4f8d4fa4d015d8d1873e17851e6.camel@linux.ibm.com
2 years ago  Merge branch 'topic/ppc-kvm' into next
Michael Ellerman [Mon, 7 Mar 2022 13:02:29 +0000 (00:02 +1100)]
Merge branch 'topic/ppc-kvm' into next

Merge our topic branch containing powerpc KVM related commits.

Alexey Kardashevskiy (1):
      KVM: PPC: Merge powerpc's debugfs entry content into generic entry

Fabiano Rosas (9):
      KVM: PPC: Book3S HV: Stop returning internal values to userspace
      KVM: PPC: Fix vmx/vsx mixup in mmio emulation
      KVM: PPC: mmio: Reject instructions that access more than mmio.data size
      KVM: PPC: mmio: Return to guest after emulation failure
      KVM: PPC: Book3s: mmio: Deliver DSI after emulation failure
      KVM: PPC: Book3S HV: Check return value of kvmppc_radix_init
      KVM: PPC: Book3S HV: Delay setting of kvm ops
      KVM: PPC: Book3S HV: Free allocated memory if module init fails
      KVM: PPC: Decrement module refcount if init_vm fails

Jason Wang (1):
      powerpc/kvm: no need to initialise statics to 0

Nour-eddine Taleb (1):
      KVM: PPC: Book3S HV: remove unnecessary casts

2 years ago  Merge branch 'topic/func-desc-lkdtm' into next
Michael Ellerman [Mon, 7 Mar 2022 12:34:32 +0000 (23:34 +1100)]
Merge branch 'topic/func-desc-lkdtm' into next

Merge a topic branch we are maintaining with some cross-architecture
changes to function descriptor handling and their use in LKDTM.

From Christophe's cover letter:

Fix LKDTM for PPC64/IA64/PARISC

PPC64/IA64/PARISC have function descriptors. LKDTM doesn't work on those
three architectures because LKDTM confuses function descriptors with
functions.

This series does some cleanup in the three architectures and refactors
function descriptor handling so that it can then easily be used in a
generic way in LKDTM.

2 years ago  KVM: PPC: Book3S HV: remove unnecessary casts
Nour-eddine Taleb [Thu, 3 Mar 2022 14:34:16 +0000 (15:34 +0100)]
KVM: PPC: Book3S HV: remove unnecessary casts

Remove unnecessary casts, from "void *" to "struct kvmppc_xics *"

Signed-off-by: Nour-eddine Taleb <kernel.noureddine@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220303143416.201851-1-kernel.noureddine@gmail.com
2 years ago  powerpc/lib/sstep: Fix build errors with newer binutils
Anders Roxell [Thu, 24 Feb 2022 16:22:15 +0000 (17:22 +0100)]
powerpc/lib/sstep: Fix build errors with newer binutils

Building tinyconfig with gcc (Debian 11.2.0-16) and assembler (Debian
2.37.90.20220207) the following build error shows up:

  {standard input}: Assembler messages:
  {standard input}:10576: Error: unrecognized opcode: `stbcx.'
  {standard input}:10680: Error: unrecognized opcode: `lharx'
  {standard input}:10694: Error: unrecognized opcode: `lbarx'

Rework to add assembler directives [1] around the instruction.  The
problem with this might be that we can trick a power6 into
single-stepping through an stbcx. for instance, and it will execute that
in kernel mode.

[1] https://sourceware.org/binutils/docs/as/PowerPC_002dPseudo.html#PowerPC_002dPseudo

Fixes: 350779a29f11 ("powerpc: Handle most loads and stores in instruction emulation code")
Cc: stable@vger.kernel.org # v4.14+
Co-developed-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
Reviewed-by: Segher Boessenkool <segher@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220224162215.3406642-3-anders.roxell@linaro.org
2 years ago  powerpc: Fix build errors with newer binutils
Anders Roxell [Thu, 24 Feb 2022 16:22:14 +0000 (17:22 +0100)]
powerpc: Fix build errors with newer binutils

Building tinyconfig with gcc (Debian 11.2.0-16) and assembler (Debian
2.37.90.20220207) the following build error shows up:

  {standard input}: Assembler messages:
  {standard input}:1190: Error: unrecognized opcode: `stbcix'
  {standard input}:1433: Error: unrecognized opcode: `lwzcix'
  {standard input}:1453: Error: unrecognized opcode: `stbcix'
  {standard input}:1460: Error: unrecognized opcode: `stwcix'
  {standard input}:1596: Error: unrecognized opcode: `stbcix'
  ...

Rework to add assembler directives [1] around the instructions. Going
through them one by one shows that the changes should be safe. For
example, __get_user_atomic_128_aligned() is only called in
p9_hmi_special_emu(), which according to the name is specific to power9.
And __raw_rm_read*() are only called in things that are powernv or
book3s_hv specific.

[1] https://sourceware.org/binutils/docs/as/PowerPC_002dPseudo.html#PowerPC_002dPseudo

Cc: stable@vger.kernel.org
Co-developed-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
Reviewed-by: Segher Boessenkool <segher@kernel.crashing.org>
[mpe: Make commit subject more descriptive]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220224162215.3406642-2-anders.roxell@linaro.org
2 years ago  powerpc/lib/sstep: Fix 'sthcx' instruction
Anders Roxell [Thu, 24 Feb 2022 16:22:13 +0000 (17:22 +0100)]
powerpc/lib/sstep: Fix 'sthcx' instruction

It looks like there was a copy-paste mistake: the instruction 'stbcx'
was added twice, and one of them was probably meant to be 'sthcx'.
Change the duplicate from 'stbcx' to 'sthcx'.

Fixes: 350779a29f11 ("powerpc: Handle most loads and stores in instruction emulation code")
Cc: stable@vger.kernel.org # v4.14+
Reported-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220224162215.3406642-1-anders.roxell@linaro.org
2 years ago  powerpc/Makefile: Don't pass -mcpu=powerpc64 when building 32-bit
Michael Ellerman [Tue, 15 Feb 2022 11:28:58 +0000 (22:28 +1100)]
powerpc/Makefile: Don't pass -mcpu=powerpc64 when building 32-bit

When CONFIG_GENERIC_CPU=y (true for all our defconfigs) we pass
-mcpu=powerpc64 to the compiler, even when we're building a 32-bit
kernel.

This happens because we have an ifdef CONFIG_PPC_BOOK3S_64/else block in
the Makefile that was written before 32-bit supported GENERIC_CPU. Prior
to that the else block only applied to 64-bit Book3E.

The GCC man page says -mcpu=powerpc64 "[specifies] a pure ... 64-bit big
endian PowerPC ... architecture machine [type], with an appropriate,
generic processor model assumed for scheduling purposes."

It's unclear how that interacts with -m32, which we are also passing,
although obviously -m32 is taking precedence in some sense, as the
32-bit kernel only contains 32-bit instructions.

This was noticed by inspection, not via any bug reports, but it does
affect code generation. Comparing before/after code generation, there
are some changes to instruction scheduling, and in the after case (with
-mcpu=powerpc64 removed) the compiler seems more keen to use r8.

Fix it by making the else case apply only to 64-bit Book3E, which
excludes 32-bit.

Fixes: 0e00a8c9fd92 ("powerpc: Allow CPU selection also on PPC32")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220215112858.304779-1-mpe@ellerman.id.au
2 years ago  powerpc/mm/numa: skip NUMA_NO_NODE onlining in parse_numa_properties()
Daniel Henrique Barboza [Thu, 24 Feb 2022 18:23:12 +0000 (15:23 -0300)]
powerpc/mm/numa: skip NUMA_NO_NODE onlining in parse_numa_properties()

Executing node_set_online() when nid = NUMA_NO_NODE results in an
undefined behavior. node_set_online() will call node_set_state(), into
__node_set(), into set_bit(), and since NUMA_NO_NODE is -1 we'll end up
doing a negative shift operation inside
arch/powerpc/include/asm/bitops.h. This potential UB was detected
running a kernel with CONFIG_UBSAN.

The behavior was introduced by commit 10f78fd0dabb ("powerpc/numa: Fix a
regression on memoryless node 0"), where the check for nid > 0 was
removed to fix a problem that was happening with nid = 0, but the result
is that now we're trying to online NUMA_NO_NODE nids as well.

Checking for nid >= 0 will allow node 0 to be onlined while avoiding
this UB with NUMA_NO_NODE.
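
A sketch of the guard (simplified; parse_numa_properties() in
arch/powerpc/mm/numa.c has more context around it):

  nid = of_node_to_nid_single(node);

  if (likely(nid >= 0) && !node_online(nid))   /* skip NUMA_NO_NODE (-1) */
      node_set_online(nid);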

Fixes: 10f78fd0dabb ("powerpc/numa: Fix a regression on memoryless node 0")
Reported-by: Ping Fang <pifang@redhat.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220224182312.1012527-1-danielhb413@gmail.com
2 years ago  powerpc/interrupt: Remove struct interrupt_state
Christophe Leroy [Fri, 25 Feb 2022 16:36:22 +0000 (17:36 +0100)]
powerpc/interrupt: Remove struct interrupt_state

Since commit ceff77efa4f8 ("powerpc/64e/interrupt: Use new interrupt
context tracking scheme") struct interrupt_state has been empty and
unused.

Remove it.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1d862ce3eab3da6ca7ac47d4a78a18f154462511.1645806970.git.christophe.leroy@csgroup.eu
2 years ago  powerpc/fadump: register for fadump as early as possible
Hari Bathini [Tue, 1 Feb 2022 10:53:05 +0000 (16:23 +0530)]
powerpc/fadump: register for fadump as early as possible

Crash recovery (fadump) is set up in userspace by some service. This
service rebuilds the initrd with dump capture capability, if it is not
already dump capture capable, before proceeding to register for firmware
assisted dump (echo 1 > /sys/kernel/fadump/registered). But arming the
kernel with crash recovery support does not have to wait for userspace
configuration. So, register for fadump during its setup itself. This
can at worst lead to a scenario where /proc/vmcore is ready after a
crash but the initrd does not know how/where to offload it, which is
still better than not having a /proc/vmcore at all due to incomplete
configuration in userspace at the time of the crash.

Commit 0823c68b054b ("powerpc/fadump: re-register firmware-assisted dump
if already registered") ensures this change does not break userspace.

Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
[mpe: Reword comment]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220201105305.155511-1-hbathini@linux.ibm.com
2 years ago  selftests/powerpc/pmu: Add interface test for mmcra register fields
Kajol Jain [Thu, 27 Jan 2022 07:20:12 +0000 (12:50 +0530)]
selftests/powerpc/pmu: Add interface test for mmcra register fields

The testcase uses event code 0x35340401e0 to verify the settings of
different fields in Monitor Mode Control Register A (MMCRA). The fields
include thresh_start, thresh_stop, thresh_select, sdar mode, sample and
the marked bit. Check that these fields are translated correctly via the
perf interface to MMCRA.

Signed-off-by: Kajol Jain <kjain@linux.ibm.com>
[mpe: Add error checking, drop GET_MMCR_FIELD, add to .gitignore]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220127072012.662451-21-kjain@linux.ibm.com
2 years ago  selftests/powerpc/pmu/: Add interface test for mmcr3_src fields
Kajol Jain [Thu, 27 Jan 2022 07:20:11 +0000 (12:50 +0530)]
selftests/powerpc/pmu/: Add interface test for mmcr3_src fields

The testcase uses event code 0x1340000001c040 to verify the settings of
different src fields in Monitor Mode Control Register 3 (MMCR3). Check
that these fields are translated correctly via the perf interface to
MMCR3 on the ISA v3.1 platform.

Signed-off-by: Kajol Jain <kjain@linux.ibm.com>
[mpe: Add error checking, drop GET_MMCR_FIELD, add to .gitignore]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220127072012.662451-20-kjain@linux.ibm.com
2 years ago  selftests/powerpc/pmu/: Add interface test for mmcr2_fcs_fch fields
Madhavan Srinivasan [Thu, 27 Jan 2022 07:20:10 +0000 (12:50 +0530)]
selftests/powerpc/pmu/: Add interface test for mmcr2_fcs_fch fields

The testcase uses the cycles event to verify the freeze counter settings
in Monitor Mode Control Register 2 (MMCR2). The event modifier
(exclude_kernel) setting is used in the event attribute to check the
FCxS and FCxH (freeze counter in privileged and hypervisor state)
settings via the perf interface.

Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com>
[mpe: Add error checking, check MSR for MSR_HV, drop GET_MMCR_FIELD, add to .gitignore]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220127072012.662451-19-kjain@linux.ibm.com
2 years ago  selftests/powerpc/pmu/: Add interface test for mmcr2_l2l3 field
Madhavan Srinivasan [Thu, 27 Jan 2022 07:20:09 +0000 (12:50 +0530)]
selftests/powerpc/pmu/: Add interface test for mmcr2_l2l3 field

The testcase uses event code 0x010000046080 to verify the l2l3 bit
setting in Monitor Mode Control Register 2 (MMCR2). Check that this bit
is set correctly via the perf interface on the ISA v3.1 platform.

Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com>
[mpe: Add error checking, drop GET_MMCR_FIELD, add to .gitignore]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220127072012.662451-18-kjain@linux.ibm.com
2 years ago  selftests/powerpc/pmu/: Add interface test for mmcr1_comb field
Athira Rajeev [Thu, 27 Jan 2022 07:20:07 +0000 (12:50 +0530)]
selftests/powerpc/pmu/: Add interface test for mmcr1_comb field

The testcase uses event code "0x26880" to verify the settings of
different fields in Monitor Mode Control Register 1 (MMCR1), in
particular PMCxCOMB. Check that this field is translated correctly via
the perf interface to MMCR1.

Add a selftest for the MMCR1 comb field.

Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
[mpe: Add error checking, drop GET_MMCR_FIELD, add to .gitignore]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220127072012.662451-16-kjain@linux.ibm.com
2 years ago  selftests/powerpc/pmu/: Add interface test for mmcr0_pmc56 using pmc5
Athira Rajeev [Thu, 27 Jan 2022 07:20:06 +0000 (12:50 +0530)]
selftests/powerpc/pmu/: Add interface test for mmcr0_pmc56 using pmc5

The testcase uses event code 0x500fa to verify the FC5-6 bit setting in
Monitor Mode Control Register 0 (MMCR0). Check if FC5-6 bit is not set
in MMCR0 when using Performance Monitor Counter 5 and 6 (PMC5 and PMC6).

Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
[mpe: Add error checking, drop GET_MMCR_FIELD, add to .gitignore]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220127072012.662451-15-kjain@linux.ibm.com
2 years ago  selftests/powerpc/pmu/: Add interface test for mmcr0_fc56 field using pmc1
Athira Rajeev [Thu, 27 Jan 2022 07:20:05 +0000 (12:50 +0530)]
selftests/powerpc/pmu/: Add interface test for mmcr0_fc56 field using pmc1

The testcase uses event code 0x1001e to verify two bit settings (FC5-6
and PMC1CE) in Monitor Mode Control Register 0 (MMCR0). The FC5-6 bit is
expected to be set in MMCR0 when not using Performance Monitor Counters
5 and 6 (PMC5 and PMC6), and PMC1CE is expected to be set when using
PMC1. Test that these fields are programmed correctly via the perf
interface.

Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
[mpe: Add error checking, drop GET_MMCR_FIELD, add to .gitignore]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220127072012.662451-14-kjain@linux.ibm.com
2 years ago  selftests/powerpc/pmu/: Add interface test for mmcr0_pmcjce field
Athira Rajeev [Thu, 27 Jan 2022 07:20:04 +0000 (12:50 +0530)]
selftests/powerpc/pmu/: Add interface test for mmcr0_pmcjce field

The testcase uses event code 0x500fa ("instructions") to verify the
PMCjCE bit setting in Monitor Mode Control Register 0 (MMCR0). This bit
is expected to be set in MMCR0 when using Performance Monitor Counter
5 (PMC5). Checks if perf interface sets this bit correctly.

Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
[mpe: Add error checking, drop GET_MMCR_FIELD, add to .gitignore]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220127072012.662451-13-kjain@linux.ibm.com
2 years ago  selftests/powerpc/pmu/: Add interface test for mmcr0_pmccext bit
Athira Rajeev [Thu, 27 Jan 2022 07:20:03 +0000 (12:50 +0530)]
selftests/powerpc/pmu/: Add interface test for mmcr0_pmccext bit

The testcase uses cycles event to check the PMCCEXT bit setting in
Monitor Mode Control Register 0 (MMCR0). Check if perf interface sets
this control bit in MMCR0 on ISA v3.1 platform.

Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
[mpe: Add error checking, drop GET_MMCR_FIELD, add to .gitignore]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220127072012.662451-12-kjain@linux.ibm.com
2 years ago  selftests/powerpc/pmu/: Add interface test for mmcr0_cc56run field
Athira Rajeev [Thu, 27 Jan 2022 07:20:02 +0000 (12:50 +0530)]
selftests/powerpc/pmu/: Add interface test for mmcr0_cc56run field

The testcase uses event code 0x500fa ("instructions") to check the
CC56RUN bit setting in Monitor Mode Control Register 0 (MMCR0). On the
ISA v3.1 platform, this bit is expected to be set in MMCR0 when using
Performance Monitor Counters 5 and 6 (PMC5 and PMC6). Verify this is
done correctly by the perf interface.

The CC56RUN bit makes PMC5 and PMC6 count regardless of the run latch
state. This bit is set on power10 since PMC5 and PMC6 are used there for
counting instructions and cycles. Hence a check is added to skip this
test on other platforms.

Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
[mpe: Add error checking, drop GET_MMCR_FIELD, add to .gitignore]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220127072012.662451-11-kjain@linux.ibm.com
2 years ago  selftests/powerpc/pmu/: Add interface test for mmcr0 exception bits
Athira Rajeev [Thu, 27 Jan 2022 07:20:01 +0000 (12:50 +0530)]
selftests/powerpc/pmu/: Add interface test for mmcr0 exception bits

The testcase uses the "instructions" event to verify two bits (PMAE and
PMAO) in Monitor Mode Control Register 0 (MMCR0). At the time of the
interrupt, the PMAE bit (which enables the performance monitor
exception) is expected to be cleared and the PMAO bit (which indicates a
performance monitor alert) is expected to be set in MMCR0. The testcase
handles these checks.

Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
[mpe: Add error checking, drop GET_MMCR_FIELD, add to .gitignore]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220127072012.662451-10-kjain@linux.ibm.com
2 years ago  selftests/powerpc/pmu: Add macro to extract mmcr3 and mmcra fields
Kajol Jain [Thu, 27 Jan 2022 07:20:00 +0000 (12:50 +0530)]
selftests/powerpc/pmu: Add macro to extract mmcr3 and mmcra fields

Add macros and utility functions to fetch individual fields from the
Monitor Mode Control Register 3 (MMCR3) and Monitor Mode Control
Register A (MMCRA) PMU registers.

Signed-off-by: Kajol Jain <kjain@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220127072012.662451-9-kjain@linux.ibm.com
2 years ago  selftests/powerpc/pmu: Add macro to extract mmcr0/mmcr1 fields
Athira Rajeev [Thu, 27 Jan 2022 07:19:59 +0000 (12:49 +0530)]
selftests/powerpc/pmu: Add macro to extract mmcr0/mmcr1 fields

Add macros and utility functions to fetch individual fields from the
Monitor Mode Control Register 0 (MMCR0) and Monitor Mode Control
Register 1 (MMCR1) PMU registers.

Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220127072012.662451-8-kjain@linux.ibm.com
2 years ago  selftests/powerpc/pmu: Add macros to extract mmcr fields
Madhavan Srinivasan [Thu, 27 Jan 2022 07:19:58 +0000 (12:49 +0530)]
selftests/powerpc/pmu: Add macros to extract mmcr fields

Along with it, add macros and utility functions to fetch individual
fields from the Monitor Mode Control Register 2 (MMCR2) register.

Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220127072012.662451-7-kjain@linux.ibm.com
2 years ago  selftests/powerpc/pmu: Add event_init_sampling function
Madhavan Srinivasan [Thu, 27 Jan 2022 07:19:57 +0000 (12:49 +0530)]
selftests/powerpc/pmu: Add event_init_sampling function

Extended event_init_opts() to include initialization of sampling
testcases. Patch adds an event_init_sampling() wrapper to initialize
event attribute fields for sampling events. This includes initializing
sample period, sample type and event type.

Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220127072012.662451-6-kjain@linux.ibm.com
2 years ago  selftests/powerpc/pmu: Add utility functions to post process the mmap buffer
Kajol Jain [Thu, 27 Jan 2022 07:19:56 +0000 (12:49 +0530)]
selftests/powerpc/pmu: Add utility functions to post process the mmap buffer

Add a couple of basic utility functions to post-process the mmap
buffer: a function to read the total number of samples present in the
mmap buffer and a function to get the address of the first sample.

Add a function "get_intr_regs" which returns a pointer to the interrupt
registers present in the sample, in case the sample type
PERF_SAMPLE_REGS_INTR is set.

Add a function "get_reg_value" which can be used to read any interrupt
register value from a given sample.
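
A sketch of what such post-processing looks like (simplified; assumes the
ring-buffer layout documented in perf_event_open(2), that the buffer has not
wrapped, and that 'event->mmap_buffer' points at the mapped meta-data page as
in these selftests):

  struct perf_event_mmap_page *meta = event->mmap_buffer;
  char *data = (char *)meta + meta->data_offset;
  __u64 head = meta->data_head, offset = 0;
  int samples = 0;

  __sync_synchronize();                 /* pairs with the kernel's write barrier */
  while (offset < head) {
      struct perf_event_header *hdr = (struct perf_event_header *)(data + offset);

      if (hdr->type == PERF_RECORD_SAMPLE)
          samples++;
      offset += hdr->size;
  }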

Signed-off-by: Kajol Jain <kjain@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220127072012.662451-5-kjain@linux.ibm.com
2 years ago  selftests/powerpc/pmu: Add macros to parse event codes
Madhavan Srinivasan [Thu, 27 Jan 2022 07:19:55 +0000 (12:49 +0530)]
selftests/powerpc/pmu: Add macros to parse event codes

Each platform has a raw event encoding format which specifies the bit
positions of the different fields. The fields from the event code get
translated into performance monitoring mode control register (MMCRx)
settings. The patch adds macros to extract individual fields from the
event code.

Add functions for sanity checks, since the testcases are currently only
supported on power9 and power10.

Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com>
[mpe: Read PVR directly rather than using /proc/cpuinfo]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220127072012.662451-4-kjain@linux.ibm.com
2 years ago  selftests/powerpc/pmu: Add support for perf sampling tests
Athira Rajeev [Thu, 27 Jan 2022 07:19:54 +0000 (12:49 +0530)]
selftests/powerpc/pmu: Add support for perf sampling tests

Add support functions for enabling perf sampling test in a new folder
"sampling_tests" under "selftests/powerpc/pmu". This includes support
functions for allocating and processing the mmap buffer. These functions
are added/defined in "sampling_tests/misc.*" files.

Also updates the corresponding Makefiles in "selftests/powerpc" and
"sampling_tests" folder.

Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
[mpe: Drop unneeded bits from the Makefile]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220127072012.662451-3-kjain@linux.ibm.com
2 years ago  selftests/powerpc/pmu: Include mmap_buffer field as part of struct event
Athira Rajeev [Thu, 27 Jan 2022 07:19:53 +0000 (12:49 +0530)]
selftests/powerpc/pmu: Include mmap_buffer field as part of struct event

To enable the capturing of samples as part of a perf event, add a new
field "mmap_buffer" to "struct event". This field is a placeholder for
sample collection.

Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220127072012.662451-2-kjain@linux.ibm.com
2 years ago  powerpc/module_64: fix array_size.cocci warning
Guo Zhengkui [Wed, 23 Feb 2022 07:54:23 +0000 (15:54 +0800)]
powerpc/module_64: fix array_size.cocci warning

Fix the following coccicheck warning:
./arch/powerpc/kernel/module_64.c:432:40-41: WARNING: Use ARRAY_SIZE.

ARRAY_SIZE(arr) is a macro provided by the kernel. It makes sure that arr
is an array, so it's safer than sizeof(arr) / sizeof(arr[0]) and more
standard.
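
In other words (illustrative only; the array and its size below are
hypothetical, not the code at module_64.c:432):

  static u32 entries[NUM_ENTRIES];              /* hypothetical array */

  n = sizeof(entries) / sizeof(entries[0]);     /* before */
  n = ARRAY_SIZE(entries);                      /* after: fails to build if
                                                   'entries' is not an array */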

Signed-off-by: Guo Zhengkui <guozhengkui@vivo.com>
Reviewed-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220223075426.20939-1-guozhengkui@vivo.com
2 years ago  powerpc/64s/hash: Make hash faults work in NMI context
Nicholas Piggin [Fri, 4 Feb 2022 03:53:48 +0000 (13:53 +1000)]
powerpc/64s/hash: Make hash faults work in NMI context

Hash faults are not resolved in NMI context, instead causing the access
to fail. This is done because perf interrupts can get backtraces
including walking the user stack, and taking a hash fault on those could
deadlock on the HPTE lock if the perf interrupt hits while the same HPTE
lock is being held by the hash fault code. The user access for the stack
walking will notice the access failed and deal with that in the perf
code.

The reason to allow perf interrupts in is to better profile hash faults.

The problem with this is any hash fault on a kernel access that happens
in NMI context will crash, because kernel accesses must not fail.

Hard lockups, system reset, machine checks that access vmalloc space
including modules and including stack backtracing and symbol lookup in
modules, per-cpu data, etc could all run into this problem.

Fix this by disallowing perf interrupts in the hash fault code (the
direct hash fault is covered by MSR[EE]=0 so the PMI disable just needs
to extend to the preload case). This simplifies the tricky logic in hash
faults and perf, at the cost of reduced profiling of hash faults.

perf can still latch addresses when interrupts are disabled, it just
won't get the stack trace at that point, so it would still find hot
spots, just sometimes with confusing stack chains.

An alternative could be to allow perf interrupts here but always do the
slowpath stack walk if we are in NMI context. However, that slows down
all perf interrupt stack walking on hash and does not remove as much
tricky code.

Reported-by: Laurent Dufour <ldufour@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Tested-by: Laurent Dufour <ldufour@linux.ibm.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220204035348.545435-1-npiggin@gmail.com
2 years ago  powerpc: Remove remaining stab codes
Christophe Leroy [Tue, 22 Feb 2022 14:05:30 +0000 (15:05 +0100)]
powerpc: Remove remaining stab codes

Following commit 12318163737c ("powerpc/32: Remove remaining .stabs
annotations"), the stab codes are not used anymore.

Remove them.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/d8b33342d7454f6ca4f368f5206896558dfa06f4.1645538722.git.christophe.leroy@csgroup.eu
2 years ago  lkdtm: Add a test for function descriptors protection
Christophe Leroy [Tue, 15 Feb 2022 12:41:08 +0000 (13:41 +0100)]
lkdtm: Add a test for function descriptors protection

Add WRITE_OPD to check that you can't modify function
descriptors.

Gives the following result when function descriptors are
not protected:

lkdtm: Performing direct entry WRITE_OPD
lkdtm: attempting bad 16 bytes write at c00000000269b358
lkdtm: FAIL: survived bad write
lkdtm: do_nothing was hijacked!

It looks like a standard compiler barrier() is not enough to force
GCC to use the modified function descriptor. A fake empty inline
assembly had to be added to force GCC to reload the function descriptor.
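
One way such a forced reload can look (a hypothetical sketch, not necessarily
the exact LKDTM code): an empty asm statement that hides the function pointer
from GCC, so it can no longer assume the descriptor contents are known and
must re-read them at the call.

  void (*func)(void) = do_nothing;

  /* ... attempt the bad write to do_nothing's function descriptor ... */

  asm("" : "+r" (func));   /* hide the value so GCC must reload the descriptor */
  func();                  /* did the hijack take effect? */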

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/7eeba50d16a35e9d799820e43304150225f20197.1644928018.git.christophe.leroy@csgroup.eu
2 years ago  lkdtm: Fix execute_[user]_location()
Christophe Leroy [Tue, 15 Feb 2022 12:41:07 +0000 (13:41 +0100)]
lkdtm: Fix execute_[user]_location()

execute_location() and execute_user_location() intend
to copy do_nothing() text and execute it at a new location.
However, at the moment they don't copy the do_nothing() function
but the do_nothing() function descriptor, which still points to the
original text. So in the end they still execute do_nothing() at
its original location, although through a copied function descriptor.

Fix that by really copying the do_nothing() text and building a new
function descriptor: copy the do_nothing() function descriptor and
update the target address with the new location.

Also fix the displayed addresses by dereferencing the do_nothing()
function descriptor.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/4055839683d8d643cd99be121f4767c7c611b970.1644928018.git.christophe.leroy@csgroup.eu
2 years ago  lkdtm: Really write into kernel text in WRITE_KERN
Christophe Leroy [Tue, 15 Feb 2022 12:41:06 +0000 (13:41 +0100)]
lkdtm: Really write into kernel text in WRITE_KERN

WRITE_KERN is supposed to overwrite some kernel text, namely the
do_overwritten() function.

But at the moment it overwrites the do_overwritten() function
descriptor, not the function text.

Fix it by dereferencing the function descriptor to obtain the
function text pointer. Export dereference_function_descriptor()
for when LKDTM is built as a module.

And make do_overwritten() noinline so that it is really
do_overwritten() which is called by lkdtm_WRITE_KERN().
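
The dereference step looks roughly like this (a sketch; 'copy_size' and
'new_insns' are placeholders, not the actual LKDTM variables):

  /* On ABIs with descriptors, &do_overwritten is the descriptor address;
   * the text address has to be obtained through the descriptor. */
  void *text = dereference_function_descriptor((void *)do_overwritten);

  pr_info("attempting bad %zu byte write at %px\n", copy_size, text);
  memcpy(text, new_insns, copy_size);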

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/31e58eaffb5bc51c07d8d4891d1982100ade8cfc.1644928018.git.christophe.leroy@csgroup.eu
2 years ago  lkdtm: Force do_nothing() out of line
Christophe Leroy [Tue, 15 Feb 2022 12:41:05 +0000 (13:41 +0100)]
lkdtm: Force do_nothing() out of line

LKDTM tests display that they run do_nothing() at a given
address, but in reality do_nothing() is inlined into the
caller.

Force it out of line so that the test really runs the text at the
displayed address.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/a5dcf4d2088e6aca47ab3b4c6d5c0f7fa064e25a.1644928018.git.christophe.leroy@csgroup.eu
2 years ago  asm-generic: Refactor dereference_[kernel]_function_descriptor()
Christophe Leroy [Tue, 15 Feb 2022 12:41:04 +0000 (13:41 +0100)]
asm-generic: Refactor dereference_[kernel]_function_descriptor()

dereference_function_descriptor() and
dereference_kernel_function_descriptor() are identical on the
three architectures implementing them.

Make them common and put them out-of-line in kernel/extable.c
which is one of the users and has similar type of functions.
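
The now-shared helper looks roughly like this (a sketch of the common
out-of-line version; the exact guards in kernel/extable.c may differ):

  void *dereference_function_descriptor(void *ptr)
  {
      func_desc_t *desc = ptr;
      void *p;

      /* read the text address out of the descriptor, tolerating bad pointers */
      if (!get_kernel_nofault(p, (void *)&desc->addr))
          ptr = p;
      return ptr;
  }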

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Helge Deller <deller@gmx.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/449db09b2eba57f4ab05f80102a67d8675bc8bcd.1644928018.git.christophe.leroy@csgroup.eu
2 years ago  asm-generic: Define 'func_desc_t' to commonly describe function descriptors
Christophe Leroy [Tue, 15 Feb 2022 12:41:03 +0000 (13:41 +0100)]
asm-generic: Define 'func_desc_t' to commonly describe function descriptors

We have three architectures using function descriptors, each with its
own type and name.

Add a common typedef that can be used in generic code.

Also add a stub typedef for architectures without function descriptors,
to avoid a forest of #ifdefs.

It replaces the similar 'func_desc_t' previously defined in
arch/powerpc/kernel/module_64.c.
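
On architectures with descriptors the typedef aliases the arch's own
descriptor structure; elsewhere the generic stub is used, roughly
(assumed shape):

  #ifndef CONFIG_HAVE_FUNCTION_DESCRIPTORS
  /* stub so generic code can always name the type */
  typedef struct {
          unsigned long addr;
  } func_desc_t;
  #endif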

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Helge Deller <deller@gmx.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/f1f91b142b3c1082bdc1586ce71c9bac1e75213c.1644928018.git.christophe.leroy@csgroup.eu
2 years agoasm-generic: Define CONFIG_HAVE_FUNCTION_DESCRIPTORS
Christophe Leroy [Tue, 15 Feb 2022 12:41:02 +0000 (13:41 +0100)]
asm-generic: Define CONFIG_HAVE_FUNCTION_DESCRIPTORS

Replace HAVE_DEREFERENCE_FUNCTION_DESCRIPTOR by a config option
named CONFIG_HAVE_FUNCTION_DESCRIPTORS and use it instead of the
'dereference_function_descriptor' macro to know whether an
arch has function descriptors.

To limit churn in one of the following patches, use
an #ifdef/#else construct with an empty first part
instead of an #ifndef in asm-generic/sections.h.

On powerpc, make sure the config option matches the ABI used
by the compiler with a BUILD_BUG_ON() and add missing _CALL_ELF=2
when calling 'sparse' so that sparse sees the same piece of
code as GCC.

And include a helper to check whether an arch has function
descriptors or not: have_function_descriptors().
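
The helper is expected to boil down to something like:

  static inline bool have_function_descriptors(void)
  {
          return IS_ENABLED(CONFIG_HAVE_FUNCTION_DESCRIPTORS);
  }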

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Acked-by: Helge Deller <deller@gmx.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/4a0f11fb0ea74a3197bc44dd7ba25e53a24fd03d.1644928018.git.christophe.leroy@csgroup.eu
2 years agoia64: Rename 'ip' to 'addr' in 'struct fdesc'
Christophe Leroy [Tue, 15 Feb 2022 12:41:01 +0000 (13:41 +0100)]
ia64: Rename 'ip' to 'addr' in 'struct fdesc'

There are three architectures with function descriptors; try to
use common names for the address they contain so that some
functions can later be refactored into generic ones.

powerpc has 'entry'
ia64 has 'ip'
parisc has 'addr'

Vote for 'addr' and update 'struct fdesc' accordingly.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/65b73ac614e4c002c5819d40b42f6f426d2ee52b.1644928018.git.christophe.leroy@csgroup.eu
2 years agopowerpc: Prepare func_desc_t for refactorisation
Christophe Leroy [Tue, 15 Feb 2022 12:41:00 +0000 (13:41 +0100)]
powerpc: Prepare func_desc_t for refactorisation

In preparation for making func_desc_t generic, change the ELFv2
version to a struct containing an 'addr' element.

This allows using single helpers common to ELFv1 and ELFv2 and
reduces the amount of #ifdefs.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/5c36105e08b27b98450535bff48d71b690c19739.1644928018.git.christophe.leroy@csgroup.eu
2 years agopowerpc: Remove 'struct ppc64_opd_entry'
Christophe Leroy [Tue, 15 Feb 2022 12:40:59 +0000 (13:40 +0100)]
powerpc: Remove 'struct ppc64_opd_entry'

'struct ppc64_opd_entry' doesn't belong in uapi/asm/elf.h.

It was initially in module_64.c and commit 2d291e902791 ("Fix compile
failure with non modular builds") moved it into asm/elf.h

But it was added outside of the __KERNEL__ section by mistake,
therefore commit c3617f72036c ("UAPI: (Scripted) Disintegrate
arch/powerpc/include/asm") moved it to uapi/asm/elf.h.

Now that it is not used anymore by the kernel, remove it.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/c309ccee65ec2e3802df7a7fe761d0a298584809.1644928018.git.christophe.leroy@csgroup.eu
2 years agopowerpc: Use 'struct func_desc' instead of 'struct ppc64_opd_entry'
Christophe Leroy [Tue, 15 Feb 2022 12:40:58 +0000 (13:40 +0100)]
powerpc: Use 'struct func_desc' instead of 'struct ppc64_opd_entry'

'struct ppc64_opd_entry' is somewhat redundant with 'struct func_desc';
the latter is more correct/complete as it includes the third
field, which is unused.

So use 'struct func_desc' instead of 'struct ppc64_opd_entry'.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Daniel Axtens <dja@axtens.net>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/34e76bac6cbe95a63ecd37df69fb7feb93b0ea7c.1644928018.git.christophe.leroy@csgroup.eu
2 years agopowerpc: Move and rename func_descr_t
Christophe Leroy [Tue, 15 Feb 2022 12:40:57 +0000 (13:40 +0100)]
powerpc: Move and rename func_descr_t

There are three architectures with function descriptors; try to
use common names for the address they contain so that some
functions can later be refactored into generic ones.

powerpc has 'entry'
ia64 has 'ip'
parisc has 'addr'

Vote for 'addr' and update 'func_descr_t' accordingly.

Move it to asm/elf.h to have it at the same place on all
three architectures, remove the typedef which hides its real
type, and change it to a smoother name: 'struct func_desc'.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/529b2ba1d001e8f628ef0d30e8044c9b3d0a4921.1644928018.git.christophe.leroy@csgroup.eu
2 years agopowerpc: Fix 'sparse' checking on PPC64le
Christophe Leroy [Tue, 15 Feb 2022 12:40:56 +0000 (13:40 +0100)]
powerpc: Fix 'sparse' checking on PPC64le

'sparse' is architecture agnostic and knows nothing about ELF ABI
version.

Just like it gets the arch, powerpc type and endianness from the
Makefile, it also needs to get _CALL_ELF from there, otherwise it won't
set the PPC64_ELF_ABI_v2 macro for PPC64le and won't check the correct code.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/ac1312f2451aa558bb2a8806b4d0aa2020f0c176.1644928018.git.christophe.leroy@csgroup.eu
2 years agopowerpc/papr_scm: Implement initial support for injecting smart errors
Vaibhav Jain [Mon, 24 Jan 2022 20:22:04 +0000 (01:52 +0530)]
powerpc/papr_scm: Implement initial support for injecting smart errors

Presently PAPR doesn't support injecting smart errors on an
NVDIMM. This makes testing the NVDIMM health reporting functionality
difficult, as simulating NVDIMM health related events needs a hacked-up
qemu version.

To solve this problem this patch proposes simulating a certain set of
NVDIMM health related events in papr_scm, specifically the 'fatal' health
state and the 'dirty' shutdown state. These errors can be injected via the
user-space 'ndctl-inject-smart(1)' command. With the proposed patch and
corresponding ndctl patches, the following command flow is expected:

$ sudo ndctl list -DH -d nmem0
...
      "health_state":"ok",
      "shutdown_state":"clean",
...
 # inject unsafe shutdown and fatal health error
$ sudo ndctl inject-smart nmem0 -Uf
...
      "health_state":"fatal",
      "shutdown_state":"dirty",
...
 # uninject all errors
$ sudo ndctl inject-smart nmem0 -N
...
      "health_state":"ok",
      "shutdown_state":"clean",
...

The patch adds a new member 'health_bitmap_inject_mask' inside struct
papr_scm_priv which is then bitwise ANDed with the health bitmap fetched from
the hypervisor. The value for 'health_bitmap_inject_mask' is accessible from
sysfs at nmemX/papr/health_bitmap_inject.

A new PDSM named 'SMART_INJECT' is proposed that accepts the newly
introduced 'struct nd_papr_pdsm_smart_inject' as the payload that is
exchanged between libndctl and papr_scm to indicate the requested
smart-error states.

When processing the PDSM 'SMART_INJECT', papr_pdsm_smart_inject()
constructs a pair of 'inject_mask' and 'clear_mask' bitmaps from the payload
and bit-blts them into the 'health_bitmap_inject_mask'. This ensures that,
after being fetched from the hypervisor, the health_bitmap reflects the
requested smart-error states.

Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com>
Signed-off-by: Shivaprasad G Bhat <sbhat@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220124202204.1488346-1-vaibhav@linux.ibm.com
2 years agopowerpc/ftrace: Style cleanup in ftrace_mprofile.S
Christophe Leroy [Tue, 15 Feb 2022 18:31:25 +0000 (19:31 +0100)]
powerpc/ftrace: Style cleanup in ftrace_mprofile.S

Add some line breaks to better match the file's style, add
some space after commas and fix a couple of misplaced blanks.

Suggested-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/973506292d0c7b05c06530c8e11803ce38e5eda2.1644949750.git.christophe.leroy@csgroup.eu
2 years agopowerpc/ftrace: Have arch_ftrace_get_regs() return NULL unless FL_SAVE_REGS is set
Christophe Leroy [Tue, 15 Feb 2022 18:31:24 +0000 (19:31 +0100)]
powerpc/ftrace: Have arch_ftrace_get_regs() return NULL unless FL_SAVE_REGS is set

When FL_SAVE_REGS is not set we get here via ftrace_caller()
which doesn't save all registers.

ftrace_caller() explicitly clears regs.msr, so we can rely
on it to know where we came from. We don't expect the MSR
register to ever be 0 when ftrace is involved.
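
A sketch of the resulting check (assumed shape): a real saved MSR is never
0, so a zero regs.msr identifies the partial-regs ftrace_caller() path:

  static __always_inline struct pt_regs *arch_ftrace_get_regs(struct ftrace_regs *fregs)
  {
          /* only the full-regs caller saves a complete pt_regs */
          return fregs->regs.msr ? &fregs->regs : NULL;
  }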

Fixes: 40b035efe288 ("powerpc/ftrace: Implement CONFIG_DYNAMIC_FTRACE_WITH_ARGS")
Reported-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/2f9a7e898c93cc7438ef5ccd47cb9c3a9c5b53ef.1644949750.git.christophe.leroy@csgroup.eu
2 years agopowerpc/ftrace: Add recursion protection in prepare_ftrace_return()
Christophe Leroy [Tue, 15 Feb 2022 18:31:23 +0000 (19:31 +0100)]
powerpc/ftrace: Add recursion protection in prepare_ftrace_return()

function_graph_enter() does not provide any recursion protection.

Add a protection in prepare_ftrace_return() in case
function_graph_enter() calls something that gets
function graph traced.
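
A sketch of where the protection goes, using the generic recursion helpers
(the signature and surrounding code are assumed, not the exact patch):

  unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip,
                                      unsigned long sp)
  {
          int bit;

          bit = ftrace_test_recursion_trylock(ip, parent);
          if (bit < 0)
                  return parent;          /* bail out instead of recursing */

          if (!function_graph_enter(parent, ip, 0, (unsigned long *)sp))
                  parent = (unsigned long)&return_to_handler;  /* simplified */

          ftrace_test_recursion_unlock(bit);
          return parent;
  }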

Fixes: 830213786c49 ("powerpc/ftrace: directly call of function graph tracer by ftrace caller")
Reported-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/74edf2ff0a60e66b0d9225a137100a86a0557032.1644949750.git.christophe.leroy@csgroup.eu
2 years agopowerpc/ftrace: Also save r1 in ftrace_caller()
Christophe Leroy [Tue, 15 Feb 2022 18:31:22 +0000 (19:31 +0100)]
powerpc/ftrace: Also save r1 in ftrace_caller()

Also save r1 in ftrace_caller()

r1 is needed during unwinding when the function_graph tracer
is active.

Fixes: 830213786c49 ("powerpc/ftrace: directly call of function graph tracer by ftrace caller")
Reported-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/ff535e86d3a69376a6d89168511d4e403835f18b.1644949750.git.christophe.leroy@csgroup.eu
2 years agopowerpc/boot: Add `otheros-too-big.bld` to .gitignore
Paul Menzel [Mon, 14 Feb 2022 06:55:43 +0000 (07:55 +0100)]
powerpc/boot: Add `otheros-too-big.bld` to .gitignore

Currently, `git status` lists the file as untracked by git, so tell git
to ignore it.

Fixes: aa3bc365ee73 ("powerpc/ps3: Add check for otheros image size")
Signed-off-by: Paul Menzel <pmenzel@molgen.mpg.de>
Acked-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220214065543.198992-1-pmenzel@molgen.mpg.de
2 years agopowerpc: Don't allow the use of EMIT_BUG_ENTRY with BUGFLAG_WARNING
Christophe Leroy [Sun, 13 Feb 2022 09:02:41 +0000 (10:02 +0100)]
powerpc: Don't allow the use of EMIT_BUG_ENTRY with BUGFLAG_WARNING

Warnings in assembly must use EMIT_WARN_ENTRY in order to generate
the necessary entry in the exception table.

Check in EMIT_BUG_ENTRY that flags don't include BUGFLAG_WARNING.

This change avoids problems like the one fixed by
commit fd1eaaaaa686 ("powerpc/64s: Use EMIT_WARN_ENTRY for SRR debug
warnings").

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/ddcb422102a37eb45f57694c7ef0ec6187964dff.1644742951.git.christophe.leroy@csgroup.eu
2 years agopowerpc: Fix STACKTRACE=n build
Michael Ellerman [Fri, 11 Feb 2022 06:32:37 +0000 (17:32 +1100)]
powerpc: Fix STACKTRACE=n build

Our skiroot_defconfig doesn't enable FTRACE, and so doesn't get
STACKTRACE enabled either. That leads to a build failure since commit
1614b2b11fab ("arch: Make ARCH_STACKWALK independent of STACKTRACE")
made stacktrace.c build even when STACKTRACE=n.

  arch/powerpc/kernel/stacktrace.c: In function ‘handle_backtrace_ipi’:
  arch/powerpc/kernel/stacktrace.c:171:2: error: implicit declaration of function ‘nmi_cpu_backtrace’
    171 |  nmi_cpu_backtrace(regs);
        |  ^~~~~~~~~~~~~~~~~
  arch/powerpc/kernel/stacktrace.c: In function ‘arch_trigger_cpumask_backtrace’:
  arch/powerpc/kernel/stacktrace.c:226:2: error: implicit declaration of function ‘nmi_trigger_cpumask_backtrace’
    226 |  nmi_trigger_cpumask_backtrace(mask, exclude_self, raise_backtrace_ipi);
        |  ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This happens because our headers haven't defined
arch_trigger_cpumask_backtrace, which causes lib/nmi_backtrace.c not to
build nmi_cpu_backtrace().

The code in question doesn't actually depend on STACKTRACE=y; that
dependency was only added because arch_trigger_cpumask_backtrace() lived
in stacktrace.c for convenience. So drop the dependency on
CONFIG_STACKTRACE, which causes lib/nmi_backtrace.c to build
nmi_cpu_backtrace() etc. and fixes the build.

Fixes: 1614b2b11fab ("arch: Make ARCH_STACKWALK independent of STACKTRACE")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220212111349.2806972-1-mpe@ellerman.id.au
2 years agopowerpc/mm: Update default hugetlb size early
Aneesh Kumar K.V [Fri, 11 Feb 2022 06:52:15 +0000 (12:22 +0530)]
powerpc/mm: Update default hugetlb size early

Commit d9c234005227 ("Do not depend on MAX_ORDER when grouping pages by
mobility") introduced pageblock_order, which will be used to group pages
better. The kernel now groups pages based on the value of HPAGE_SHIFT.
Hence HPAGE_SHIFT should be set before we call set_pageblock_order().

set_pageblock_order() happens early in boot and the default hugetlb page
size should be initialized before that to compute the right pageblock_order
value.

Currently, the default hugetlb page size is set via an arch_initcall, which
happens late in boot, as shown by the call stack below:

[c000000007383b10] [c000000001289328] hugetlbpage_init+0x2b8/0x2f8
[c000000007383bc0] [c0000000012749e4] do_one_initcall+0x14c/0x320
[c000000007383c90] [c00000000127505c] kernel_init_freeable+0x410/0x4e8
[c000000007383da0] [c000000000012664] kernel_init+0x30/0x15c
[c000000007383e10] [c00000000000cf14] ret_from_kernel_thread+0x5c/0x64

and the pageblock_order initialization is done early during the boot.

[c0000000018bfc80] [c0000000012ae120] set_pageblock_order+0x50/0x64
[c0000000018bfca0] [c0000000012b3d94] sparse_init+0x188/0x268
[c0000000018bfd60] [c000000001288bfc] initmem_init+0x28c/0x328
[c0000000018bfe50] [c00000000127b370] setup_arch+0x410/0x480
[c0000000018bfed0] [c00000000127401c] start_kernel+0xb8/0x934
[c0000000018bff90] [c00000000000d984] start_here_common+0x1c/0x98

Delaying the default hugetlb page size initialization means the kernel will
initialize pageblock_order to (MAX_ORDER - 1), which is not an optimal
value for mobility grouping. IIUC we always had this issue, but it was not
a problem for hash translation mode because (MAX_ORDER - 1) is the same as
HUGETLB_PAGE_ORDER (8) in the case of hash (16MB). With radix,
HUGETLB_PAGE_ORDER will be 5 (2M size) and hence pageblock_order should be
5 instead of 8.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220211065215.101767-1-aneesh.kumar@linux.ibm.com
2 years agoselftests/powerpc/copyloops: Add memmove_64 test
Ritesh Harjani [Mon, 13 Sep 2021 06:17:20 +0000 (11:47 +0530)]
selftests/powerpc/copyloops: Add memmove_64 test

While debugging an issue, we wanted to check whether the arch-specific
kernel memmove implementation is correct.
This selftest could help test that.

Suggested-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Suggested-by: Vaibhav Jain <vaibhav@linux.ibm.com>
Signed-off-by: Ritesh Harjani <riteshh@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/57242c1fe7aba6b7f0fcd0490303bfd5f222ee00.1631512686.git.riteshh@linux.ibm.com
2 years agopowerpc/pseries: make pseries_devicetree_update() static
Nathan Lynch [Mon, 7 Feb 2022 22:12:47 +0000 (16:12 -0600)]
powerpc/pseries: make pseries_devicetree_update() static

pseries_devicetree_update() has only one call site, in the same file in
which it is defined. Make it static.

Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220207221247.354454-1-nathanl@linux.ibm.com
2 years agopowerpc/vdso: Move cvdso_call macro into gettimeofday.S
Christophe Leroy [Fri, 21 Jan 2022 16:30:34 +0000 (16:30 +0000)]
powerpc/vdso: Move cvdso_call macro into gettimeofday.S

Now that gettimeofday.S is unique, move cvdso_call macro
into that file which is the only user.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/72720359d4c58e3a3b96dd74952741225faac3de.1642782130.git.christophe.leroy@csgroup.eu
2 years agopowerpc/vdso: Remove cvdso_call_time macro
Christophe Leroy [Fri, 21 Jan 2022 16:30:30 +0000 (16:30 +0000)]
powerpc/vdso: Remove cvdso_call_time macro

The cvdso_call_time macro is very similar to the cvdso_call macro.

Add a call_time argument to cvdso_call, which is 0 by default
and set to 1 when using cvdso_call to call __c_kernel_time().

Return the returned value as is, with CR[SO] cleared, when it is
used for time().

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/837a260ad86fc1ce297a562c2117fd69be5f7b5c.1642782130.git.christophe.leroy@csgroup.eu
2 years agopowerpc/vdso: Merge vdso64 and vdso32 into a single directory
Christophe Leroy [Fri, 21 Jan 2022 16:30:27 +0000 (16:30 +0000)]
powerpc/vdso: Merge vdso64 and vdso32 into a single directory

Merge vdso64 into vdso32 and rename it 'vdso'.

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/4dbe05cc130f6a0858d09ac72e436c373cb08b70.1642782130.git.christophe.leroy@csgroup.eu
2 years agopowerpc/vdso: Rework VDSO32 makefile to add a prefix to object files
Christophe Leroy [Fri, 21 Jan 2022 16:30:23 +0000 (16:30 +0000)]
powerpc/vdso: Rework VDSO32 makefile to add a prefix to object files

In order to merge the vdso32 and vdso64 builds in the following patch,
rework the Makefile in order to add a -32 suffix to VDSO32 object files.

Also change sigtramp.S to sigtramp32.S, as the VDSO64 sigtramp.S is too
different to be squashed into the VDSO32 sigtramp.S in the first place.

gen_vdso_offsets.sh also becomes gen_vdso32_offsets.sh.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/0c421b704a57b228e75a891512568339c53667ad.1642782130.git.christophe.leroy@csgroup.eu
2 years agopowerpc/vdso: augment VDSO32 functions to support 64 bits build
Christophe Leroy [Fri, 21 Jan 2022 16:30:21 +0000 (16:30 +0000)]
powerpc/vdso: augment VDSO32 functions to support 64 bits build

The VDSO64 cacheflush.S, datapage.S, gettimeofday.S and vgettimeofday.c
are very similar to their VDSO32 counterparts.

The VDSO32 counterparts are already more complete than the VDSO64 versions,
as they support both the PPC32 VDSO and the 32-bit VDSO for PPC64.

Use compat macros wherever necessary in the PPC32 files
so that they can also be used to build VDSO64.

vdso64/note.S is already a link to vdso32/note.S, so
no change is required.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/c2cbb8f046b7efc251053521dc39b752795e26b7.1642782130.git.christophe.leroy@csgroup.eu
2 years agopowerpc/lib/sstep: use truncate_if_32bit()
Christophe Leroy [Fri, 21 Jan 2022 08:06:38 +0000 (08:06 +0000)]
powerpc/lib/sstep: use truncate_if_32bit()

Use truncate_if_32bit() when possible instead of open coding.

truncate_if_32bit() returns an unsigned long, so don't use it when
a signed value is expected.
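
For reference, truncate_if_32bit() is roughly:

  static unsigned long truncate_if_32bit(unsigned long msr, unsigned long val)
  {
          if ((msr & MSR_64BIT) == 0)
                  val &= 0xffffffffUL;     /* 32-bit mode: keep the low word */
          return val;
  }

which is why it only fits where an unsigned result is wanted.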

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/7e1c07123f13156d4a27991a2e2694fb584bc068.1642752375.git.christophe.leroy@csgroup.eu
2 years agopowerpc/lib/sstep: Remove unneeded #ifdef __powerpc64__
Christophe Leroy [Fri, 21 Jan 2022 08:06:32 +0000 (08:06 +0000)]
powerpc/lib/sstep: Remove unneeded #ifdef __powerpc64__

MSR_64BIT is always defined; there is no need to hide code using MSR_64BIT
inside an #ifdef __powerpc64__.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/ee61b693bc7e046eed1abb7a34909eb4878a9442.1642752375.git.christophe.leroy@csgroup.eu
2 years agopowerpc/lib/sstep: Use l1_dcache_bytes() instead of opencoding
Christophe Leroy [Fri, 21 Jan 2022 08:06:27 +0000 (08:06 +0000)]
powerpc/lib/sstep: Use l1_dcache_bytes() instead of opencoding

Don't open code the dcache size retrieval based on whether the platform
is ppc32 or ppc64.

Use l1_dcache_bytes() instead.
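
The substitution is along these lines (sketch; the open-coded version shown
here is approximate):

  /* before (approximate): */
  #ifdef __powerpc64__
          size = ppc64_caches.l1d.block_size;
  #else
          size = L1_CACHE_BYTES;
  #endif

  /* after: */
          size = l1_dcache_bytes();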

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/6c608fd4795e2d8ea1a0a449405a0087f76d8bb3.1642752375.git.christophe.leroy@csgroup.eu
2 years agopowerpc: Use the newly added is_tsk_32bit_task() macro
Christophe Leroy [Fri, 21 Jan 2022 07:58:47 +0000 (07:58 +0000)]
powerpc: Use the newly added is_tsk_32bit_task() macro

Two places deserve to use the is_tsk_32bit_task() macro added by
commit 252745240ba0 ("powerpc/audit: Fix syscall_get_arch()").

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/7304a889dbe885aefad8a8333673c81ee4b8f7a6.1642751874.git.christophe.leroy@csgroup.eu
2 years agopowerpc/32s: Enable STRICT_MODULE_RWX for the 603 core
Christophe Leroy [Mon, 17 Jan 2022 10:06:39 +0000 (10:06 +0000)]
powerpc/32s: Enable STRICT_MODULE_RWX for the 603 core

The book3s/32 MMU doesn't support per page execution protection and
doesn't support RO protection for kernel pages.

However, on the 603, which implements software-loaded TLBs, execution
protection is honored by the TLB miss handler, which doesn't load
the Instruction TLB for non-executable pages. And RO protection is
honored by clearing the C bit for RO pages, leading to a DSI.

So on the 603, STRICT_MODULE_RWX is possible without much effort.
Don't disable STRICT_MODULE_RWX on book3s/32 and print a warning
in case STRICT_MODULE_RWX has been selected and the platform has
a hardware HASH MMU.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1e6162f334167e75f1140082932e3a354b16daba.1642413973.git.christophe.leroy@csgroup.eu
2 years agopowerpc/bpf: Always reallocate BPF_REG_5, BPF_REG_AX and TMP_REG when possible
Christophe Leroy [Mon, 10 Jan 2022 12:29:42 +0000 (12:29 +0000)]
powerpc/bpf: Always reallocate BPF_REG_5, BPF_REG_AX and TMP_REG when possible

BPF_REG_5, BPF_REG_AX and TMP_REG are mapped to non-volatile registers
because there are not enough volatile registers, but they don't need
to be preserved on function calls.

So when some volatile registers become available, those registers can
always be reallocated regardless of whether SEEN_FUNC is set or not.

Suggested-by: Naveen N. Rao <naveen.n.rao@linux.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/b04c246874b716911139c04bc004b3b14eed07ef.1641817763.git.christophe.leroy@csgroup.eu
2 years agopowerpc: Add set_memory_{p/np}() and remove set_memory_attr()
Christophe Leroy [Fri, 24 Dec 2021 11:07:40 +0000 (11:07 +0000)]
powerpc: Add set_memory_{p/np}() and remove set_memory_attr()

set_memory_attr() was implemented by commit 4d1755b6a762 ("powerpc/mm:
implement set_memory_attr()") because the set_memory_xx() functions couldn't
be used at that time to modify memory "on the fly", as explained in
the commit.

But set_memory_attr() uses set_pte_at(), which leads to warnings when
CONFIG_DEBUG_VM is selected, because set_pte_at() is not meant for
updating existing page table entries.

The check could be bypassed by using __set_pte_at() instead,
as it was the case before commit c988cfd38e48 ("powerpc/32:
use set_memory_attr()") but since commit 9f7853d7609d ("powerpc/mm:
Fix set_memory_*() against concurrent accesses") it is now possible
to use set_memory_xx() functions to update page table entries
"on the fly" because the update is now atomic.

For DEBUG_PAGEALLOC we need to clear and set back _PAGE_PRESENT.
Add set_memory_np() and set_memory_p() for that.

Replace all uses of set_memory_attr() by the relevant set_memory_xx()
and remove set_memory_attr().
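
A sketch of the DEBUG_PAGEALLOC usage this enables (assumed shape):

  void __kernel_map_pages(struct page *page, int numpages, int enable)
  {
          unsigned long addr = (unsigned long)page_address(page);

          if (enable)
                  set_memory_p(addr, numpages);
          else
                  set_memory_np(addr, numpages);
  }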

Fixes: c988cfd38e48 ("powerpc/32: use set_memory_attr()")
Cc: stable@vger.kernel.org
Reported-by: Maxime Bizon <mbizon@freebox.fr>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Tested-by: Maxime Bizon <mbizon@freebox.fr>
Reviewed-by: Russell Currey <ruscur@russell.cc>
Depends-on: 9f7853d7609d ("powerpc/mm: Fix set_memory_*() against concurrent accesses")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/cda2b44b55c96f9ac69fa92e68c01084ec9495c5.1640344012.git.christophe.leroy@csgroup.eu
2 years agopowerpc/set_memory: Avoid spinlock recursion in change_page_attr()
Christophe Leroy [Fri, 24 Dec 2021 11:07:33 +0000 (11:07 +0000)]
powerpc/set_memory: Avoid spinlock recursion in change_page_attr()

Commit 1f9ad21c3b38 ("powerpc/mm: Implement set_memory() routines")
included a spin_lock() to change_page_attr() in order to
safely perform the three step operations. But then
commit 9f7853d7609d ("powerpc/mm: Fix set_memory_*() against
concurrent accesses") modify it to use pte_update() and do
the operation safely against concurrent access.

In the meantime, Maxime reported some spinlock recursion.

[   15.351649] BUG: spinlock recursion on CPU#0, kworker/0:2/217
[   15.357540]  lock: init_mm+0x3c/0x420, .magic: dead4ead, .owner: kworker/0:2/217, .owner_cpu: 0
[   15.366563] CPU: 0 PID: 217 Comm: kworker/0:2 Not tainted 5.15.0+ #523
[   15.373350] Workqueue: events do_free_init
[   15.377615] Call Trace:
[   15.380232] [e4105ac0] [800946a4] do_raw_spin_lock+0xf8/0x120 (unreliable)
[   15.387340] [e4105ae0] [8001f4ec] change_page_attr+0x40/0x1d4
[   15.393413] [e4105b10] [801424e0] __apply_to_page_range+0x164/0x310
[   15.400009] [e4105b60] [80169620] free_pcp_prepare+0x1e4/0x4a0
[   15.406045] [e4105ba0] [8016c5a0] free_unref_page+0x40/0x2b8
[   15.411979] [e4105be0] [8018724c] kasan_depopulate_vmalloc_pte+0x6c/0x94
[   15.418989] [e4105c00] [801424e0] __apply_to_page_range+0x164/0x310
[   15.425451] [e4105c50] [80187834] kasan_release_vmalloc+0xbc/0x134
[   15.431898] [e4105c70] [8015f7a8] __purge_vmap_area_lazy+0x4e4/0xdd8
[   15.438560] [e4105d30] [80160d10] _vm_unmap_aliases.part.0+0x17c/0x24c
[   15.445283] [e4105d60] [801642d0] __vunmap+0x2f0/0x5c8
[   15.450684] [e4105db0] [800e32d0] do_free_init+0x68/0x94
[   15.456181] [e4105dd0] [8005d094] process_one_work+0x4bc/0x7b8
[   15.462283] [e4105e90] [8005d614] worker_thread+0x284/0x6e8
[   15.468227] [e4105f00] [8006aaec] kthread+0x1f0/0x210
[   15.473489] [e4105f40] [80017148] ret_from_kernel_thread+0x14/0x1c

Remove the read / modify / write sequence to make the operation atomic
and remove the spin_lock() in change_page_attr().

To do the operation atomically, we can't use the pte modification helpers
anymore. Because all platforms have different combinations of bits, it
is not easy to use those bits directly. But all have the
_PAGE_KERNEL_{RO/ROX/RW/RWX} set of flags. All we need is to compare
two sets to know which bits are set or cleared.

For instance, by comparing _PAGE_KERNEL_ROX and _PAGE_KERNEL_RO you
know which bit gets cleared and which bit gets set when changing exec
permission.
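
A sketch of the idea (helper name and exact shape assumed): derive a clear
mask and a set mask from the two flag sets and apply both in a single
atomic pte_update():

  static pte_basic_t pte_update_delta(pte_t *ptep, unsigned long addr,
                                      unsigned long old, unsigned long new)
  {
          /* clear bits present in 'old' but not 'new', set the opposite */
          return pte_update(&init_mm, addr, ptep, old & ~new, new & ~old, 0);
  }

  /* e.g. dropping exec permission: */
  pte_update_delta(ptep, addr, _PAGE_KERNEL_ROX, _PAGE_KERNEL_RO);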

Reported-by: Maxime Bizon <mbizon@freebox.fr>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/all/20211212112152.GA27070@sakura/
Link: https://lore.kernel.org/r/43c3c76a1175ae6dc1a3d3b5c3f7ecb48f683eea.1640344012.git.christophe.leroy@csgroup.eu
2 years agopowerpc/ftrace: Remove ftrace_32.S
Christophe Leroy [Mon, 20 Dec 2021 16:38:44 +0000 (16:38 +0000)]
powerpc/ftrace: Remove ftrace_32.S

Functions in ftrace_32.S are common with PPC64.

Reuse the ones defined for PPC64 with slight modification
when required.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
[mpe: Squash in fixup diff from Christophe]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/5e837fc190504c4ef834272e70d60ae33f175d49.1640017960.git.christophe.leroy@csgroup.eu
2 years agopowerpc/ftrace: Prepare ftrace_64_mprofile.S for reuse by PPC32
Christophe Leroy [Mon, 20 Dec 2021 16:38:40 +0000 (16:38 +0000)]
powerpc/ftrace: Prepare ftrace_64_mprofile.S for reuse by PPC32

PPC64 mprofile versions and PPC32 are very similar.

Modify the PPC64 version so that it can be reused for PPC32.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/82a732915dc71ee766e31809350939331944006d.1640017960.git.christophe.leroy@csgroup.eu
2 years agopowerpc/ftrace: directly call of function graph tracer by ftrace caller
Christophe Leroy [Mon, 20 Dec 2021 16:38:35 +0000 (16:38 +0000)]
powerpc/ftrace: directly call of function graph tracer by ftrace caller

Modify function graph tracer to be handled directly by the standard
ftrace caller.

This is made possible as powerpc now supports
CONFIG_DYNAMIC_FTRACE_WITH_ARGS.

This change simplifies the call of function graph ftrace.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/04d196585ff81bde06a000bd9c633a33a5b21130.1640017960.git.christophe.leroy@csgroup.eu
2 years agopowerpc/ftrace: Refactor ftrace_{en/dis}able_ftrace_graph_caller
Christophe Leroy [Mon, 20 Dec 2021 16:38:31 +0000 (16:38 +0000)]
powerpc/ftrace: Refactor ftrace_{en/dis}able_ftrace_graph_caller

ftrace_enable_ftrace_graph_caller() and
ftrace_disable_ftrace_graph_caller() have common code.

They will have even more common code after the following patch.

Refactor into a single ftrace_modify_ftrace_graph_caller() function.
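
After the refactor the two entry points are expected to be thin wrappers,
along the lines of:

  int ftrace_enable_ftrace_graph_caller(void)
  {
          return ftrace_modify_ftrace_graph_caller(true);
  }

  int ftrace_disable_ftrace_graph_caller(void)
  {
          return ftrace_modify_ftrace_graph_caller(false);
  }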

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/f37785a531f1a8f201e1b3da45997a5c77e9d820.1640017960.git.christophe.leroy@csgroup.eu
2 years agopowerpc/ftrace: Implement CONFIG_DYNAMIC_FTRACE_WITH_ARGS
Christophe Leroy [Mon, 20 Dec 2021 16:38:28 +0000 (16:38 +0000)]
powerpc/ftrace: Implement CONFIG_DYNAMIC_FTRACE_WITH_ARGS

Implement CONFIG_DYNAMIC_FTRACE_WITH_ARGS. It accelerates the calls
made for livepatching.

Also note that, powerpc being the last architecture to convert to
CONFIG_DYNAMIC_FTRACE_WITH_ARGS, it will now be possible to remove
klp_arch_set_pc() on all architectures.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/5831f711a778fcd6eb51eb5898f1faae4378b35b.1640017960.git.christophe.leroy@csgroup.eu
2 years agopowerpc/ftrace: Prepare PPC64's ftrace_caller() for CONFIG_DYNAMIC_FTRACE_WITH_ARGS
Christophe Leroy [Mon, 20 Dec 2021 16:38:25 +0000 (16:38 +0000)]
powerpc/ftrace: Prepare PPC64's ftrace_caller() for CONFIG_DYNAMIC_FTRACE_WITH_ARGS

In order to implement CONFIG_DYNAMIC_FTRACE_WITH_ARGS, change ftrace_caller()
to handle LIVEPATCH the same way as ftrace_regs_caller().

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/850817333cc76593699032e8e9a70d8c36e1af1e.1640017960.git.christophe.leroy@csgroup.eu