Christophe Leroy [Thu, 7 Jul 2022 14:55:15 +0000 (16:55 +0200)]
powerpc/ppc-opcode: Define and use PPC_RAW_TRAP() and PPC_RAW_TW()
Add PPC_RAW_TRAP() and PPC_RAW_TW() and use them instead of open coding the trap instructions.
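For illustration, such macros can be built from the ISA encoding of tw TO,RA,RB (primary opcode 31, extended opcode 4), with PPC_RAW_TRAP() being the unconditional trap tw 31,0,0 (0x7fe00008). The exact field helpers used in ppc-opcode.h may differ; this is only a sketch:
  /* illustrative definitions, not necessarily the exact ones merged */
  #define PPC_RAW_TW(t0, a, b)   (0x7c000008 | ((t0) << 21) | ((a) << 16) | ((b) << 11))
  #define PPC_RAW_TRAP()         PPC_RAW_TW(31, 0, 0)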
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/52c7e522e56a38e3ff0363906919445920005a8f.1657205708.git.christophe.leroy@csgroup.eu
Christophe Leroy [Thu, 7 Jul 2022 14:55:14 +0000 (16:55 +0200)]
powerpc/probes: Remove ppc_opcode_t
ppc_opcode_t is just a u32. There is no point in hiding u32
behind such a typedef. Remove it.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/b2d762191b095530789ac8b71b167c6740bb6aed.1657205708.git.christophe.leroy@csgroup.eu
Christophe Leroy [Thu, 7 Jul 2022 14:37:18 +0000 (16:37 +0200)]
powerpc: Remove remaining parts of oprofile
Commit 9850b6c69356 ("arch: powerpc: Remove oprofile") removed oprofile.
Remove all remaining parts of it.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/298432fe1a14c0a415760011d72c3f0999efd5e2.1657204631.git.christophe.leroy@csgroup.eu
Rashmica Gupta [Thu, 7 Jul 2022 14:37:17 +0000 (16:37 +0200)]
powerpc/perf: Use PVR rather than oprofile field to determine CPU version
Currently the perf CPU backend drivers detect what CPU they're on using
cur_cpu_spec->oprofile_cpu_type.
Although that works, it's a bit crufty to be using oprofile related fields,
especially seeing as oprofile is more or less unused these days.
It also means perf is reliant on the fragile logic in setup_cpu_spec()
which detects when we're using a logical PVR and copies back the PMU
related fields from the raw CPU entry. So let's check the PVR directly.
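A minimal sketch of the direct PVR check (mfspr(SPRN_PVR), PVR_VER() and register_power_pmu() are existing kernel helpers; the function body is illustrative, not the exact code merged):
  static int __init init_power9_pmu(void)
  {
          unsigned int pvr = mfspr(SPRN_PVR);

          /* only register this backend when running on a POWER9 core */
          if (PVR_VER(pvr) != PVR_POWER9)
                  return -ENODEV;

          return register_power_pmu(&power9_pmu);
  }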
Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Rashmica Gupta <rashmica.g@gmail.com>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
[chleroy: Added power10 and fixed checkpatch issues]
Reviewed-and-tested-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Reviewed-and-tested-By: Kajol Jain <kjain@linux.ibm.com> [For 24x7 side changes]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20c0ee7f99dbf0dbf8658df6b39f84753e6db1ef.1657204631.git.christophe.leroy@csgroup.eu
Christophe Leroy [Fri, 1 Jul 2022 06:06:15 +0000 (08:06 +0200)]
powerpc/32s: Fix boot failure with KASAN + SMP + JUMP_LABEL_FEATURE_CHECK_DEBUG
Since commit 4291d085b0b0 ("powerpc/32s: Make pte_update() non atomic on 603 core"), pte_update() has been using
mmu_has_feature(MMU_FTR_HPTE_TABLE) to avoid a useless atomic
operation on 603 cores.
When kasan_early_init() sets up the early zero shadow, it uses
__set_pte_at(). On book3s/32, __set_pte_at() calls pte_update()
when CONFIG_SMP is selected in order to ensure the preservation of
_PAGE_HASHPTE in case of concurrent update of the PTE. But that's
too early for mmu_has_feature(), so when
CONFIG_JUMP_LABEL_FEATURE_CHECK_DEBUG is selected, mmu_has_feature()
calls printk(). That's too early to call printk() because KASAN
early zero shadow page is not set up yet. It leads to a deadlock.
However, when kasan_early_init() is called, there is only one CPU
running and no risk of concurrent PTE update. So __set_pte_at() can
be called with the 'percpu' flag. With that flag set, the PTE is
written directly instead of being written via pte_update().
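A sketch of the resulting call in kasan_early_init() (the address and PTE arguments are placeholders; the point is the final 'percpu' argument):
  /*
   * Only one CPU is running this early, so no concurrent PTE update is
   * possible: pass percpu = 1 so the PTE is written directly instead of
   * going through pte_update() and mmu_has_feature().
   */
  __set_pte_at(&init_mm, va, ptep, pte, 1);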
Fixes: 4291d085b0b0 ("powerpc/32s: Make pte_update() non atomic on 603 core")
Reported-by: Erhard Furtner <erhard_f@mailbox.org>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/2ee707512b8b212b079b877f4ceb525a1606a3fb.1656655567.git.christophe.leroy@csgroup.eu
Christophe Leroy [Tue, 14 Jun 2022 10:34:09 +0000 (12:34 +0200)]
powerpc/32: Set an IBAT covering up to _einittext during init
Always set an IBAT covering up to _einittext during init because when
CONFIG_MODULES is not selected there is no reason to have an exception
handler for kernel instruction TLB misses.
It implies DBAT and IBAT are now totally independent, IBATs being set
by setibat() and DBATs by setbat().
This allows reverting commit 9bb162fa26ed ("powerpc/603: Fix boot failure with DEBUG_PAGEALLOC and KFENCE").
Reported-by: Maxime Bizon <mbizon@freebox.fr>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/ce7f04a39593934d9b1ee68c69144ccd3d4da4a1.1655202804.git.christophe.leroy@csgroup.eu
Christophe Leroy [Tue, 14 Jun 2022 10:34:08 +0000 (12:34 +0200)]
powerpc/32: Call mmu_mark_initmem_nx() regardless of data block mapping.
mark_initmem_nx() calls either mmu_mark_initmem_nx() or
set_memory_attr() based on return from v_block_mapped()
of _sinittext.
But we can now handle text and data independently, so that
text may be mapped by block even when data is mapped by pages.
On the 8xx for instance, at startup 32Mbytes of memory are
pinned in TLB. So the pinned entries need to go away for sinittext.
In the next patch a BAT will be set to also cover sinittext on book3s/32,
so mmu_mark_initmem_nx() will also need to be called even when
data above sinittext is not mapped with BATs.
As this is highly dependent on the platform, call mmu_mark_initmem_nx()
regardless of data block mapping. Then the platform will know what to
do.
Modify 8xx mmu_mark_initmem_nx() so that inittext mapping is modified
only when pagealloc debug and kfence are not active, otherwise inittext
is mapped with standard pages. And don't do anything on kernel text
which is already mapped with PAGE_KERNEL_TEXT.
Fixes: da1adea07576 ("powerpc/8xx: Allow STRICT_KERNEL_RWX with pinned TLB")
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/db3fc14f3bfa6215b0786ef58a6e2bc1e1f964d7.1655202804.git.christophe.leroy@csgroup.eu
Nicholas Piggin [Mon, 11 Jul 2022 03:06:52 +0000 (13:06 +1000)]
powerpc/mce: use early_cpu_to_node() in mce_init()
cpu_to_node() is not yet available (setup_arch() is called before
setup_per_cpu_areas() by start_kernel()).
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220711030653.150950-1-npiggin@gmail.com
Nicholas Piggin [Wed, 25 May 2022 02:23:58 +0000 (12:23 +1000)]
powerpc/64s: Remove spurious fault flushing for NMMU
Commit 6d8278c414cb2 ("powerpc/64s/radix: do not flush TLB on spurious fault") removed the TLB flush for spurious faults, except when a
coprocessor (nest MMU) maps the address space. This is not needed
because the NMMU workaround in the PTE permission upgrade paths
prevents PTEs existing with less restrictive access permissions than
their corresponding TLB entries have.
Remove it and replace with a comment.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220525022358.780745-4-npiggin@gmail.com
Nicholas Piggin [Wed, 25 May 2022 02:23:57 +0000 (12:23 +1000)]
powerpc/64s: POWER10 nest MMU can upgrade PTE access authority without TLB flush
The nest MMU in POWER9 does not re-fetch the PTE in response to
permission mismatch, contrary to the architecture[*] and unlike the core
MMU. This requires a TLB flush before upgrading permissions of valid
PTEs, for any address space with a coprocessor attached.
Per (non-public) Nest MMU Workbook, POWER10 nest MMU conforms to the
architecture in this regard, so skip the workaround.
[*] See: Power ISA Version 3.1B, 6.10.1.2 Modifying a Translation Table
Entry, Setting a Reference or Change Bit or Upgrading Access
Authority (PTE Subject to Atomic Hardware Updates):
"If the only change being made to a valid PTE that is subject to
atomic hardware updates is to set the Reference or Change bit to
1 or to upgrade access authority, a simpler sequence suffices
because the translation hardware will refetch the PTE if an
access is attempted for which the only problems were reference
and/or change bits needing to be set or insufficient access
authority."
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220525022358.780745-3-npiggin@gmail.com
Nicholas Piggin [Wed, 25 May 2022 02:23:56 +0000 (12:23 +1000)]
powerpc/64s: POWER10 nest MMU does not require flush escalation workaround
Per the (non-public) Nest MMU Workbook, the POWER10 and POWER9P NMMUs do not
cache PTEs in the PWC, so they do not require a PWC flush to invalidate these
translations.
Skip the workaround on POWER10 and later.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220525022358.780745-2-npiggin@gmail.com
Nicholas Piggin [Fri, 15 Jul 2022 01:26:36 +0000 (11:26 +1000)]
powerpc: add documentation for HWCAPs
Take the arm64 HWCAP documentation file and adjust it for powerpc.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Segher Boessenkool <segher@kernel.crashing.org>
[mpe: Fix ARCH_2_05 comment, as noticed by Tulio.]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220715012636.165948-1-npiggin@gmail.com
Nicholas Piggin [Fri, 20 May 2022 12:36:49 +0000 (22:36 +1000)]
powerpc/vdso: Fix __kernel_sync_dicache sequence with coherent icache
Processors with coherent icache require the sequence sync ; icbi ; isync
to ensure store->execute coherency. icbi (to any address) must be
executed to ensure isync flushes the pipeline. See "POWER9 Processor
User's Manual, 4.6.2.2 Instruction Cache Block Invalidate (icbi)" for
details.
__kernel_sync_dicache is missing icbi for the coherent icache path.
Add it.
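The actual fix is in the VDSO assembly, but the required sequence can be illustrated with a C helper (hypothetical name, GCC inline asm); on coherent-icache processors the icbi may target any mapped address:
  static inline void sync_icache_coherent(void)
  {
          unsigned long any_addr = (unsigned long)&any_addr;

          /*
           * sync orders prior stores; icbi (to any address) is needed so
           * that the following isync flushes the pipeline.
           */
          asm volatile("sync; icbi 0,%0; isync" : : "r" (any_addr) : "memory");
  }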
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220520123649.258440-1-npiggin@gmail.com
Pali Rohár [Wed, 6 Jul 2022 10:43:08 +0000 (12:43 +0200)]
powerpc/pci: Add config option for using all 256 PCI buses
By default on PPC32 PCI bus numbers are unique across all PCI domains.
So a system could have only 256 PCI buses independently of available PCI
domains.
This is due to filling DT property pci-OF-bus-map which does not support
a multi-domain setup.
On all powerpc platforms except chrp and powermac there is no DT
property pci-OF-bus-map anymore and therefore it is possible on
non-chrp/powermac platforms to avoid this limitation of maximum number
of 256 PCI buses in a system even on multi-domain setup.
But avoiding this limitation would mean that all PCI and PCIe devices
would appear at completely different BDF addresses, as every PCI
domain would start numbering PCI buses from zero (instead of from the last
bus number of the previously enumerated PCI domain). Such a change could break
existing software which expects fixed PCI bus numbers.
So add a new config option CONFIG_PPC_PCI_BUS_NUM_DOMAIN_DEPENDENT which
enables this change. By default it is disabled. It causes the initial
value of hose->first_busno to be zero.
Signed-off-by: Pali Rohár <pali@kernel.org>
[mpe: Minor change log wording]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220706104308.5390-6-pali@kernel.org
Pali Rohár [Wed, 6 Jul 2022 10:43:07 +0000 (12:43 +0200)]
powerpc/pci: Disable filling pci-OF-bus-map for non-chrp/powermac
Creating or filling pci-OF-bus-map property in the device-tree is
deprecated since May 2006 [1] and was used only in old platforms like
PowerMac.
Currently kernel code handles it only for chrp and powermac code. So
completely disable filling pci-OF-bus-map property for non-chrp and
non-powermac platforms.
[1] - https://lore.kernel.org/linuxppc-dev/1148016268.13249.14.camel@localhost.localdomain/
Signed-off-by: Pali Rohár <pali@kernel.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220706104308.5390-5-pali@kernel.org
Pali Rohár [Wed, 6 Jul 2022 10:43:06 +0000 (12:43 +0200)]
powerpc/pci: Hide pci_create_OF_bus_map() for non-chrp code
Function pci_create_OF_bus_map() is used only in chrp code.
So hide it from all other platforms.
Signed-off-by: Pali Rohár <pali@kernel.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220706104308.5390-4-pali@kernel.org
Pali Rohár [Wed, 6 Jul 2022 10:43:05 +0000 (12:43 +0200)]
powerpc/pci: Make pcibios_make_OF_bus_map() static
Function pcibios_make_OF_bus_map() is used only in pci_32.c. So make it
static.
Signed-off-by: Pali Rohár <pali@kernel.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220706104308.5390-3-pali@kernel.org
Pali Rohár [Wed, 6 Jul 2022 10:43:04 +0000 (12:43 +0200)]
powerpc/pci: Hide pci_device_from_OF_node() for non-powermac code
Function pci_device_from_OF_node() is used only in powermac code. So
hide it from all other platforms.
Signed-off-by: Pali Rohár <pali@kernel.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220706104308.5390-2-pali@kernel.org
Pali Rohár [Wed, 13 Jul 2022 13:44:29 +0000 (15:44 +0200)]
powerpc: dts: turris1x.dts: Add CPLD reboot node
The CPLD firmware can reset the board when value 0x01 is written to CPLD memory
offset 0x0d. Define a syscon-reboot node for this reset support.
Fixes:
54c15ec3b738 ("powerpc: dts: Add DTS file for CZ.NIC Turris 1.x routers")
Signed-off-by: Pali Rohár <pali@kernel.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220713134429.18748-1-pali@kernel.org
Pali Rohár [Wed, 6 Jul 2022 10:10:43 +0000 (12:10 +0200)]
powerpc/fsl-pci: Fix Class Code of PCIe Root Port
By default, old pre-3.0 Freescale PCIe controllers report the invalid PCI Class
Code 0x0b20 for the PCIe Root Port. This can be seen in the lspci -b output on a
P2020 board which has such a pre-3.0 controller:
$ lspci -bvnn
00:00.0 Power PC [0b20]: Freescale Semiconductor Inc P2020E [1957:0070] (rev 21)
!!! Invalid class 0b20 for header type 01
Capabilities: [4c] Express Root Port (Slot-), MSI 00
Fix this issue by programming the correct PCI Class Code 0x0604 for a PCIe Root
Port into the Freescale-specific PCIe register 0x474.
With this change lspci -b output is:
$ lspci -bvnn
00:00.0 PCI bridge [0604]: Freescale Semiconductor Inc P2020E [1957:0070] (rev 21) (prog-if 00 [Normal decode])
Capabilities: [4c] Express Root Port (Slot-), MSI 00
Without any "Invalid class" error. So the class code is properly reflected
into the standard (read-only) PCI register 0x08.
The same fix is already implemented in the U-Boot pcie_fsl.c driver in commit:
http://source.denx.de/u-boot/u-boot/-/commit/d18d06ac35229345a0af80977a408cfbe1d1015b
The fix activated by U-Boot stays active after booting the Linux kernel. But
boards which use an older U-Boot version without that fix are still affected and
require this fix in the kernel.
So implement this class code fix also in kernel fsl_pci.c driver.
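A hedged sketch of how the result can be checked from the kernel side using the existing early config accessors (the helper name is hypothetical; standard register 0x08 holds the revision in the low byte and the class code in the upper 24 bits):
  static bool fsl_pcie_root_port_class_ok(struct pci_controller *hose)
  {
          u32 class_rev;

          /* the root port sits at bus 0, devfn 0 on these controllers */
          early_read_config_dword(hose, 0, 0, PCI_CLASS_REVISION, &class_rev);

          /* expect 0x0604 (PCI-to-PCI bridge) rather than the bogus 0x0b20 */
          return (class_rev >> 16) == PCI_CLASS_BRIDGE_PCI;
  }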
Cc: stable@vger.kernel.org
Signed-off-by: Pali Rohár <pali@kernel.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220706101043.4867-1-pali@kernel.org
Masahiro Yamada [Mon, 25 Jul 2022 01:56:19 +0000 (10:56 +0900)]
powerpc/purgatory: Omit use of bin2c
The .incbin assembler directive is much faster than bin2c + $(CC).
Do similar refactoring as in commit 4c0f032d4963 ("s390/purgatory: Omit use of bin2c").
Please note the .quad directive matches size_t in C (both are 8 bytes)
because the purgatory is compiled only for the 64-bit kernel
(KEXEC_FILE depends on PPC64).
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Reviewed-by: Segher Boessenkool <segher@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220725015619.618070-1-masahiroy@kernel.org
Laurent Dufour [Wed, 13 Jul 2022 15:47:29 +0000 (17:47 +0200)]
powerpc/pseries/mobility: set NMI watchdog factor during an LPM
During an LPM, while the memory transfer is in progress, accessing pages
that have not yet been transferred to the arrival side generates latencies.
Thus, the NMI watchdog may be triggered too frequently, which increases the
risk of hitting an NMI interrupt in a bad place in the kernel, leading to a
kernel panic.
Disabling the hard lockup watchdog until the memory transfer completes could
be too strong a workaround: some users would still want this timeout to be
triggered eventually if the system hangs, even during an LPM.
Introduce a new sysctl variable, nmi_watchdog_factor. It allows a factor to
be applied to the NMI watchdog timeout during an LPM. Just before the CPUs
are stopped for the switchover sequence, the NMI watchdog timer is set
to watchdog_thresh + factor%.
A value of 0 has no effect. The default value is 200, meaning that the
NMI watchdog timeout is set to 30s during LPM (based on a 10s watchdog_thresh
value). Once the memory transfer is complete, the factor is reset to 0.
Setting this value to a high number is akin to disabling the NMI watchdog
during an LPM.
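The arithmetic behind the factor, sketched in C (names are illustrative, not the kernel's):
  /* factor is a percentage added on top of watchdog_thresh */
  static unsigned long lpm_wd_timeout(unsigned long watchdog_thresh,
                                      unsigned long factor)
  {
          return watchdog_thresh + watchdog_thresh * factor / 100;
  }

  /* e.g. watchdog_thresh = 10s and factor = 200  ->  10 + 20 = 30 seconds */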
Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220713154729.80789-5-ldufour@linux.ibm.com
Laurent Dufour [Wed, 13 Jul 2022 15:47:28 +0000 (17:47 +0200)]
powerpc/watchdog: introduce a NMI watchdog's factor
Introduce a factor which applies to the NMI watchdog timeout.
This factor is a percentage added to the watchdog_thresh value. The value is
set under watchdog_mutex protection and lockup_detector_reconfigure()
is called to recompute wd_panic_timeout_tb.
Once the factor is set, it remains in effect until it is set back to 0, which
means no impact.
Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220713154729.80789-4-ldufour@linux.ibm.com
Laurent Dufour [Wed, 13 Jul 2022 15:47:27 +0000 (17:47 +0200)]
watchdog: export lockup_detector_reconfigure
In some circumstances it may be useful to reconfigure the watchdog
from inside the kernel.
On PowerPC, this may be helpful before and after an LPAR migration (LPM) is
initiated, because the migration introduces latencies and the watchdog,
especially the NMI watchdog, is expected to be triggered during this operation.
Reconfiguring the watchdog with a factor prevents it from triggering too
frequently during LPM.
Rename lockup_detector_reconfigure() as __lockup_detector_reconfigure() and
create a new function lockup_detector_reconfigure() calling
__lockup_detector_reconfigure() under the protection of watchdog_mutex.
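A minimal sketch of the rename-and-wrap pattern described (the body of __lockup_detector_reconfigure() stands in for the existing reconfiguration logic):
  static void __lockup_detector_reconfigure(void)
  {
          /* existing reconfiguration logic, now expected to run with
           * watchdog_mutex held */
  }

  void lockup_detector_reconfigure(void)
  {
          mutex_lock(&watchdog_mutex);
          __lockup_detector_reconfigure();
          mutex_unlock(&watchdog_mutex);
  }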
Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com>
[mpe: Squash in build fix from Laurent, reported by Sachin]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220713154729.80789-3-ldufour@linux.ibm.com
Laurent Dufour [Wed, 13 Jul 2022 15:47:26 +0000 (17:47 +0200)]
powerpc/mobility: wait for memory transfer to complete
In pseries_migration_partition(), loop until the memory transfer is
complete. This way the calling drmgr process will not exit too early,
allowing callbacks to be run only once the migration is fully completed.
If the VASI state is read after the hypervisor has completed the
migration, the HCALL returns H_PARAMETER. We can safely assume that
the memory transfer is complete if this happens.
This will also allow the NMI watchdog state to be managed in the next commits.
Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com>
Reviewed-by: Nathan Lynch <nathanl@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220713154729.80789-2-ldufour@linux.ibm.com
Michael Ellerman [Mon, 27 Jun 2022 14:02:39 +0000 (00:02 +1000)]
selftests/powerpc/ptrace: Add peek/poke of FPRs
Currently the ptrace-gpr test only tests the GET/SET(FP)REGS ptrace
APIs. But there's an alternate (older) API, called PEEK/POKEUSR.
Add some minimal testing of PEEK/POKEUSR of the FPRs. This is sufficient
to detect the bug that was fixed recently in the 32-bit ptrace FPR
handling.
Depends-on: 8e1278444446 ("powerpc/32: Fix overread/overwrite of thread_struct via ptrace")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220627140239.2464900-13-mpe@ellerman.id.au
Michael Ellerman [Mon, 27 Jun 2022 14:02:38 +0000 (00:02 +1000)]
selftests/powerpc/ptrace: Use more interesting values
The ptrace-gpr test uses fixed values to test that registers can be
read/written via ptrace. In particular it sets all GPRs to 1, which
means the test could miss some types of bugs - e.g. if the kernel was
only returning the low word.
So generate some random values at startup and use those instead.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220627140239.2464900-12-mpe@ellerman.id.au
Michael Ellerman [Mon, 27 Jun 2022 14:02:37 +0000 (00:02 +1000)]
selftests/powerpc/ptrace: Make child errors more obvious
Use the FAIL_IF() macro so that errors in the child report a line
number, rather than just silently exiting.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220627140239.2464900-11-mpe@ellerman.id.au
Michael Ellerman [Mon, 27 Jun 2022 14:02:36 +0000 (00:02 +1000)]
selftests/powerpc/ptrace: Do more of ptrace-gpr in asm
The ptrace-gpr test includes some inline asm to load GPR and FPR
registers. It then goes back to C to wait for the parent to trace it and
then checks register contents.
The split between inline asm and C is fragile, it relies on the compiler
not using any non-volatile GPRs after the inline asm block. It also
requires a very large and unwieldy inline asm block.
So convert the logic to set registers, wait, and store registers to a
single asm function, meaning there's no window for the compiler to
intervene.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220627140239.2464900-10-mpe@ellerman.id.au
Michael Ellerman [Mon, 27 Jun 2022 14:02:35 +0000 (00:02 +1000)]
selftests/powerpc/ptrace: Build the ptrace-gpr test as 32-bit when possible
The ptrace-gpr test can now be built 32-bit, so do that if that's the
compiler default rather than forcing a 64-bit build.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220627140239.2464900-9-mpe@ellerman.id.au
Michael Ellerman [Mon, 27 Jun 2022 14:02:34 +0000 (00:02 +1000)]
selftests/powerpc/ptrace: Convert to load/store doubles
Some of the ptrace tests check the contents of floating point
registers. Currently these use float, which is always 4 bytes, but the
ptrace API supports saving/restoring 8 bytes per register, so switch to
using doubles to exercise the code more fully.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220627140239.2464900-8-mpe@ellerman.id.au
Michael Ellerman [Mon, 27 Jun 2022 14:02:33 +0000 (00:02 +1000)]
selftests/powerpc/ptrace: Drop unused load_fpr_single_precision()
This function is never called, drop it.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220627140239.2464900-7-mpe@ellerman.id.au
Michael Ellerman [Mon, 27 Jun 2022 14:02:32 +0000 (00:02 +1000)]
selftests/powerpc: Add 32-bit support to asm helpers
Add support for 32-bit builds to the asm helpers.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220627140239.2464900-6-mpe@ellerman.id.au
Michael Ellerman [Mon, 27 Jun 2022 14:02:31 +0000 (00:02 +1000)]
selftests/powerpc: Don't save TOC by default in asm helpers
There are some asm helpers for creating/popping stack frames in
basic_asm.h. They always save/restore r2 (the TOC pointer), but none of the
selftests change r2, so it's unnecessary to save it by default.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220627140239.2464900-5-mpe@ellerman.id.au
Michael Ellerman [Mon, 27 Jun 2022 14:02:30 +0000 (00:02 +1000)]
selftests/powerpc: Don't save CR by default in asm helpers
There are some asm helpers for creating/popping stack frames in
basic_asm.h. They always save/restore CR, but none of the selftests
touch non-volatile CR fields, so it's unnecessary to save them by
default.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220627140239.2464900-4-mpe@ellerman.id.au
Michael Ellerman [Mon, 27 Jun 2022 14:02:29 +0000 (00:02 +1000)]
selftests/powerpc/ptrace: Split CFLAGS better
Currently all ptrace tests are built 64-bit and with TM enabled.
Only the TM tests need TM enabled, so split those out into a separate
variable so that can be specified precisely.
Split the rest of the tests into a variable, and add -m64 to CFLAGS for
those tests, so that in a subsequent patch some tests can be made to
build 32-bit.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220627140239.2464900-3-mpe@ellerman.id.au
Michael Ellerman [Mon, 27 Jun 2022 14:02:28 +0000 (00:02 +1000)]
selftests/powerpc/ptrace: Set LOCAL_HDRS
Set LOCAL_HDRS so header changes cause rebuilds. The lib.mk logic adds
all the headers in LOCAL_HDRS as dependencies, so there's no need to
also list them explicitly.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220627140239.2464900-2-mpe@ellerman.id.au
Michael Ellerman [Mon, 27 Jun 2022 14:02:27 +0000 (00:02 +1000)]
selftests/powerpc: Ensure 16-byte stack pointer alignment
The PUSH/POP_BASIC_STACK helpers in basic_asm.h do not ensure that the
stack pointer is always 16-byte aligned, which is required per the ABI.
Fix the macros to do the alignment if the caller fails to.
Currently only one caller passes a non-aligned size, tm_signal_self(),
which hasn't been caught in testing, presumably because it's a leaf
function.
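The rounding the macros can apply, expressed in C for clarity (the real change is in the asm macros):
  /* round a requested stack frame size up to the 16-byte ABI alignment */
  #define ALIGN_FRAME_SIZE(size)  (((size) + 15UL) & ~15UL)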
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220627140239.2464900-1-mpe@ellerman.id.au
Michael Ellerman [Mon, 18 Jul 2022 09:51:58 +0000 (19:51 +1000)]
powerpc: Fix all occurences of duplicate words
Since commit 87c78b612f4f ("powerpc: Fix all occurences of "the the"")
fixed "the the", there's now a steady stream of patches fixing other
duplicate words.
Just fix them all at once, to save the overhead of dealing with
individual patches for each case.
This leaves a few cases of "that that", which in some contexts is
correct.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220718095158.326606-1-mpe@ellerman.id.au
Michael Ellerman [Mon, 25 Jul 2022 02:04:44 +0000 (12:04 +1000)]
Merge branch 'fixes' into next
Bring in a build fix for GCC12 from our fixes branch.
Ning Qiang [Wed, 13 Jul 2022 15:37:34 +0000 (23:37 +0800)]
macintosh/adb: fix oob read in do_adb_query() function
In the do_adb_query() function of drivers/macintosh/adb.c, req->data is copied
from userland. The parameter req->data[2] is not bounds checked, and the array
size of adb_handler[] is 16, so reading adb_handler[req->data[2]].original_address
and adb_handler[req->data[2]].handler_id can lead to an out-of-bounds read.
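One plausible shape of the fix, sketched (the constant 16 is the adb_handler[] array size mentioned above; the surrounding code is paraphrased):
  /* req->data[2] comes from userland and indexes adb_handler[16] */
  if (req->data[2] >= 16)
          return -EINVAL;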
Cc: stable <stable@kernel.org>
Signed-off-by: Ning Qiang <sohu0106@126.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220713153734.2248-1-sohu0106@126.com
Scott Cheloha [Wed, 13 Jul 2022 20:23:35 +0000 (15:23 -0500)]
watchdog/pseries-wdt: initial support for H_WATCHDOG-based watchdog timers
PAPR v2.12 defines a new hypercall, H_WATCHDOG. The hypercall permits
guest control of one or more virtual watchdog timers. The timers have
millisecond granularity. The guest is terminated when a timer
expires.
This patch adds a watchdog driver for these timers, "pseries-wdt".
pseries_wdt_probe() currently assumes the existence of only one
platform device and always assigns it watchdogNumber 1. If we ever
expose more than one timer to userspace we will need to devise a way
to assign a distinct watchdogNumber to each platform device at device
registration time.
Signed-off-by: Scott Cheloha <cheloha@linux.ibm.com>
Acked-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220713202335.1217647-5-cheloha@linux.ibm.com
Scott Cheloha [Wed, 13 Jul 2022 20:23:34 +0000 (15:23 -0500)]
powerpc/pseries: register pseries-wdt device with platform bus
PAPR v2.12 defines a new hypercall, H_WATCHDOG. The hypercall permits
guest control of one or more virtual watchdog timers.
These timers do not conform to PowerPC device conventions. They are
not affixed to any extant bus, nor do they have full representation in
the device tree.
As a workaround we represent them as platform devices.
This patch registers a single platform device, "pseries-wdt", with the
platform bus if the FW_FEATURE_WATCHDOG flag is set.
A driver for this device, "pseries-wdt", will be introduced in a
subsequent patch.
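A hedged sketch of the registration (firmware_has_feature(), platform_device_register_simple() and machine_subsys_initcall() are existing kernel APIs; the exact call site in the pseries setup code may differ):
  static int __init pseries_wdt_init(void)
  {
          /* only register the device when the hypervisor offers H_WATCHDOG */
          if (firmware_has_feature(FW_FEATURE_WATCHDOG))
                  platform_device_register_simple("pseries-wdt", PLATFORM_DEVID_NONE,
                                                  NULL, 0);
          return 0;
  }
  machine_subsys_initcall(pseries, pseries_wdt_init);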
Signed-off-by: Scott Cheloha <cheloha@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220713202335.1217647-4-cheloha@linux.ibm.com
Scott Cheloha [Wed, 13 Jul 2022 20:23:33 +0000 (15:23 -0500)]
powerpc/pseries: add FW_FEATURE_WATCHDOG flag
PAPR v2.12 specifies a new optional function set, "hcall-watchdog",
for the /rtas/ibm,hypertas-functions property. The presence of this
function set indicates support for the H_WATCHDOG hypercall.
Check for this function set and, if present, set the new
FW_FEATURE_WATCHDOG flag.
Signed-off-by: Scott Cheloha <cheloha@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220713202335.1217647-3-cheloha@linux.ibm.com
Scott Cheloha [Wed, 13 Jul 2022 20:23:32 +0000 (15:23 -0500)]
powerpc/pseries: hvcall.h: add H_WATCHDOG opcode, H_NOOP return code
PAPR v2.12 defines a new hypercall, H_WATCHDOG. The hypercall permits
guest control of one or more virtual watchdog timers.
Add the opcode for the H_WATCHDOG hypercall to hvcall.h. While here,
add a definition for H_NOOP, a possible return code for H_WATCHDOG.
Signed-off-by: Scott Cheloha <cheloha@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220713202335.1217647-2-cheloha@linux.ibm.com
Michael Ellerman [Mon, 18 Jul 2022 13:44:18 +0000 (23:44 +1000)]
powerpc/64s: Disable stack variable initialisation for prom_init
With GCC 12 allmodconfig prom_init fails to build:
Error: External symbol 'memset' referenced from prom_init.c
make[2]: *** [arch/powerpc/kernel/Makefile:204: arch/powerpc/kernel/prom_init_check] Error 1
The allmodconfig build enables KASAN, so all calls to memset in
prom_init should be converted to __memset by the #ifdefs in
asm/string.h, because prom_init must use the non-KASAN instrumented
versions.
The build failure happens because there's a call to memset that hasn't
been caught by the pre-processor and converted to __memset. Typically
that's because it's a memset generated by the compiler itself, and that
is the case here.
With GCC 12, allmodconfig enables CONFIG_INIT_STACK_ALL_PATTERN, which
causes the compiler to emit memset calls to initialise on-stack
variables with a pattern.
Because prom_init is non-user-facing boot-time only code, as a
workaround just disable stack variable initialisation to unbreak the
build.
Reported-by: Sudip Mukherjee <sudipm.mukherjee@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220718134418.354114-1-mpe@ellerman.id.au
Uwe Kleine-König [Sun, 12 Jun 2022 21:34:00 +0000 (23:34 +0200)]
powerpc/52xx: Mark gpt driver as not removable
Returning an error code (here -EBUSY) from a remove callback doesn't
prevent the driver from being unloaded. The only effect is that an error
message is emitted and the driver is removed anyhow.
So instead drop the remove function (which is equivalent to returning zero)
and set the suppress_bind_attrs property to make it impossible to unload
the driver via sysfs.
This is a preparation for making platform remove callbacks return void.
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220612213400.159257-1-u.kleine-koenig@pengutronix.de
Athira Rajeev [Fri, 20 May 2022 08:46:30 +0000 (14:16 +0530)]
docs: ABI: sysfs-bus-event_source-devices: Document sysfs caps entry for PMU
Details are added about the "caps" attribute group in the ABI documentation.
This is used to expose some of the PMU attributes in the "caps"
directory under /sys/bus/event_source/devices/<dev>/. The <dev>/caps directory
will contain information about features that the platform-specific PMU
supports.
Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220520084630.15181-2-atrajeev@linux.vnet.ibm.com
Athira Rajeev [Fri, 20 May 2022 08:46:29 +0000 (14:16 +0530)]
powerpc/perf: Add support for caps under sysfs in powerpc
Add caps support under "/sys/bus/event_source/devices/<pmu>/"
for powerpc. This directory can be used to expose some of the
specific features that the powerpc PMU supports to the user.
Example: pmu_name. The name of the registered PMU depends on the
platform; it could be power9, power10 or the Generic Compat PMU.
Currently the only way to know which PMU is registered is from the
dmesg logs. But clearing dmesg makes it difficult to know the exact
PMU backend used, and even extracting it from dmesg is complicated,
as the logs have to be parsed and filtered for the pmu name. Exposing
it via caps makes it easy, as it can be read directly from sysfs.
Add a caps directory to /sys/bus/event_source/devices/cpu/
for power8, power9, power10 and the generic compat PMU in the respective
PMU driver code. Update the pmu_name file under the caps folder
in core-book3s using "attr_update".
The information exposed currently:
- pmu_name : Underlying PMU name from the driver
Example result with power9 pmu:
# ls /sys/bus/event_source/devices/cpu/caps
pmu_name
# cat /sys/bus/event_source/devices/cpu/caps/pmu_name
POWER9
Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220520084630.15181-1-atrajeev@linux.vnet.ibm.com
Joel Stanley [Fri, 10 Jun 2022 04:40:06 +0000 (14:10 +0930)]
powerpc/perf: Give generic PMU a nice name
When booting on a machine that uses the compat pmu driver we see this:
[ 0.071192] GENERIC_COMPAT performance monitor hardware support registered
Which is a bit shouty. Give it a nicer name.
Signed-off-by: Joel Stanley <joel@jms.id.au>
Reviewed-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220610044006.2095806-1-joel@jms.id.au
Michael Ellerman [Sat, 9 Jul 2022 09:32:20 +0000 (19:32 +1000)]
Merge branch 'topic/ppc-kvm' into next
Merge KVM related commits we are keeping in a topic branch in case of
any conflicts with generic KVM changes.
Michael Ellerman [Sat, 9 Jul 2022 09:29:34 +0000 (19:29 +1000)]
Merge branch 'fixes' into next
Merge our fixes branch. In particular this brings in commit
986481618023 ("powerpc/book3e: Fix PUD allocation size in
map_kernel_page()") which fixes a build failure in next, because commit
2db2008e6363 ("powerpc/64e: Rewrite p4d_populate() as a static inline
function") depends on it.
Jason A. Donenfeld [Thu, 30 Jun 2022 12:16:54 +0000 (14:16 +0200)]
powerpc/powernv: delay rng platform device creation until later in boot
The platform device for the rng must be created much later in boot.
Otherwise it tries to connect to a parent that doesn't yet exist,
resulting in this splat:
[ 0.000478] kobject: '(null)' ((____ptrval____)): is not initialized, yet kobject_get() is being called.
[ 0.002925] [c000000002a0fb30] [c00000000073b0bc] kobject_get+0x8c/0x100 (unreliable)
[ 0.003071] [c000000002a0fba0] [c00000000087e464] device_add+0xf4/0xb00
[ 0.003194] [c000000002a0fc80] [c000000000a7f6e4] of_device_add+0x64/0x80
[ 0.003321] [c000000002a0fcb0] [c000000000a800d0] of_platform_device_create_pdata+0xd0/0x1b0
[ 0.003476] [c000000002a0fd00] [c00000000201fa44] pnv_get_random_long_early+0x240/0x2e4
[ 0.003623] [c000000002a0fe20] [c000000002060c38] random_init+0xc0/0x214
This patch fixes the issue by doing the platform device creation inside
of machine_subsys_initcall.
Fixes:
f3eac426657d ("powerpc/powernv: wire up rng during setup_arch")
Cc: stable@vger.kernel.org
Reported-by: Sachin Sant <sachinp@linux.ibm.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Tested-by: Sachin Sant <sachinp@linux.ibm.com>
[mpe: Change "of node" to "platform device" in change log]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220630121654.1939181-1-Jason@zx2c4.com
Aneesh Kumar K.V [Wed, 29 Jun 2022 05:09:25 +0000 (10:39 +0530)]
powerpc/memhotplug: Add add_pages override for PPC
With commit ffa0b64e3be5 ("powerpc: Fix virt_addr_valid() for 64-bit Book3E & 32-bit")
the kernel now validates the address against the high_memory value. This results
in the below BUG_ON with dax pfns.
[ 635.798741][T26531] kernel BUG at mm/page_alloc.c:5521!
1:mon> e
cpu 0x1: Vector: 700 (Program Check) at [c000000007287630]
    pc: c00000000055ed48: free_pages.part.0+0x48/0x110
    lr: c00000000053ca70: tlb_finish_mmu+0x80/0xd0
    sp: c0000000072878d0
   msr: 800000000282b033
  current = 0xc00000000afabe00
  paca    = 0xc00000037ffff300  irqmask: 0x03  irq_happened: 0x05
    pid   = 26531, comm = 50-landscape-sy
kernel BUG at :5521!
Linux version 5.19.0-rc3-14659-g4ec05be7c2e1 (kvaneesh@ltc-boston8) (gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0, GNU ld (GNU Binutils for Ubuntu) 2.34) #625 SMP Thu Jun 23 00:35:43 CDT 2022
1:mon> t
[link register   ] c00000000053ca70 tlb_finish_mmu+0x80/0xd0
[c0000000072878d0] c00000000053ca54 tlb_finish_mmu+0x64/0xd0 (unreliable)
[c000000007287900] c000000000539424 exit_mmap+0xe4/0x2a0
[c0000000072879e0] c00000000019fc1c mmput+0xcc/0x210
[c000000007287a20] c000000000629230 begin_new_exec+0x5e0/0xf40
[c000000007287ae0] c00000000070b3cc load_elf_binary+0x3ac/0x1e00
[c000000007287c10] c000000000627af0 bprm_execve+0x3b0/0xaf0
[c000000007287cd0] c000000000628414 do_execveat_common.isra.0+0x1e4/0x310
[c000000007287d80] c00000000062858c sys_execve+0x4c/0x60
[c000000007287db0] c00000000002c1b0 system_call_exception+0x160/0x2c0
[c000000007287e10] c00000000000c53c system_call_common+0xec/0x250
The fix is to make sure we update high_memory on memory hotplug.
This is similar to what x86 does in commit 3072e413e305 ("mm/memory_hotplug: introduce add_pages").
Fixes: ffa0b64e3be5 ("powerpc: Fix virt_addr_valid() for 64-bit Book3E & 32-bit")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220629050925.31447-1-aneesh.kumar@linux.ibm.com
Naveen N. Rao [Mon, 27 Jun 2022 19:11:19 +0000 (00:41 +0530)]
powerpc/bpf: Fix use of user_pt_regs in uapi
Trying to build a .c file that includes <linux/bpf_perf_event.h>:
$ cat test_bpf_headers.c
#include <linux/bpf_perf_event.h>
throws the below error:
/usr/include/linux/bpf_perf_event.h:14:28: error: field ‘regs’ has incomplete type
14 | bpf_user_pt_regs_t regs;
| ^~~~
This is because we typedef bpf_user_pt_regs_t to 'struct user_pt_regs'
in arch/powerpc/include/uapi/asm/bpf_perf_event.h, but 'struct
user_pt_regs' is not exposed to userspace.
Powerpc has both pt_regs and user_pt_regs structures. However, unlike
arm64 and s390, we expose user_pt_regs to userspace as just 'pt_regs'.
As such, we should typedef bpf_user_pt_regs_t to 'struct pt_regs' for
userspace.
Within the kernel though, we want to typedef bpf_user_pt_regs_t to
'struct user_pt_regs'.
Remove arch/powerpc/include/uapi/asm/bpf_perf_event.h so that the
uapi/asm-generic version of the header is exposed to userspace.
Introduce arch/powerpc/include/asm/bpf_perf_event.h so that we can
typedef bpf_user_pt_regs_t to 'struct user_pt_regs' for use within the
kernel.
Note that this was not showing up with the bpf selftest build since
tools/include/uapi/asm/bpf_perf_event.h didn't include the powerpc
variant.
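A sketch of what the new kernel-internal header plausibly looks like (the uapi side now comes from the asm-generic header, which gives userspace a 'struct pt_regs' based typedef):
  /* arch/powerpc/include/asm/bpf_perf_event.h (sketch) */
  #ifndef _ASM_POWERPC_BPF_PERF_EVENT_H
  #define _ASM_POWERPC_BPF_PERF_EVENT_H

  #include <asm/ptrace.h>

  typedef struct user_pt_regs bpf_user_pt_regs_t;

  #endif /* _ASM_POWERPC_BPF_PERF_EVENT_H */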
Fixes: a6460b03f945ee ("powerpc/bpf: Fix broken uapi for BPF_PROG_TYPE_PERF_EVENT")
Cc: stable@vger.kernel.org # v4.20+
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
[mpe: Use typical naming for header include guard]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220627191119.142867-1-naveen.n.rao@linux.vnet.ibm.com
Pali Rohár [Fri, 24 Jun 2022 08:55:50 +0000 (10:55 +0200)]
powerpc: dts: Add DTS file for CZ.NIC Turris 1.x routers
CZ.NIC Turris 1.0 and 1.1 are open source routers, they have dual-core
PowerPC Freescale P2020 CPU and are based on Freescale P2020RDB-PC-A board.
Hardware design is fully open source, all firmware and hardware design
files are available at Turris project website:
https://docs.turris.cz/hw/turris-1x/turris-1x/
https://project.turris.cz/en/hardware.html
Signed-off-by: Pali Rohár <pali@kernel.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220624085550.20570-1-pali@kernel.org
Juerg Haefliger [Fri, 20 May 2022 11:54:31 +0000 (13:54 +0200)]
KVM: PPC: Kconfig: Fix indentation
The convention for indentation seems to be a single tab. Help text is
further indented by an additional two whitespaces. Fix the lines that
violate these rules.
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220520115431.147593-1-juergh@canonical.com
Juerg Haefliger [Fri, 20 May 2022 11:52:29 +0000 (13:52 +0200)]
powerpc/powernv: Kconfig: Replace single quotes
Replace single quotes with double quotes which seems to be the convention
for strings.
Signed-off-by: Juerg Haefliger <juergh@canonical.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220520115229.147368-1-juergh@canonical.com
Juerg Haefliger [Thu, 26 May 2022 06:57:37 +0000 (08:57 +0200)]
powerpc: Kconfig.debug: Remove extra empty line
Remove a stray extra empty line.
Signed-off-by: Juerg Haefliger <juerg.haefliger@canonical.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220526065737.86370-3-juerg.haefliger@canonical.com
Juerg Haefliger [Thu, 26 May 2022 06:57:36 +0000 (08:57 +0200)]
powerpc: Kconfig: Replace tabs with whitespaces
Replace tabs after keywords with whitespaces to be consistent.
Signed-off-by: Juerg Haefliger <juerg.haefliger@canonical.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220526065737.86370-2-juerg.haefliger@canonical.com
Madhavan Srinivasan [Thu, 29 Apr 2021 05:02:08 +0000 (10:32 +0530)]
powerpc/perf: Update MMCR2 to support event exclude_idle
struct perf_event_attr supports excluding counting of the idle task.
This is sent to the kernel via perf_event_attr.exclude_idle, and
in the perf tool the user can use the ":I" event modifier to enable this
for a specific event.
The Monitor Mode Control Register 2 (MMCR2) SPR has control bits
for each PMC to freeze counting based on the Control Register
CTRL[RUN] state. CTRL[RUN] is not set when the idle task is
running. This patch adds a check for the event's attr.exclude_idle to
set the MMCR2[FCnWAIT] bit.
Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210429050208.266619-1-maddy@linux.ibm.com
Alexey Kardashevskiy [Wed, 1 Jun 2022 04:01:17 +0000 (14:01 +1000)]
powerpc/pseries/iommu: Print ibm,query-pe-dma-windows parameters
PowerVM has a stricter policy about allocating TCEs for LPARs, and
often there are not enough TCEs for a 1:1 mapping. This adds the supported
numbers to a dev_info() message to help analyze bug reports.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220601040117.1467710-1-aik@ozlabs.ru
Alexey Kardashevskiy [Tue, 28 Jun 2022 08:02:28 +0000 (18:02 +1000)]
KVM: PPC: Do not warn when userspace asked for too big TCE table
KVM manages emulated TCE tables for guest LIOBNs via a two-level table
which maps up to 128TiB with 16MB IOMMU pages (enabled in QEMU by default)
and MAX_ORDER=11 (the kernel's default). Note that the last level of
the table is allocated only when the actual TCE is updated.
However, these tables are created via ioctl() on kvmfd, so userspace
can trigger WARN_ON_ONCE_GFP(order >= MAX_ORDER, gfp) in mm/page_alloc.c
and flood dmesg.
This adds __GFP_NOWARN.
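The change itself is small; a sketch of the pattern (the actual allocation site and size computation in the KVM TCE code are paraphrased):
  /* sz is derived from userspace-provided window parameters, so silence
   * the order >= MAX_ORDER warning instead of letting users flood dmesg */
  tbl = kzalloc(sz, GFP_KERNEL | __GFP_NOWARN);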
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220628080228.1508847-1-aik@ozlabs.ru
Hari Bathini [Fri, 10 Jun 2022 15:55:52 +0000 (21:25 +0530)]
powerpc/bpf/32: Add instructions for atomic_[cmp]xchg
This adds two atomic opcodes BPF_XCHG and BPF_CMPXCHG on ppc32, both
of which include the BPF_FETCH flag. The kernel's atomic_cmpxchg
operation fundamentally has 3 operands, but we only have two register
fields. Therefore the operand we compare against (the kernel's API
calls it 'old') is hard-coded to be BPF_REG_R0. Also, kernel's
atomic_cmpxchg returns the previous value at dst_reg + off. JIT the
same for BPF too with return value put in BPF_REG_0.
BPF_REG_R0 = atomic_cmpxchg(dst_reg + off, BPF_REG_R0, src_reg);
Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
Tested-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> (ppc64le)
Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220610155552.25892-6-hbathini@linux.ibm.com
Hari Bathini [Fri, 10 Jun 2022 15:55:51 +0000 (21:25 +0530)]
powerpc/bpf/32: add support for BPF_ATOMIC bitwise operations
Adding instructions for ppc32 for
atomic_and
atomic_or
atomic_xor
atomic_fetch_add
atomic_fetch_and
atomic_fetch_or
atomic_fetch_xor
Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
Tested-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> (ppc64le)
Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220610155552.25892-5-hbathini@linux.ibm.com
Hari Bathini [Fri, 10 Jun 2022 15:55:50 +0000 (21:25 +0530)]
powerpc/bpf/64: Add instructions for atomic_[cmp]xchg
This adds two atomic opcodes BPF_XCHG and BPF_CMPXCHG on ppc64, both
of which include the BPF_FETCH flag. The kernel's atomic_cmpxchg
operation fundamentally has 3 operands, but we only have two register
fields. Therefore the operand we compare against (the kernel's API
calls it 'old') is hard-coded to be BPF_REG_R0. Also, kernel's
atomic_cmpxchg returns the previous value at dst_reg + off. JIT the
same for BPF too with return value put in BPF_REG_0.
BPF_REG_R0 = atomic_cmpxchg(dst_reg + off, BPF_REG_R0, src_reg);
Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
Tested-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> (ppc64le)
Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220610155552.25892-4-hbathini@linux.ibm.com
Hari Bathini [Fri, 10 Jun 2022 15:55:49 +0000 (21:25 +0530)]
powerpc/bpf/64: add support for atomic fetch operations
Adding instructions for ppc64 for
atomic[64]_fetch_add
atomic[64]_fetch_and
atomic[64]_fetch_or
atomic[64]_fetch_xor
Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
Tested-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> (ppc64le)
Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220610155552.25892-3-hbathini@linux.ibm.com
Hari Bathini [Fri, 10 Jun 2022 15:55:48 +0000 (21:25 +0530)]
powerpc/bpf/64: add support for BPF_ATOMIC bitwise operations
Adding instructions for ppc64 for
atomic[64]_and
atomic[64]_or
atomic[64]_xor
Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
Tested-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> (ppc64le)
Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220610155552.25892-2-hbathini@linux.ibm.com
Laurent Dufour [Mon, 23 May 2022 16:43:53 +0000 (18:43 +0200)]
powerpc/64s: Don't read H_BLOCK_REMOVE characteristics in radix mode
There is no need to read the H_BLOCK_REMOVE characteristics when running in
Radix mode because this hcall is never called.
Furthermore, since commit 387e220a2e5e ("powerpc/64s: Move hash MMU
support code under CONFIG_PPC_64S_HASH_MMU") defines
pseries_lpar_read_hblkrm_characteristics() as an empty function when
CONFIG_PPC_64S_HASH_MMU is not set, the #ifdef block can be removed.
Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220523164353.26441-1-ldufour@linux.ibm.com
Haowen Bai [Tue, 31 May 2022 09:19:50 +0000 (17:19 +0800)]
powerpc/papr_scm: use dev_get_drvdata
Eliminate direct accesses to the driver_data field.
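The pattern, for illustration (dev_get_drvdata() is the standard driver-core accessor; struct papr_scm_priv is the driver's existing private data type):
  /* before: struct papr_scm_priv *p = dev->driver_data;  after: */
  struct papr_scm_priv *p = dev_get_drvdata(dev);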
Signed-off-by: Haowen Bai <baihaowen@meizu.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1653988790-19999-1-git-send-email-baihaowen@meizu.com
Michael Ellerman [Tue, 31 May 2022 06:59:36 +0000 (16:59 +1000)]
powerpc/64: Drop ppc_inst_as_str()
The ppc_inst_as_str() macro tries to make printing variable length,
aka "prefixed", instructions convenient. It mostly succeeds, but it does
hide an on-stack buffer, which triggers stack protector.
More problematically it doesn't compile at all with GCC 12,
with -Wdangling-pointer, due to the fact that it returns the char buffer
declared inside the macro:
arch/powerpc/kernel/trace/ftrace.c: In function '__ftrace_modify_call':
./include/linux/printk.h:475:44: error: using a dangling pointer to '__str' [-Werror=dangling-pointer=]
475 | #define printk(fmt, ...) printk_index_wrap(_printk, fmt, ##__VA_ARGS__)
...
arch/powerpc/kernel/trace/ftrace.c:567:17: note: in expansion of macro 'pr_err'
567 | pr_err("Not expected bl: opcode is %s\n", ppc_inst_as_str(op));
| ^~~~~~
./arch/powerpc/include/asm/inst.h:156:14: note: '__str' declared here
156 | char __str[PPC_INST_STR_LEN]; \
| ^~~~~
This could be fixed by having the caller declare the buffer, but in some
places there'd need to be two buffers. In all cases where
ppc_inst_as_str() is used the output is not really meant for user
consumption, it's almost always indicative of a kernel bug.
A simpler solution is to just print the value as an unsigned long. For
normal instructions the output is identical. For prefixed instructions
the value is printed as a single 64-bit quantity, whereas previously the
low half was printed first. But that is good enough for debug output,
especially as prefixed instructions will be rare in kernel code in
practice.
Old:
c000000000111170 60420000 ori r2,r2,0
c000000000111174 04100001 e580fb00 .long 0xe580fb0004100001
New:
c00000000010f90c 60420000 ori r2,r2,0
c00000000010f910 e580fb0004100001 .long 0xe580fb0004100001
Reported-by: Bagas Sanjaya <bagasdotme@gmail.com>
Reported-by: Petr Mladek <pmladek@suse.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Tested-by: Bagas Sanjaya <bagasdotme@gmail.com>
Link: https://lore.kernel.org/r/20220531065936.3674348-1-mpe@ellerman.id.au
Michael Ellerman [Thu, 16 Jun 2022 07:07:05 +0000 (17:07 +1000)]
selftests/powerpc: Add missing files to .gitignores
These were missed when the respective tests were added, add them now.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220616070705.1941829-1-mpe@ellerman.id.au
Fabiano Rosas [Fri, 24 Jun 2022 14:27:12 +0000 (11:27 -0300)]
KVM: PPC: Align pt_regs in kvm_vcpu_arch structure
The H_ENTER_NESTED hypercall receives as second parameter the address
of a region of memory containing the values for the nested guest
privileged registers. We currently use the pt_regs structure contained
within kvm_vcpu_arch for that end.
Most hypercalls that receive a memory address expect that region to
not cross a 4K page boundary. We would want H_ENTER_NESTED to follow
the same pattern so this patch ensures the pt_regs structure sits
within a page.
Note: the pt_regs structure is currently 384 bytes in size, so
aligning to 512 is sufficient to ensure it will not cross a 4K page
and avoids punching too big a hole in struct kvm_vcpu_arch.
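An illustrative sketch of the constraint (the structure is reduced to the essentials; __aligned(512) is the standard kernel attribute macro):
  struct kvm_vcpu_arch_example {
          /* pt_regs is currently 384 bytes; aligning it to 512 guarantees it
           * cannot straddle a 4K page boundary when passed to H_ENTER_NESTED */
          struct pt_regs regs __aligned(512);
  };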
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Murilo Opsfelder Araújo <muriloo@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220624142712.790491-1-farosas@linux.ibm.com
Fabiano Rosas [Tue, 14 Jun 2022 16:52:04 +0000 (13:52 -0300)]
KVM: PPC: Book3S HV: tracing: Add missing hcall names
The kvm_trace_symbol_hcall macro is missing several of the hypercalls
defined in hvcall.h.
Add the most common ones that are issued during guest lifetime,
including the ones that are only used by QEMU and SLOF.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220614165204.549229-1-farosas@linux.ibm.com
Fabiano Rosas [Wed, 25 May 2022 13:05:54 +0000 (10:05 -0300)]
KVM: PPC: Book3S HV: Provide more detailed timings for P9 entry path
Alter the data collection points for the debug timing code in the P9
path to be more in line with what the code does. The points where we
accumulate time are now the following:
vcpu_entry: From vcpu_run_hv entry until the start of the inner loop;
guest_entry: From the start of the inner loop until the guest entry in
asm;
in_guest: From the guest entry in asm until the return to KVM C code;
guest_exit: From the return into KVM C code until the corresponding
hypercall/page fault handling or re-entry into the guest;
hypercall: Time spent handling hcalls in the kernel (hcalls can go to
QEMU, not accounted here);
page_fault: Time spent handling page faults;
vcpu_exit: vcpu_run_hv exit (almost no code here currently).
Like before, these are exposed in debugfs in a file called
"timings". There are four values:
- number of occurrences of the accumulation point;
- total time the vcpu spent in the phase in ns;
- shortest time the vcpu spent in the phase in ns;
- longest time the vcpu spent in the phase in ns;
===
Before:
rm_entry: 53132 16793518 256 4060
rm_intr: 53132 2125914 22 340
rm_exit: 53132 24108344 374 2180
guest: 53132 40980507996 404 9997650
cede: 0 0 0 0
After:
vcpu_entry: 34637 7716108 178 4416
guest_entry: 52414 49365608 324 747542
in_guest: 52411 40828715840 258 9997480
guest_exit: 52410 19681717182 826 102496674
vcpu_exit: 34636 1744462 38 182
hypercall: 45712 22878288 38 1307962
page_fault: 992 111104034 568 168688
With just one instruction (hcall):
vcpu_entry: 1 942 942 942
guest_entry: 1 4044 4044 4044
in_guest: 1 1540 1540 1540
guest_exit: 1 3542 3542 3542
vcpu_exit: 1 80 80 80
hypercall: 0 0 0 0
page_fault: 0 0 0 0
===
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220525130554.2614394-6-farosas@linux.ibm.com
Fabiano Rosas [Wed, 25 May 2022 13:05:53 +0000 (10:05 -0300)]
KVM: PPC: Book3S HV: Expose timing functions to module code
The next patch adds new timing points to the P9 entry path, some of
which are in the module code, so we need to export the timing
functions.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220525130554.2614394-5-farosas@linux.ibm.com
Fabiano Rosas [Wed, 25 May 2022 13:05:52 +0000 (10:05 -0300)]
KVM: PPC: Book3S HV: Decouple the debug timing from the P8 entry path
We are currently doing the timing for debug purposes of the P9 entry
path using the accumulators and terminology defined by the old entry
path for P8 machines.
Not only are the "real-mode" and "napping" mentions out of place for
the P9 radix entry path, but we also cannot change them because the
timing code is coupled to the structures defined in struct
kvm_vcpu_arch.
Add a new CONFIG_KVM_BOOK3S_HV_P9_TIMING to enable the timing code for
the P9 entry path. For now, just add the new CONFIG and duplicate the
structures. A subsequent patch will add the P9 changes.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220525130554.2614394-4-farosas@linux.ibm.com
Fabiano Rosas [Wed, 25 May 2022 13:05:51 +0000 (10:05 -0300)]
KVM: PPC: Book3S HV: Add a new config for P8 debug timing
Turn the existing Kconfig KVM_BOOK3S_HV_EXIT_TIMING into
KVM_BOOK3S_HV_P8_TIMING in preparation for the addition of a new
config for P9 timings.
This applies only to P8 code, the generic timing code is still kept
under KVM_BOOK3S_HV_EXIT_TIMING.
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220525130554.2614394-3-farosas@linux.ibm.com
Fabiano Rosas [Wed, 25 May 2022 13:05:50 +0000 (10:05 -0300)]
KVM: PPC: Book3S HV: Fix "rm_exit" entry in debugfs timings
At debugfs/kvm/<pid>/vcpu0/timings we show how long each part of the
code takes to run:
$ cat /sys/kernel/debug/kvm/*-*/vcpu0/timings
rm_entry: 123785 49398892 118 4898
rm_intr: 123780 6075890 22 390
rm_exit: 0 0 0 0 <-- NOK
guest: 123780 46732919988 402 9997638
cede: 0 0 0 0 <-- OK, no cede napping in P9
The "rm_exit" is always showing zero because it is the last one and
end_timing does not increment the counter of the previous entry.
We can fix it by calling accumulate_time again instead of
end_timing. That way the counter gets incremented. The rest of the
arithmetic can be ignored because there are no timing points after
this and the accumulators are reset before the next round.
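A rough sketch of that change; only accumulate_time(), end_timing() and the vcpu accumulators are taken from this log, the exact call site is an assumption:
	/* Before: the last phase was closed without crediting its counter. */
	end_timing(vcpu);

	/* After: one more accumulation bumps the previous entry's counter;
	 * which accumulator receives the leftover time is irrelevant because
	 * everything is reset before the next round. */
	accumulate_time(vcpu, &vcpu->arch.rm_exit);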
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220525130554.2614394-2-farosas@linux.ibm.com
Christophe Leroy [Tue, 28 Jun 2022 14:48:59 +0000 (16:48 +0200)]
powerpc/64e: KASAN Full support for BOOK3E/64
We now have memory organised in a way that allows
implementing KASAN.
Unlike book3s/64, book3e always has translation active, so the only
thing needed to use KASAN is to set up an early zero shadow mapping
just after setting a stack pointer and before calling early_setup().
The memory layout is now as follows
+------------------------+ Kernel virtual map end (0xc000200000000000)
| |
| 16TB of KASAN map |
| |
+------------------------+ Kernel KASAN shadow map start
| |
| 16TB of IO map |
| |
+------------------------+ Kernel IO map start
| |
| 16TB of vmemmap |
| |
+------------------------+ Kernel vmemmap start
| |
| 16TB of vmap |
| |
+------------------------+ Kernel virt start (0xc000100000000000)
| |
| 64TB of linear mem |
| |
+------------------------+ Kernel linear (0xc.....)
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/0bef8beda27baf71e3b9e8b13e620fba6e19499b.1656427701.git.christophe.leroy@csgroup.eu
Christophe Leroy [Tue, 28 Jun 2022 14:48:58 +0000 (16:48 +0200)]
powerpc/64e: Reorganise virtual memory
Reduce the size of the IO map in order to leave the last
quarter of the virtual map for the KASAN shadow mapping.
This gives the following layout.
+------------------------+ Kernel virtual map end (0xc000200000000000)
| |
| 16TB (unused) |
| |
+------------------------+ Kernel IO map end
| |
| 16TB of IO map |
| |
+------------------------+ Kernel IO map start
| |
| 16TB of vmemmap |
| |
+------------------------+ Kernel vmemmap start
| |
| 16TB of vmap |
| |
+------------------------+ Kernel virt start (0xc000100000000000)
| |
| 64TB of linear mem |
| |
+------------------------+ Kernel linear (0xc.....)
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/54ef01673bf14228106afd629f795c83acb9a00c.1656427701.git.christophe.leroy@csgroup.eu
Christophe Leroy [Tue, 28 Jun 2022 14:48:57 +0000 (16:48 +0200)]
powerpc/64e: Move virtual memory closer to linear memory
Today nohash/64 has linear memory based at 0xc000000000000000 and
virtual memory based at 0x8000000000000000.
In order to implement KASAN, we need to regroup both areas.
Move virtual memory to 0xc000100000000000.
This complicates the TLB miss handlers a bit. Until now, the memory
region was easily identified with the 4 upper bits of the address:
- 0 ==> User
- c ==> Linear Memory
- 8 ==> Virtual Memory
Now we need to rely on the 20 higher bits, with:
- 0xxxx ==> User
- c0000 ==> Linear Memory
- c0001 ==> Virtual Memory
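For illustration, the new region test can be written in C as below (the real handlers do it in assembly, and the helper name is made up):
/* Hypothetical helper mirroring the 20-bit check described above. */
static inline bool region_is_linear(unsigned long ea)
{
	/* 0xxxx => user, c0000 => linear memory, c0001 => virtual memory */
	return (ea >> 44) == 0xc0000;
}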
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/4b225168031449fc34fc7132f3923cc8dc54af60.1656427701.git.christophe.leroy@csgroup.eu
Christophe Leroy [Tue, 28 Jun 2022 14:48:56 +0000 (16:48 +0200)]
powerpc/64e: Remove unused REGION related macros
Those macros are not used anywhere. Remove them: they are soon
going to become wrong and are not worth updating since they are unused.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/f0efde8cee0924c3991790042b176ac77ad35e1f.1656427701.git.christophe.leroy@csgroup.eu
Christophe Leroy [Tue, 28 Jun 2022 14:48:55 +0000 (16:48 +0200)]
powerpc/64e: Remove MMU_FTR_USE_TLBRSRV and MMU_FTR_USE_PAIRED_MAS
Commit fb5a515704d7 ("powerpc: Remove platforms/wsp and associated
pieces") removed the last CPU having features MMU_FTRS_A2 and
commit cd68098bcedd ("powerpc: Clean up MMU_FTRS_A2 and
MMU_FTR_TYPE_3E") removed MMU_FTRS_A2 which was the last user of
MMU_FTR_USE_TLBRSRV and MMU_FTR_USE_PAIRED_MAS.
Remove all code that relies on MMU_FTR_USE_TLBRSRV and
MMU_FTR_USE_PAIRED_MAS.
With this change done, a TLB miss can happen before the MMU feature
fixups.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/cfd5a0ecdb1598da968832e1bddf7431ec267200.1656427701.git.christophe.leroy@csgroup.eu
Christophe Leroy [Tue, 28 Jun 2022 14:48:54 +0000 (16:48 +0200)]
powerpc/64e: Fix early TLB miss with KUAP
With KUAP, the TLB miss handler bails out when an access to user
memory is performed with a null TID.
But the normal TLB miss routine, which is only used early during boot,
does the check regardless for all memory areas, not only user memory.
By chance there is no early IO or vmalloc access, but when KASAN
comes we will start having early TLB misses.
Fix it by creating a special branch for user accesses similar to the
one in the 'bolted' TLB miss handlers. Unfortunately SPRN_MAS1 is
now read too early and there are no registers available to preserve
it so it will be read a second time.
Fixes: 57bc963837f5 ("powerpc/kuap: Wire-up KUAP on book3e/64")
Cc: stable@vger.kernel.org
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/8d6c5859a45935d6e1a336da4dc20be421e8cea7.1656427701.git.christophe.leroy@csgroup.eu
Christophe Leroy [Tue, 28 Jun 2022 14:43:35 +0000 (16:43 +0200)]
powerpc/ptdump: Fix display of RW pages on FSL_BOOK3E
On FSL_BOOK3E, _PAGE_RW is defined with two bits, one for user and one
for supervisor. As soon as one of the two bits is set, the page has
to be displayed as RW. But the way it is implemented today requires
both bits to be set in order to display it as RW.
Instead of displaying RW when the _PAGE_RW bits are set and R
otherwise, reverse the logic and display R when the _PAGE_RW bits are
all 0 and RW otherwise.
This change has no impact on other platforms as _PAGE_RW is a single
bit on all of them.
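A sketch of the inverted test, assuming the ptdump flag_info convention where the .set string is printed when (pte & mask) == val and .clear otherwise; the exact entry shown is an assumption based on this description:
	{
		.mask	= _PAGE_RW,
		.val	= 0,
		.set	= "r",		/* all RW bits clear */
		.clear	= "rw",		/* at least one RW bit set */
	},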
Fixes: 8eb07b187000 ("powerpc/mm: Dump linux pagetables")
Cc: stable@vger.kernel.org
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/0c33b96317811edf691e81698aaee8fa45ec3449.1656427391.git.christophe.leroy@csgroup.eu
Christophe Leroy [Thu, 23 Jun 2022 08:56:57 +0000 (10:56 +0200)]
powerpc/64e: Rewrite p4d_populate() as a static inline function
Rewrite p4d_populate() as a static inline function instead of
a macro.
This change allows typechecking and would have helped detecting
a recently found bug in map_kernel_page().
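The change is essentially of this shape (a sketch, not necessarily the literal patch):
/* Before: a macro accepts mismatched pointer types without complaint. */
#define p4d_populate(MM, P4D, PUD)	p4d_set(P4D, (unsigned long)PUD)

/* After: a static inline lets the compiler typecheck the arguments. */
static inline void p4d_populate(struct mm_struct *mm, p4d_t *p4d, pud_t *pud)
{
	p4d_set(p4d, (unsigned long)pud);
}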
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1b416f8a8fe1bc3f4e01175680ce310b7eb3a1e4.1655974565.git.christophe.leroy@csgroup.eu
Christophe Leroy [Mon, 20 Jun 2022 06:21:22 +0000 (08:21 +0200)]
powerpc: Remove _PAGE_SAO stub for book3e/64
Since commit 634093c59a12 ("powerpc/mm: enable
ARCH_HAS_VM_GET_PAGE_PROT"), _PAGE_SAO is used only in
arch/powerpc/mm/book3s64/pgtable.c
The _PAGE_SAO stub defined as 0 for book3e/64 can be removed.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/715e644fb3c7d992c0b71f6165ab6cf8c682055a.1655706069.git.christophe.leroy@csgroup.eu
Christophe Leroy [Tue, 14 Jun 2022 10:32:25 +0000 (12:32 +0200)]
powerpc/32: Remove __map_without_ltlbs
__map_without_ltlbs is used only for 40x, and only when
STRICT_KERNEL_RWX, KFENCE or DEBUG_PAGEALLOC is active.
Do the verification directly in the 40x version of mmu_mapin_ram()
and remove __map_without_ltlbs from core ppc32.
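A hedged sketch of the in-place check, assuming the 40x mmu_mapin_ram() simply declines to use large TLB entries when one of those options is active (the large-TLB helper name below is made up):
unsigned long __init mmu_mapin_ram(unsigned long base, unsigned long top)
{
	if (IS_ENABLED(CONFIG_STRICT_KERNEL_RWX) ||
	    IS_ENABLED(CONFIG_KFENCE) || debug_pagealloc_enabled())
		return 0;	/* fall back to small-page mappings */

	return map_with_ltlbs(base, top);	/* hypothetical large-TLB path */
}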
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/3422094db965d218c4c3d8580f526963a9ac897f.1655202721.git.christophe.leroy@csgroup.eu
Christophe Leroy [Tue, 14 Jun 2022 10:32:24 +0000 (12:32 +0200)]
powerpc/32: Remove 'noltlbs' kernel parameter
Mapping without large TLBs has no added value on the 8xx.
Mapping without large TLBs is still necessary on 40x when
selecting CONFIG_KFENCE or CONFIG_DEBUG_PAGEALLOC or
CONFIG_STRICT_KERNEL_RWX, but this is done automatically
and doesn't require user selection.
Remove the 'noltlbs' kernel parameter; the user has no reason
to use it.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/80ca17bd39cf608a8ebd0764d7064a498e131199.1655202721.git.christophe.leroy@csgroup.eu
Christophe Leroy [Tue, 14 Jun 2022 10:32:23 +0000 (12:32 +0200)]
powerpc/32: Remove the 'nobats' kernel parameter
Mapping without BATs doesn't bring any added value to the user.
Remove that option.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/6977314c823cfb728bc0273cea634b41807bfb64.1655202721.git.christophe.leroy@csgroup.eu
Christophe Leroy [Sat, 11 Jun 2022 06:51:57 +0000 (08:51 +0200)]
powerpc: Restore CONFIG_DEBUG_INFO in defconfigs
Commit f9b3cd245784 ("Kconfig.debug: make DEBUG_INFO selectable from a
choice") broke the selection of CONFIG_DEBUG_INFO by powerpc defconfigs.
It is now necessary to select one of the three DEBUG_INFO_DWARF*
options to get DEBUG_INFO enabled.
Replace DEBUG_INFO=y by DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y in all
defconfigs using the following command:
sed -i s/DEBUG_INFO=y/DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y/g `git grep -l DEBUG_INFO arch/powerpc/configs/`
Fixes: f9b3cd245784 ("Kconfig.debug: make DEBUG_INFO selectable from a choice")
Cc: stable@vger.kernel.org
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/98a4c2603bf9e4b776e219f5b8541d23aa24e854.1654930308.git.christophe.leroy@csgroup.eu
Christophe Leroy [Thu, 9 Jun 2022 10:16:42 +0000 (12:16 +0200)]
powerpc/irq: Simplify __do_irq()
Remove duplicated code by implementing a proper if/else.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/5a3b21311191f1240850db6ab29b19ac7885fe03.1654769775.git.christophe.leroy@csgroup.eu
Christophe Leroy [Thu, 9 Jun 2022 10:16:41 +0000 (12:16 +0200)]
powerpc/irq: Perform stack_overflow detection after switching to IRQ stack
When KASAN is enabled, as shown by the Oops below, the 2k limit is not
enough to allow a stack dump after stack overflow detection when
CONFIG_DEBUG_STACKOVERFLOW is selected:
do_IRQ: stack overflow: 1984
CPU: 0 PID: 126 Comm: systemd-udevd Not tainted 5.18.0-gentoo-PMacG4 #1
Call Trace:
Oops: Kernel stack overflow, sig: 11 [#1]
BE PAGE_SIZE=4K MMU=Hash SMP NR_CPUS=2 PowerMac
Modules linked in: sr_mod cdrom radeon(+) ohci_pci(+) hwmon i2c_algo_bit drm_ttm_helper ttm drm_dp_helper snd_aoa_i2sbus snd_aoa_soundbus snd_pcm ehci_pci snd_timer ohci_hcd snd ssb ehci_hcd 8250_pci soundcore drm_kms_helper pcmcia 8250 pcmcia_core syscopyarea usbcore sysfillrect 8250_base sysimgblt serial_mctrl_gpio fb_sys_fops usb_common pkcs8_key_parser fuse drm drm_panel_orientation_quirks configfs
CPU: 0 PID: 126 Comm: systemd-udevd Not tainted 5.18.0-gentoo-PMacG4 #1
NIP: c02e5558 LR: c07eb3bc CTR: c07f46a8
REGS: e7fe9f50 TRAP: 0000 Not tainted (5.18.0-gentoo-PMacG4)
MSR: 00001032 <ME,IR,DR,RI> CR: 44a14824 XER: 20000000
GPR00: c07eb3bc eaa1c000 c26baea0 eaa1c0a0 00000008 00000000 c07eb3bc eaa1c010
GPR08: eaa1c0a8 04f3f3f3 f1f1f1f1 c07f4c84 44a14824 0080f7e4 00000005 00000010
GPR16: 00000025 eaa1c154 eaa1c158 c0dbad64 00000020 fd543810 eaa1c0a0 eaa1c29e
GPR24: c0dbad44 c0db8740 05ffffff fd543802 eaa1c150 c0c9a3c0 eaa1c0a0 c0c9a3c0
NIP [c02e5558] kasan_check_range+0xc/0x2b4
LR [c07eb3bc] format_decode+0x80/0x604
Call Trace:
[eaa1c000] [c07eb3bc] format_decode+0x80/0x604 (unreliable)
[eaa1c070] [c07f4dac] vsnprintf+0x128/0x938
[eaa1c110] [c07f5788] sprintf+0xa0/0xc0
[eaa1c180] [c0154c1c] __sprint_symbol.constprop.0+0x170/0x198
[eaa1c230] [c07ee71c] symbol_string+0xf8/0x260
[eaa1c430] [c07f46d0] pointer+0x15c/0x710
[eaa1c4b0] [c07f4fbc] vsnprintf+0x338/0x938
[eaa1c550] [c00e8fa0] vprintk_store+0x2a8/0x678
[eaa1c690] [c00e94e4] vprintk_emit+0x174/0x378
[eaa1c6d0] [c00ea008] _printk+0x9c/0xc0
[eaa1c750] [c000ca94] show_stack+0x21c/0x260
[eaa1c7a0] [c07d0bd4] dump_stack_lvl+0x60/0x90
[eaa1c7c0] [c0009234] __do_IRQ+0x170/0x174
[eaa1c800] [c0009258] do_IRQ+0x20/0x34
[eaa1c820] [c00045b4] HardwareInterrupt_virt+0x108/0x10c
...
As the detection is performed asynchronously at IRQ time, we could be
long after the limit has been crossed, so increasing the limit would
not solve the problem completely.
In order to be sure that there is enough stack space for the stack
dump, do it after the switch to the IRQ stack. That way we can be sure
the stack is large enough, unless the IRQ stack itself has been
overfilled, in which case the end of life is close.
Reported-by: Erhard Furtner <erhard_f@mailbox.org>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/c215d714329f475b431a6193369035aadfc0d182.1654769775.git.christophe.leroy@csgroup.eu
Christophe Leroy [Thu, 9 Jun 2022 10:16:40 +0000 (12:16 +0200)]
powerpc/irq: Make __do_irq() static
Since commit 48cf12d88969 ("powerpc/irq: Inline call_do_irq() and
call_do_softirq()"), __do_irq() is not used outside irq.c.
Reorder the functions, make __do_irq() static, and drop its
declaration from irq.h.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/adbe1c8315ec2d63259f41468e82e51677bb1eda.1654769775.git.christophe.leroy@csgroup.eu
Christophe Leroy [Fri, 3 Jun 2022 13:08:14 +0000 (15:08 +0200)]
powerpc/irq: Increase stack_overflow detection limit when KASAN is enabled
When KASAN is enabled, as shown by the Oops below, the 2k limit is not
enough to allow a stack dump after stack overflow detection when
CONFIG_DEBUG_STACKOVERFLOW is selected:
do_IRQ: stack overflow: 1984
CPU: 0 PID: 126 Comm: systemd-udevd Not tainted 5.18.0-gentoo-PMacG4 #1
Call Trace:
Oops: Kernel stack overflow, sig: 11 [#1]
BE PAGE_SIZE=4K MMU=Hash SMP NR_CPUS=2 PowerMac
Modules linked in: sr_mod cdrom radeon(+) ohci_pci(+) hwmon i2c_algo_bit drm_ttm_helper ttm drm_dp_helper snd_aoa_i2sbus snd_aoa_soundbus snd_pcm ehci_pci snd_timer ohci_hcd snd ssb ehci_hcd 8250_pci soundcore drm_kms_helper pcmcia 8250 pcmcia_core syscopyarea usbcore sysfillrect 8250_base sysimgblt serial_mctrl_gpio fb_sys_fops usb_common pkcs8_key_parser fuse drm drm_panel_orientation_quirks configfs
CPU: 0 PID: 126 Comm: systemd-udevd Not tainted 5.18.0-gentoo-PMacG4 #1
NIP: c02e5558 LR: c07eb3bc CTR: c07f46a8
REGS: e7fe9f50 TRAP: 0000 Not tainted (5.18.0-gentoo-PMacG4)
MSR: 00001032 <ME,IR,DR,RI> CR: 44a14824 XER: 20000000
GPR00: c07eb3bc eaa1c000 c26baea0 eaa1c0a0 00000008 00000000 c07eb3bc eaa1c010
GPR08: eaa1c0a8 04f3f3f3 f1f1f1f1 c07f4c84 44a14824 0080f7e4 00000005 00000010
GPR16: 00000025 eaa1c154 eaa1c158 c0dbad64 00000020 fd543810 eaa1c0a0 eaa1c29e
GPR24: c0dbad44 c0db8740 05ffffff fd543802 eaa1c150 c0c9a3c0 eaa1c0a0 c0c9a3c0
NIP [c02e5558] kasan_check_range+0xc/0x2b4
LR [c07eb3bc] format_decode+0x80/0x604
Call Trace:
[eaa1c000] [c07eb3bc] format_decode+0x80/0x604 (unreliable)
[eaa1c070] [c07f4dac] vsnprintf+0x128/0x938
[eaa1c110] [c07f5788] sprintf+0xa0/0xc0
[eaa1c180] [c0154c1c] __sprint_symbol.constprop.0+0x170/0x198
[eaa1c230] [c07ee71c] symbol_string+0xf8/0x260
[eaa1c430] [c07f46d0] pointer+0x15c/0x710
[eaa1c4b0] [c07f4fbc] vsnprintf+0x338/0x938
[eaa1c550] [c00e8fa0] vprintk_store+0x2a8/0x678
[eaa1c690] [c00e94e4] vprintk_emit+0x174/0x378
[eaa1c6d0] [c00ea008] _printk+0x9c/0xc0
[eaa1c750] [c000ca94] show_stack+0x21c/0x260
[eaa1c7a0] [c07d0bd4] dump_stack_lvl+0x60/0x90
[eaa1c7c0] [c0009234] __do_IRQ+0x170/0x174
[eaa1c800] [c0009258] do_IRQ+0x20/0x34
[eaa1c820] [c00045b4] HardwareInterrupt_virt+0x108/0x10c
...
An investigation shows that on PPC32, calling dump_stack() requires
more than 1k bytes when KASAN is not selected and a bit more than 2k
bytes when KASAN is selected.
On PPC64 the registers are twice the size of PPC32 registers, so the
need should be approximately twice what it is on PPC32.
Meanwhile, THREAD_SIZE is twice as large on PPC64 as on PPC32, and
twice as large again when KASAN is selected.
So we can easily use the value of THREAD_SIZE to set the limit.
On PPC32, THREAD_SIZE is 8k without KASAN and 16k with KASAN.
On PPC64, THREAD_SIZE is 16k without KASAN.
To be on the safe side, leave 2k on PPC32 without KASAN, 4k with
KASAN, and 4k on PPC64 without KASAN. It means the limit should be
one fourth of THREAD_SIZE.
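A sketch of the resulting check, assuming the usual check_stack_overflow() helper in irq.c; the point is only that the hard-coded 2048 becomes a fraction of THREAD_SIZE:
static inline void check_stack_overflow(void)
{
	long sp;

	if (!IS_ENABLED(CONFIG_DEBUG_STACKOVERFLOW))
		return;

	sp = current_stack_pointer & (THREAD_SIZE - 1);

	/* check for stack overflow: is there less than 1/4th free? */
	if (unlikely(sp < THREAD_SIZE / 4)) {
		pr_err("do_IRQ: stack overflow: %ld\n", sp);
		dump_stack();
	}
}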
Reported-by: Erhard Furtner <erhard_f@mailbox.org>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/e8b4eb82a126c3c6c352173a544fe94609ff660b.1654261687.git.christophe.leroy@csgroup.eu
Christophe Leroy [Wed, 18 May 2022 08:48:55 +0000 (10:48 +0200)]
powerpc/irq: remove inline assembly in hard_irq_disable macro
Use WRITE_ONCE() instead of open coding the saving of the current
stack pointer.
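A sketch of the replacement, assuming hard_irq_disable() records the stack pointer in local_paca->saved_r1 as its inline asm used to:
	/* Before: an inline-asm store of r1 into local_paca->saved_r1. */
	/* After: plain C, letting the compiler emit the store. */
	WRITE_ONCE(local_paca->saved_r1, current_stack_pointer);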
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/9f05937d8722ddd2064a7c2362d8f9000e15e1ba.1652863723.git.christophe.leroy@csgroup.eu
Christophe Leroy [Wed, 18 May 2022 08:32:28 +0000 (10:32 +0200)]
powerpc/irq: Replace #ifdefs by IS_ENABLED()
Replace #ifdef CONFIG_PPC_IRQ_SOFT_MASK_DEBUG and #ifdef CONFIG_PERF_EVENTS
by IS_ENABLED() in hw_irq.h and plpar_wrappers.h.
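The general shape of the conversion, with a hypothetical helper standing in for the code actually guarded:
/* Before: the guarded code is invisible to the compiler when the
 * option is off, so it can bit-rot silently. */
#ifdef CONFIG_PPC_IRQ_SOFT_MASK_DEBUG
	do_soft_mask_checks();		/* hypothetical */
#endif

/* After: always compiled, optimised away when the option is disabled. */
	if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
		do_soft_mask_checks();	/* hypothetical */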
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/c1ded642f8d9002767f8fed48ed6d1e76254ed73.1652862729.git.christophe.leroy@csgroup.eu
Christophe Leroy [Wed, 18 May 2022 08:32:27 +0000 (10:32 +0200)]
powerpc/irq: Don't open code irq_soft_mask helpers
Use READ_ONCE() and WRITE_ONCE() instead of open coding the read and
write of the local PACA irq_soft_mask.
For the write, add a barrier to keep the memory clobber
that was there previously.
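A sketch of what the accessors look like after the conversion (names follow the existing irq_soft_mask helpers; details are assumptions):
static inline notrace unsigned long irq_soft_mask_return(void)
{
	return READ_ONCE(local_paca->irq_soft_mask);
}

static inline notrace void irq_soft_mask_set(unsigned long mask)
{
	WRITE_ONCE(local_paca->irq_soft_mask, mask);
	/* keep the compiler barrier the old asm memory clobber provided */
	barrier();
}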
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/e2454434992cc932a5a34b695ae981c0b2f4c28e.1652862729.git.christophe.leroy@csgroup.eu
Christophe Leroy [Wed, 18 May 2022 07:40:16 +0000 (09:40 +0200)]
powerpc/irq64: Remove get_irq_happened()
No need to open code the read of local_paca->irq_happened in
assembly; READ_ONCE() does the same.
Replace get_irq_happened() by READ_ONCE(local_paca->irq_happened).
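That is, at the call sites (sketch):
	/* replaces the get_irq_happened() asm helper */
	unsigned char happened = READ_ONCE(local_paca->irq_happened);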
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/af511b53e4eb51f8fbc51eda7f5597175e68dce6.1652859593.git.christophe.leroy@csgroup.eu