platform/kernel/linux-starfive.git
2 years ago  Merge branch 'for-next/spectre-bhb' into for-next/core
Will Deacon [Mon, 14 Mar 2022 19:05:13 +0000 (19:05 +0000)]
Merge branch 'for-next/spectre-bhb' into for-next/core

Merge in the latest Spectre mess to fix up conflicts with what was
already queued for 5.18 when the embargo finally lifted.

* for-next/spectre-bhb: (21 commits)
  arm64: Do not include __READ_ONCE() block in assembly files
  arm64: proton-pack: Include unprivileged eBPF status in Spectre v2 mitigation reporting
  arm64: Use the clearbhb instruction in mitigations
  KVM: arm64: Allow SMCCC_ARCH_WORKAROUND_3 to be discovered and migrated
  arm64: Mitigate spectre style branch history side channels
  arm64: proton-pack: Report Spectre-BHB vulnerabilities as part of Spectre-v2
  arm64: Add percpu vectors for EL1
  arm64: entry: Add macro for reading symbol addresses from the trampoline
  arm64: entry: Add vectors that have the bhb mitigation sequences
  arm64: entry: Add non-kpti __bp_harden_el1_vectors for mitigations
  arm64: entry: Allow the trampoline text to occupy multiple pages
  arm64: entry: Make the kpti trampoline's kpti sequence optional
  arm64: entry: Move trampoline macros out of ifdef'd section
  arm64: entry: Don't assume tramp_vectors is the start of the vectors
  arm64: entry: Allow tramp_alias to access symbols after the 4K boundary
  arm64: entry: Move the trampoline data page before the text page
  arm64: entry: Free up another register on kpti's tramp_exit path
  arm64: entry: Make the trampoline cleanup optional
  KVM: arm64: Allow indirect vectors to be used without SPECTRE_V3A
  arm64: spectre: Rename spectre_v4_patch_fw_mitigation_conduit
  ...

2 years ago  Merge branch 'for-next/fpsimd' into for-next/core
Will Deacon [Mon, 14 Mar 2022 19:04:22 +0000 (19:04 +0000)]
Merge branch 'for-next/fpsimd' into for-next/core

* for-next/fpsimd:
  arm64: cpufeature: Warn if we attempt to read a zero width field
  arm64: cpufeature: Add missing .field_width for GIC system registers
  arm64: signal: nofpsimd: Do not allocate fp/simd context when not available
  arm64: cpufeature: Always specify and use a field width for capabilities
  arm64: Always use individual bits in CPACR floating point enables
  arm64: Define CPACR_EL1_FPEN similarly to other floating point controls

2 years ago  Merge branch 'for-next/strings' into for-next/core
Will Deacon [Mon, 14 Mar 2022 19:02:52 +0000 (19:02 +0000)]
Merge branch 'for-next/strings' into for-next/core

* for-next/strings:
  Revert "arm64: Mitigate MTE issues with str{n}cmp()"
  arm64: lib: Import latest version of Arm Optimized Routines' strncmp
  arm64: lib: Import latest version of Arm Optimized Routines' strcmp

2 years ago  Merge branch 'for-next/rng' into for-next/core
Will Deacon [Mon, 14 Mar 2022 19:01:52 +0000 (19:01 +0000)]
Merge branch 'for-next/rng' into for-next/core

* for-next/rng:
  arm64: random: implement arch_get_random_int/_long based on RNDR

2 years ago  Merge branch 'for-next/perf' into for-next/core
Will Deacon [Mon, 14 Mar 2022 19:01:37 +0000 (19:01 +0000)]
Merge branch 'for-next/perf' into for-next/core

* for-next/perf: (25 commits)
  perf/marvell: Fix !CONFIG_OF build for CN10K DDR PMU driver
  drivers/perf: Add Apple icestorm/firestorm CPU PMU driver
  drivers/perf: arm_pmu: Handle 47 bit counters
  arm64: perf: Consistently make all event numbers as 16-bits
  arm64: perf: Expose some Armv9 common events under sysfs
  perf/marvell: cn10k DDR perf event core ownership
  perf/marvell: cn10k DDR perfmon event overflow handling
  perf/marvell: CN10k DDR performance monitor support
  dt-bindings: perf: marvell: cn10k ddr performance monitor
  perf/arm-cmn: Update watchpoint format
  perf/arm-cmn: Hide XP PUB events for CMN-600
  perf: replace bitmap_weight with bitmap_empty where appropriate
  perf: Replace acpi_bus_get_device()
  perf/marvell_cn10k: Fix unused variable warning when W=1 and CONFIG_OF=n
  perf/arm-cmn: Make arm_cmn_debugfs static
  perf: MARVELL_CN10K_TAD_PMU should depend on ARCH_THUNDER
  perf/arm-ccn: Use platform_get_irq() to get the interrupt
  irqchip/apple-aic: Move PMU-specific registers to their own include file
  arm64: dts: apple: Add t8103 PMU nodes
  arm64: dts: apple: Add t8103 PMU interrupt affinities
  ...

2 years ago  Merge branch 'for-next/pauth' into for-next/core
Will Deacon [Mon, 14 Mar 2022 19:01:32 +0000 (19:01 +0000)]
Merge branch 'for-next/pauth' into for-next/core

* for-next/pauth:
  arm64: Add support of PAuth QARMA3 architected algorithm
  arm64: cpufeature: Mark existing PAuth architected algorithm as QARMA5
  arm64: cpufeature: Account min_field_value when checking secondaries for PAuth

2 years ago  Merge branch 'for-next/mte' into for-next/core
Will Deacon [Mon, 14 Mar 2022 19:01:23 +0000 (19:01 +0000)]
Merge branch 'for-next/mte' into for-next/core

* for-next/mte:
  docs: sysfs-devices-system-cpu: document "asymm" value for mte_tcf_preferred
  arm64/mte: Remove asymmetric mode from the prctl() interface
  kasan: fix a missing header include of static_keys.h
  arm64/mte: Add userspace interface for enabling asymmetric mode
  arm64/mte: Add hwcap for asymmetric mode
  arm64/mte: Add a little bit of documentation for mte_update_sctlr_user()
  arm64/mte: Document ABI for asymmetric mode
  arm64: mte: avoid clearing PSTATE.TCO on entry unless necessary
  kasan: split kasan_*enabled() functions into a separate header

2 years ago  Merge branch 'for-next/mm' into for-next/core
Will Deacon [Mon, 14 Mar 2022 19:01:18 +0000 (19:01 +0000)]
Merge branch 'for-next/mm' into for-next/core

* for-next/mm:
  Documentation: vmcoreinfo: Fix htmldocs warning
  arm64/mm: Drop use_1G_block()
  arm64: avoid flushing icache multiple times on contiguous HugeTLB
  arm64: crash_core: Export MODULES, VMALLOC, and VMEMMAP ranges
  arm64/hugetlb: Define __hugetlb_valid_size()
  arm64/mm: avoid fixmap race condition when create pud mapping
  arm64/mm: Consolidate TCR_EL1 fields

2 years ago  Merge branch 'for-next/misc' into for-next/core
Will Deacon [Mon, 14 Mar 2022 19:01:12 +0000 (19:01 +0000)]
Merge branch 'for-next/misc' into for-next/core

* for-next/misc:
  arm64: mm: Drop 'const' from conditional arm64_dma_phys_limit definition
  arm64: clean up tools Makefile
  arm64: drop unused includes of <linux/personality.h>
  arm64: Do not defer reserve_crashkernel() for platforms with no DMA memory zones
  arm64: prevent instrumentation of bp hardening callbacks
  arm64: cpufeature: Remove cpu_has_fwb() check
  arm64: atomics: remove redundant static branch
  arm64: entry: Save some nops when CONFIG_ARM64_PSEUDO_NMI is not set

2 years ago  Merge branch 'for-next/linkage' into for-next/core
Will Deacon [Mon, 14 Mar 2022 19:01:05 +0000 (19:01 +0000)]
Merge branch 'for-next/linkage' into for-next/core

* for-next/linkage:
  arm64: module: remove (NOLOAD) from linker script
  linkage: remove SYM_FUNC_{START,END}_ALIAS()
  x86: clean up symbol aliasing
  arm64: clean up symbol aliasing
  linkage: add SYM_FUNC_ALIAS{,_LOCAL,_WEAK}()

2 years ago  Merge branch 'for-next/kselftest' into for-next/core
Will Deacon [Mon, 14 Mar 2022 19:00:58 +0000 (19:00 +0000)]
Merge branch 'for-next/kselftest' into for-next/core

* for-next/kselftest:
  kselftest/arm64: Log the PIDs of the parent and child in sve-ptrace
  kselftest/arm64: signal: Allow tests to be incompatible with features
  kselftest/arm64: mte: user_mem: test a wider range of values
  kselftest/arm64: mte: user_mem: add more test types
  kselftest/arm64: mte: user_mem: add test type enum
  kselftest/arm64: mte: user_mem: check different offsets and sizes
  kselftest/arm64: mte: user_mem: rework error handling
  kselftest/arm64: mte: user_mem: introduce tag_offset and tag_len
  kselftest/arm64: Remove local definitions of MTE prctls
  kselftest/arm64: Remove local ARRAY_SIZE() definitions

2 years ago  Merge branch 'for-next/insn' into for-next/core
Will Deacon [Mon, 14 Mar 2022 19:00:49 +0000 (19:00 +0000)]
Merge branch 'for-next/insn' into for-next/core

* for-next/insn:
  arm64: insn: add encoders for atomic operations
  arm64: move AARCH64_BREAK_FAULT into insn-def.h
  arm64: insn: Generate 64 bit mask immediates correctly

2 years ago  Merge branch 'for-next/errata' into for-next/core
Will Deacon [Mon, 14 Mar 2022 19:00:44 +0000 (19:00 +0000)]
Merge branch 'for-next/errata' into for-next/core

* for-next/errata:
  arm64: Add cavium_erratum_23154_cpus missing sentinel
  irqchip/gic-v3: Workaround Marvell erratum 38545 when reading IAR

2 years ago  Merge branch 'for-next/docs' into for-next/core
Will Deacon [Mon, 14 Mar 2022 19:00:37 +0000 (19:00 +0000)]
Merge branch 'for-next/docs' into for-next/core

* for-next/docs:
  arm64/mte: Clarify mode reported by PR_GET_TAGGED_ADDR_CTRL
  arm64: booting.rst: Clarify on requiring non-secure EL2

2 years ago  Merge branch 'for-next/coredump' into for-next/core
Will Deacon [Mon, 14 Mar 2022 18:58:46 +0000 (18:58 +0000)]
Merge branch 'for-next/coredump' into for-next/core

* for-next/coredump:
  arm64: Change elfcore for_each_mte_vma() to use VMA iterator
  arm64: mte: Document the core dump file format
  arm64: mte: Dump the MTE tags in the core file
  arm64: mte: Define the number of bytes for storing the tags in a page
  elf: Introduce the ARM MTE ELF segment type
  elfcore: Replace CONFIG_{IA64, UML} checks with a new option

2 years ago  docs: sysfs-devices-system-cpu: document "asymm" value for mte_tcf_preferred
Evgenii Stepanov [Wed, 9 Mar 2022 21:59:43 +0000 (13:59 -0800)]
docs: sysfs-devices-system-cpu: document "asymm" value for mte_tcf_preferred

It was added in commit 766121ba5de3 ("arm64/mte: Add userspace interface
for enabling asymmetric mode").

Signed-off-by: Evgenii Stepanov <eugenis@google.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220309215943.87831-1-eugenis@google.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  arm64: Do not include __READ_ONCE() block in assembly files
Nathan Chancellor [Wed, 9 Mar 2022 19:16:34 +0000 (12:16 -0700)]
arm64: Do not include __READ_ONCE() block in assembly files

When building arm64 defconfig + CONFIG_LTO_CLANG_{FULL,THIN}=y after
commit 558c303c9734 ("arm64: Mitigate spectre style branch history side
channels"), the following error occurs:

  <instantiation>:4:2: error: invalid fixup for movz/movk instruction
   mov w0, #ARM_SMCCC_ARCH_WORKAROUND_3
   ^

Marc figured out that moving "#include <linux/init.h>" in
include/linux/arm-smccc.h into a !__ASSEMBLY__ block resolves it. The
full include chain with CONFIG_LTO=y from include/linux/arm-smccc.h:

include/linux/init.h
include/linux/compiler.h
arch/arm64/include/asm/rwonce.h
arch/arm64/include/asm/alternative-macros.h
arch/arm64/include/asm/assembler.h

The asm/alternative-macros.h include in asm/rwonce.h only happens when
CONFIG_LTO is set, which ultimately causes asm/assembler.h to be
included before the definition of ARM_SMCCC_ARCH_WORKAROUND_3. As a
result, the preprocessor does not expand ARM_SMCCC_ARCH_WORKAROUND_3 in
__mitigate_spectre_bhb_fw, which results in the error above.

Avoid this problem by just avoiding the CONFIG_LTO=y __READ_ONCE() block
in asm/rwonce.h with assembly files, as nothing in that block is useful
to assembly files, which allows ARM_SMCCC_ARCH_WORKAROUND_3 to be
properly expanded with CONFIG_LTO=y builds.
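
A rough sketch of the shape of the fix in asm/rwonce.h (context and the
body of the block abbreviated; exact details may differ):

  #if defined(CONFIG_LTO) && !defined(__ASSEMBLY__)
  #include <linux/compiler_types.h>
  #include <asm/alternative-macros.h>

  /* ... LTO-only __READ_ONCE() acquire variant ... */

  #endif /* CONFIG_LTO && !__ASSEMBLY__ */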

Fixes: e35123d83ee3 ("arm64: lto: Strengthen READ_ONCE() to acquire when CONFIG_LTO=y")
Cc: <stable@vger.kernel.org> # 5.11.x
Link: https://lore.kernel.org/r/20220309155716.3988480-1-maz@kernel.org/
Reported-by: Marc Zyngier <maz@kernel.org>
Acked-by: James Morse <james.morse@arm.com>
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Link: https://lore.kernel.org/r/20220309191633.2307110-1-nathan@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2 years ago  arm64/mte: Remove asymmetric mode from the prctl() interface
Mark Brown [Wed, 9 Mar 2022 13:12:00 +0000 (13:12 +0000)]
arm64/mte: Remove asymmetric mode from the prctl() interface

As pointed out by Evgenii Stepanov, one potential issue with the new ABI for
enabling asymmetric mode is that if there are multiple places where MTE is
configured in a process, some of which were compiled with the old prctl.h
and some of which were compiled with the new prctl.h, there may be problems
keeping track of which MTE modes are requested. For example some code may
disable only sync and async modes leaving asymmetric mode enabled when it
intended to fully disable MTE.

In order to avoid such mishaps remove asymmetric mode from the prctl(),
instead implicitly allowing it if both sync and async modes are requested.
This should not disrupt userspace since a process requesting both may
already see a mix of sync and async modes due to differing defaults between
CPUs or changes in default while the process is running but it does mean
that userspace is unable to explicitly request asymmetric mode without
changing the system default for CPUs.
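
For illustration, on a kernel with this change a process would request both
modes and let the kernel pick asymmetric mode where supported (a minimal
sketch using the existing uapi prctl() flags):

  #include <sys/prctl.h>

  /* Enable tagged addresses plus both SYNC and ASYNC tag checking; on
   * hardware with FEAT_MTE3 the kernel may resolve this pair to
   * asymmetric mode. */
  static int enable_mte_checking(void)
  {
          return prctl(PR_SET_TAGGED_ADDR_CTRL,
                       PR_TAGGED_ADDR_ENABLE |
                       PR_MTE_TCF_SYNC | PR_MTE_TCF_ASYNC,
                       0, 0, 0);
  }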

Reported-by: Evgenii Stepanov <eugenis@google.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Evgenii Stepanov <eugenis@google.com>
Cc: Peter Collingbourne <pcc@google.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Branislav Rankov <branislav.rankov@arm.com>
Link: https://lore.kernel.org/r/20220309131200.112637-1-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  arm64: Add cavium_erratum_23154_cpus missing sentinel
Marc Zyngier [Wed, 9 Mar 2022 18:06:00 +0000 (18:06 +0000)]
arm64: Add cavium_erratum_23154_cpus missing sentinel

Qian Cai reported that playing with CPU hotplug resulted in an
out-of-bounds access due to cavium_erratum_23154_cpus missing
a sentinel indicating the end of the array.

Add it in order to restore peace and harmony in the world
of broken HW.
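
A sketch of the pattern being fixed (the entries shown are illustrative,
not the complete list):

  static const struct midr_range cavium_erratum_23154_cpus[] = {
          MIDR_ALL_VERSIONS(MIDR_THUNDERX),
          MIDR_ALL_VERSIONS(MIDR_THUNDERX_81XX),
          {},     /* sentinel: terminates the walk of the array */
  };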

Reported-by: Qian Cai <quic_qiancai@quicinc.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Fixes: 24a147bcef8c ("irqchip/gic-v3: Workaround Marvell erratum 38545 when reading IAR")
Link: https://lore.kernel.org/r/YijmkXp1VG7e8lDx@qian
Cc: Linu Cherian <lcherian@marvell.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20220309180600.3990874-1-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  perf/marvell: Fix !CONFIG_OF build for CN10K DDR PMU driver
Will Deacon [Wed, 9 Mar 2022 12:31:00 +0000 (12:31 +0000)]
perf/marvell: Fix !CONFIG_OF build for CN10K DDR PMU driver

When compiling the Marvell CN10K DDR PMU driver with CONFIG_OF=n, the
build fails:

  | drivers/perf/marvell_cn10k_ddr_pmu.c:723:35: error: 'cn10k_ddr_pmu_of_match' undeclared here (not in a function); did you mean 'cn10k_ddr_pmu_driver'?

Use `of_match_ptr()` to avoid referencing the non-existent match table
in this configuration.
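
A minimal sketch of the of_match_ptr() pattern (compatible string and
driver fields abbreviated/assumed):

  #ifdef CONFIG_OF
  static const struct of_device_id cn10k_ddr_pmu_of_match[] = {
          { .compatible = "marvell,cn10k-ddr-pmu", },
          { },
  };
  #endif

  static struct platform_driver cn10k_ddr_pmu_driver = {
          .driver = {
                  .name = "cn10k-ddr-pmu",
                  /* of_match_ptr() expands to NULL when CONFIG_OF=n, so
                   * the guarded table above is never referenced */
                  .of_match_table = of_match_ptr(cn10k_ddr_pmu_of_match),
          },
  };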

Link: https://lore.kernel.org/r/202203091424.Vfe8J4W9-lkp@intel.com
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  arm64: mm: Drop 'const' from conditional arm64_dma_phys_limit definition
Will Deacon [Wed, 9 Mar 2022 12:21:37 +0000 (12:21 +0000)]
arm64: mm: Drop 'const' from conditional arm64_dma_phys_limit definition

Commit 031495635b46 ("arm64: Do not defer reserve_crashkernel() for
platforms with no DMA memory zones") introduced different definitions
for 'arm64_dma_phys_limit' depending on CONFIG_ZONE_DMA{,32} based on
a late suggestion from Pasha. Sadly, this results in a build error when
passing W=1:

  | arch/arm64/mm/init.c:90:19: error: conflicting type qualifiers for 'arm64_dma_phys_limit'

Drop the 'const' for now and use '__ro_after_init' consistently.
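
A sketch of the resulting definitions (surrounding context assumed):

  #if IS_ENABLED(CONFIG_ZONE_DMA) || IS_ENABLED(CONFIG_ZONE_DMA32)
  phys_addr_t __ro_after_init arm64_dma_phys_limit;
  #else
  phys_addr_t __ro_after_init arm64_dma_phys_limit = PHYS_MASK + 1;
  #endif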

Link: https://lore.kernel.org/r/202203090241.aj7paWeX-lkp@intel.com
Link: https://lore.kernel.org/r/CA+CK2bDbbx=8R=UthkMesWOST8eJMtOGJdfMRTFSwVmo0Vn0EA@mail.gmail.com
Fixes: 031495635b46 ("arm64: Do not defer reserve_crashkernel() for platforms with no DMA memory zones")
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  Documentation: vmcoreinfo: Fix htmldocs warning
Will Deacon [Wed, 9 Mar 2022 12:16:33 +0000 (12:16 +0000)]
Documentation: vmcoreinfo: Fix htmldocs warning

Since commit 2369f171d5c5 ("arm64: crash_core: Export MODULES, VMALLOC,
and VMEMMAP ranges"), Stephen reports a warning when building htmldocs:

  | Documentation/admin-guide/kdump/vmcoreinfo.rst:498: WARNING: Title underline too short.

Extend the underline to squash the warning.

Fixes: 2369f171d5c5 ("arm64: crash_core: Export MODULES, VMALLOC, and VMEMMAP ranges")
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  kasan: fix a missing header include of static_keys.h
Joey Gouly [Tue, 1 Mar 2022 15:45:18 +0000 (15:45 +0000)]
kasan: fix a missing header include of static_keys.h

The kasan-enabled.h header relies on static keys, so make sure
to include the header to avoid compilation errors (with JUMP_LABEL=n).

It fixes the following:
./include/linux/kasan-enabled.h:9:1: warning: data definition has no type or storage class
    9 | DECLARE_STATIC_KEY_FALSE(kasan_flag_enabled);
      | ^~~~~~~~~~~~~~~~~~~~~~~~
error: type defaults to 'int' in declaration of 'DECLARE_STATIC_KEY_FALSE' [-Werror=implicit-int]
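
As a rough sketch, the fix amounts to pulling in the static key
declarations at the top of the header (header path assumed):

  /* include/linux/kasan-enabled.h */
  #include <linux/static_key.h>

  DECLARE_STATIC_KEY_FALSE(kasan_flag_enabled);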

Fixes: f9b5e46f4097eb29 ("kasan: split kasan_*enabled() functions into a separate header")
Cc: Peter Collingbourne <pcc@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Acked-by: Andrey Konovalov <andreyknvl@gmail.com>
Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Link: https://lore.kernel.org/r/20220301154518.19456-1-joey.gouly@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  Merge branch 'for-next/perf-m1' into for-next/perf
Will Deacon [Tue, 8 Mar 2022 13:33:34 +0000 (13:33 +0000)]
Merge branch 'for-next/perf-m1' into for-next/perf

Support for the CPU PMUs on the Apple M1.

* for-next/perf-m1:
  drivers/perf: Add Apple icestorm/firestorm CPU PMU driver
  drivers/perf: arm_pmu: Handle 47 bit counters
  irqchip/apple-aic: Move PMU-specific registers to their own include file
  arm64: dts: apple: Add t8103 PMU nodes
  arm64: dts: apple: Add t8103 PMU interrupt affinities
  irqchip/apple-aic: Wire PMU interrupts
  irqchip/apple-aic: Parse FIQ affinities from device-tree
  dt-bindings: apple,aic: Add affinity description for per-cpu pseudo-interrupts
  dt-bindings: apple,aic: Add CPU PMU per-cpu pseudo-interrupts
  dt-bindings: arm-pmu: Document Apple PMU compatible strings

2 years ago  drivers/perf: Add Apple icestorm/firestorm CPU PMU driver
Marc Zyngier [Tue, 8 Feb 2022 18:56:04 +0000 (18:56 +0000)]
drivers/perf: Add Apple icestorm/firestorm CPU PMU driver

Add a new, weird and wonderful driver for the equally weird Apple
PMU HW. Although the PMU itself is functional, we don't know much
about the events yet, so this can be considered as yet another
random number generator...

Nonetheless, it can reliably count at least cycles and instructions
in the usually wonky big-little way. For anything else, it of course
supports raw event numbers.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  drivers/perf: arm_pmu: Handle 47 bit counters
Marc Zyngier [Tue, 8 Feb 2022 18:56:03 +0000 (18:56 +0000)]
drivers/perf: arm_pmu: Handle 47 bit counters

The current ARM PMU framework can only deal with 32- or 64-bit counters.
Teach it about a 47-bit flavour.

Yes, this is odd.

Reviewed-by: Hector Martin <marcan@marcan.st>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  Merge branch 'irq/aic-pmu' of git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms into for-next/perf-m1
Will Deacon [Tue, 8 Mar 2022 13:32:28 +0000 (13:32 +0000)]
Merge branch 'irq/aic-pmu' of git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms into for-next/perf-m1

Pull in Apple AIC rework from Marc Zyngier to support PMU interrupts on
the M1 platform.

* 'irq/aic-pmu' of git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms:
  irqchip/apple-aic: Move PMU-specific registers to their own include file
  arm64: dts: apple: Add t8103 PMU nodes
  arm64: dts: apple: Add t8103 PMU interrupt affinities
  irqchip/apple-aic: Wire PMU interrupts
  irqchip/apple-aic: Parse FIQ affinities from device-tree
  dt-bindings: apple,aic: Add affinity description for per-cpu pseudo-interrupts
  dt-bindings: apple,aic: Add CPU PMU per-cpu pseudo-interrupts
  dt-bindings: arm-pmu: Document Apple PMU compatible strings

2 years ago  arm64: perf: Consistently make all event numbers as 16-bits
Shaokun Zhang [Thu, 3 Mar 2022 10:07:10 +0000 (18:07 +0800)]
arm64: perf: Consistently make all event numbers as 16-bits

The Arm ARM documents PMU event numbers as 16 bits in its tables, and more
0x4XXX events have been added to the header file, so use 16 bits for all
event numbers to keep them consistent.

No functional change intended.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Link: https://lore.kernel.org/r/20220303100710.2238-1-zhangshaokun@hisilicon.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  arm64: perf: Expose some Armv9 common events under sysfs
Shaokun Zhang [Thu, 3 Mar 2022 08:54:19 +0000 (16:54 +0800)]
arm64: perf: Expose some Armv9 common events under sysfs

Armv9[1] has introduced some common architectural events (0x400C-0x400F)
and common microarchitectural events (0x4010-0x401B), which can be detected
in PMCEID0_EL0, bits [59:44], so expose these common events under
sysfs.

[1] https://developer.arm.com/documentation/ddi0608/ba

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Link: https://lore.kernel.org/r/20220303085419.64085-1-zhangshaokun@hisilicon.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  perf/marvell: cn10k DDR perf event core ownership
Bharat Bhushan [Fri, 11 Feb 2022 04:53:46 +0000 (10:23 +0530)]
perf/marvell: cn10k DDR perf event core ownership

As the DDR perf event counters are not per-core, they should be accessed
by only one core at a time. Select a new core when the previously owning
core goes offline.

Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
Reviewed-by: Bhaskara Budiredla <bbudiredla@marvell.com>
Link: https://lore.kernel.org/r/20220211045346.17894-5-bbhushan2@marvell.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  perf/marvell: cn10k DDR perfmon event overflow handling
Bharat Bhushan [Fri, 11 Feb 2022 04:53:45 +0000 (10:23 +0530)]
perf/marvell: cn10k DDR perfmon event overflow handling

The CN10k DSS h/w perfmon does not support an event overflow interrupt, so
a periodic timer is used instead. Each event counter is 48 bits wide and in
the worst-case scenario can increment at a maximum of 5.6 GT/s, at which
rate it may still take many hours to overflow. The polling period for
overflow detection is therefore set to 100 seconds, and can be changed via
a sysfs parameter.

The two fixed event counters restart counting from zero on overflow, so the
overflow condition is a new count lower than the previous count. The eight
programmable event counters instead freeze at their maximum value; since an
individual counter cannot be restarted, all eight counters need to be
restarted together.
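
As a rough illustration of the wrap-around handling for the free-running
fixed counters (the macro and helper names are assumed, not the driver's):

  #define DDRC_CNT_WIDTH	48
  #define DDRC_CNT_MASK		GENMASK_ULL(DDRC_CNT_WIDTH - 1, 0)

  /* The fixed counters restart from zero, so a smaller 'now' means the
   * counter wrapped exactly once since 'prev'. */
  static u64 ddrc_counter_delta(u64 prev, u64 now)
  {
          return (now - prev) & DDRC_CNT_MASK;
  }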

Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
Reviewed-by: Bhaskara Budiredla <bbudiredla@marvell.com>
Link: https://lore.kernel.org/r/20220211045346.17894-4-bbhushan2@marvell.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  perf/marvell: CN10k DDR performance monitor support
Bharat Bhushan [Fri, 11 Feb 2022 04:53:44 +0000 (10:23 +0530)]
perf/marvell: CN10k DDR performance monitor support

The Marvell CN10k DRAM Subsystem (DSS) supports eight event counters for
monitoring performance, and software can program each counter to monitor
any of the defined performance events. Performance events cover the
interface between the DDR controller and the PHY, the interface between
the DDR controller and the CHI interconnect, and activity within the DDR
controller itself. Additionally, the DSS supports two fixed performance
event counters, one for the number of DDR reads and the other for DDR
writes.

This patch adds basic support for these performance monitoring events
on CN10k.

Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
Reviewed-by: Bhaskara Budiredla <bbudiredla@marvell.com>
Link: https://lore.kernel.org/r/20220211045346.17894-3-bbhushan2@marvell.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  dt-bindings: perf: marvell: cn10k ddr performance monitor
Bharat Bhushan [Fri, 11 Feb 2022 04:53:43 +0000 (10:23 +0530)]
dt-bindings: perf: marvell: cn10k ddr performance monitor

Add binding documentation for the Marvell CN10k DDR
performance monitor unit.

Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
Reviewed-by: Rob Herring <robh@kernel.org>
Link: https://lore.kernel.org/r/20220211045346.17894-2-bbhushan2@marvell.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  arm64: clean up tools Makefile
Masahiro Yamada [Sun, 27 Feb 2022 08:52:32 +0000 (17:52 +0900)]
arm64: clean up tools Makefile

Remove unused gen-y.

Remove redundant $(shell ...) because 'mkdir' is done in cmd_gen_cpucaps.

Replace $(filter-out $(PHONY), $^) with the $(real-prereqs) shorthand.

Replace the '&&' in cmd_gen_cpucaps with ';' because the command is run
under a 'set -e' environment.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Link: https://lore.kernel.org/r/20220227085232.206529-1-masahiroy@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  perf/arm-cmn: Update watchpoint format
Robin Murphy [Thu, 24 Feb 2022 18:41:22 +0000 (18:41 +0000)]
perf/arm-cmn: Update watchpoint format

From CMN-650 onwards, some of the fields in the watchpoint config
registers moved subtly enough to be easily overlooked. Watchpoint events
are still only partially supported on newer IPs - which in itself deserves
noting - but were not intended to become any *less* functional than on
CMN-600.

Fixes: 60d1504070c2 ("perf/arm-cmn: Support new IP features")
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/e1ce4c2f1e4f73ab1c60c3a85e4037cd62dd6352.1645727871.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  perf/arm-cmn: Hide XP PUB events for CMN-600
Robin Murphy [Thu, 24 Feb 2022 18:41:21 +0000 (18:41 +0000)]
perf/arm-cmn: Hide XP PUB events for CMN-600

CMN-600 doesn't have XP events for the PUB channel, but we missed
the appropriate check to avoid exposing them.

Fixes: 60d1504070c2 ("perf/arm-cmn: Support new IP features")
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/4c108d39a0513def63acccf09ab52b328f242aeb.1645727871.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  arm64: drop unused includes of <linux/personality.h>
Sagar Patel [Mon, 7 Mar 2022 22:24:13 +0000 (17:24 -0500)]
arm64: drop unused includes of <linux/personality.h>

Drop several includes of <linux/personality.h> which are not used.
git-blame indicates they were used at some point, but they're not needed
anymore.

Signed-off-by: Sagar Patel <sagarmp@cs.unc.edu>
Link: https://lore.kernel.org/r/20220307222412.146506-1-sagarmp@cs.unc.edu
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  arm64: Do not defer reserve_crashkernel() for platforms with no DMA memory zones
Vijay Balakrishna [Wed, 2 Mar 2022 17:38:09 +0000 (09:38 -0800)]
arm64: Do not defer reserve_crashkernel() for platforms with no DMA memory zones

The following patches resulted in deferring crash kernel reservation to
mem_init(), mainly aimed at platforms with DMA memory zones (no IOMMU),
in particular Raspberry Pi 4.

commit 1a8e1cef7603 ("arm64: use both ZONE_DMA and ZONE_DMA32")
commit 8424ecdde7df ("arm64: mm: Set ZONE_DMA size based on devicetree's dma-ranges")
commit 0a30c53573b0 ("arm64: mm: Move reserve_crashkernel() into mem_init()")
commit 2687275a5843 ("arm64: Force NO_BLOCK_MAPPINGS if crashkernel reservation is required")

The above changes introduced a boot slowdown due to linear map creation for
all the memory banks with NO_BLOCK_MAPPINGS, see the discussion[1].  The
proposed changes restore crash kernel reservation to the earlier behaviour,
thus avoiding the slow boot, particularly for platforms with an IOMMU (no
DMA memory zones).

Tested the changes to confirm there is no ~150ms boot slowdown on our SoC
with an IOMMU and 8GB of memory.  Also tested with ZONE_DMA and/or
ZONE_DMA32 configs to confirm no regression in the deferred scheme of crash
kernel memory reservation.  In both cases a kernel crash dump was
successfully collected.

[1] https://lore.kernel.org/all/9436d033-579b-55fa-9b00-6f4b661c2dd7@linux.microsoft.com/

Signed-off-by: Vijay Balakrishna <vijayb@linux.microsoft.com>
Cc: stable@vger.kernel.org
Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Link: https://lore.kernel.org/r/1646242689-20744-1-git-send-email-vijayb@linux.microsoft.com
[will: Add #ifdef CONFIG_KEXEC_CORE guards to fix 'crashk_res' references in allnoconfig build]
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  Revert "arm64: Mitigate MTE issues with str{n}cmp()"
Joey Gouly [Tue, 1 Mar 2022 10:14:35 +0000 (10:14 +0000)]
Revert "arm64: Mitigate MTE issues with str{n}cmp()"

This reverts commit 59a68d4138086c015ab8241c3267eec5550fbd44.

Now that the str{n}cmp functions have been updated to handle MTE
properly, the workaround to use the generic functions is no longer
needed.

Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20220301101435.19327-4-joey.gouly@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  arm64: lib: Import latest version of Arm Optimized Routines' strncmp
Joey Gouly [Tue, 1 Mar 2022 10:14:34 +0000 (10:14 +0000)]
arm64: lib: Import latest version of Arm Optimized Routines' strncmp

Import the latest version of the Arm Optimized Routines strncmp function based
on the upstream code of string/aarch64/strncmp.S at commit 189dfefe37d5 from:
  https://github.com/ARM-software/optimized-routines

This latest version includes MTE support.

Note that for simplicity Arm have chosen to contribute this code to Linux under
GPLv2 rather than the original MIT OR Apache-2.0 WITH LLVM-exception license.
Arm is the sole copyright holder for this code.

Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20220301101435.19327-3-joey.gouly@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  arm64: lib: Import latest version of Arm Optimized Routines' strcmp
Joey Gouly [Tue, 1 Mar 2022 10:14:33 +0000 (10:14 +0000)]
arm64: lib: Import latest version of Arm Optimized Routines' strcmp

Import the latest version of the Arm Optimized Routines strcmp function based
on the upstream code of string/aarch64/strcmp.S at commit 189dfefe37d5 from:
  https://github.com/ARM-software/optimized-routines

This latest version includes MTE support.

Note that for simplicity Arm have chosen to contribute this code to Linux under
GPLv2 rather than the original MIT OR Apache-2.0 WITH LLVM-exception license.
Arm is the sole copyright holder for this code.

Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20220301101435.19327-2-joey.gouly@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  kselftest/arm64: Log the PIDs of the parent and child in sve-ptrace
Mark Brown [Thu, 3 Mar 2022 19:28:17 +0000 (19:28 +0000)]
kselftest/arm64: Log the PIDs of the parent and child in sve-ptrace

If the test triggers a problem it may well result in a log message from
the kernel such as a WARN() or BUG(). If these include a PID it can help
with debugging to know if it was the parent or child process that triggered
the issue, since the test is just creating a new thread the process name
will be the same either way. Print the PIDs of the parent and child on
startup so users have this information to hand should it be needed.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Shuah Khan <skhan@linuxfoundation.org>
Link: https://lore.kernel.org/r/20220303192817.2732509-1-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  irqchip/gic-v3: Workaround Marvell erratum 38545 when reading IAR
Linu Cherian [Mon, 7 Mar 2022 14:30:14 +0000 (20:00 +0530)]
irqchip/gic-v3: Workaround Marvell erratum 38545 when reading IAR

When an IAR register read races with a GIC interrupt RELEASE event, the
GIC CPU interface can wrongly return a valid INTID to the CPU for an
interrupt that has already been released (deactivated), instead of 0x3ff.

As a side effect, an interrupt handler could run twice, once with
interrupt priority and then with idle priority.

As a workaround, gic_read_iar is updated so that it will return a valid
interrupt ID only if there is a change in the active priority list after
the IAR read, on all the affected silicons.

Since there are silicon variants where both errata 23154 and 38545 are
applicable, the workaround for erratum 23154 has been extended to address
both of them.
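
A hedged sketch of the check described above (the helper name and register
accessors are assumptions; the upstream code differs in detail):

  static u64 gic_read_iar_apr_check(void)
  {
          u64 apr = read_sysreg_s(SYS_ICC_AP1Rn_EL1(0));
          u64 irqstat = read_sysreg_s(SYS_ICC_IAR1_EL1);

          /* No change in the active priority list: the INTID is stale. */
          if (read_sysreg_s(SYS_ICC_AP1Rn_EL1(0)) == apr)
                  return 0x3ff;   /* report a spurious interrupt */

          return irqstat;
  }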

Signed-off-by: Linu Cherian <lcherian@marvell.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20220307143014.22758-1-lcherian@marvell.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  arm64/mm: Drop use_1G_block()
Anshuman Khandual [Wed, 16 Feb 2022 05:06:52 +0000 (10:36 +0530)]
arm64/mm: Drop use_1G_block()

pud_sect_supported() already checks for PUD level block mapping support,
i.e. on the ARM64_4K_PAGES config. Hence pud_sect_supported(), along with
some other required alignment checks, can completely replace use_1G_block().
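
A sketch of the resulting condition in alloc_init_pud() (context
abbreviated):

  if (pud_sect_supported() &&
      ((addr | next | phys) & ~PUD_MASK) == 0 &&
      (flags & NO_BLOCK_MAPPINGS) == 0)
          pud_set_huge(pudp, phys, prot); /* use a 1G block mapping */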

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/1644988012-25455-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  arm64: avoid flushing icache multiple times on contiguous HugeTLB
Muchun Song [Wed, 2 Mar 2022 08:46:23 +0000 (16:46 +0800)]
arm64: avoid flushing icache multiple times on contiguous HugeTLB

When a contiguous HugeTLB page is mapped, set_pte_at() will be called
CONT_PTES/CONT_PMDS times.  Therefore, __sync_icache_dcache() will
flush the cache multiple times if the page is executable (to ensure
I-D cache coherency).  However, the first cache flush already covers
the subsequent cache flush operations.  So only flush the cache for
the head page if it is a HugeTLB page, to avoid redundant cache
flushing.  The next patch also depends on this change, since the tail
vmemmap pages of a HugeTLB page are mapped read-only, meaning that only
the head page struct can be modified.
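
A sketch of the added check in __sync_icache_dcache() (surrounding code
abbreviated):

  struct page *page = pte_page(pte);

  /* A HugeTLB page is always fully mapped, so flushing and tracking
   * PG_dcache_clean on the head page alone is enough. */
  if (PageHuge(page))
          page = compound_head(page);

  if (!test_bit(PG_dcache_clean, &page->flags)) {
          /* ... existing flush, then set_bit(PG_dcache_clean, ...) ... */
  }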

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20220302084624.33340-1-songmuchun@bytedance.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  arm64: cpufeature: Warn if we attempt to read a zero width field
Mark Brown [Mon, 7 Mar 2022 18:08:59 +0000 (18:08 +0000)]
arm64: cpufeature: Warn if we attempt to read a zero width field

Add a WARN_ON_ONCE() when extracting a field if no width is specified. This
should never happen outside of development, since it will be triggered with
or without the feature so long as the relevant ID register is present. If
the warning triggers, hope that the field was the standard 4 bits wide and
soldier on.
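
A sketch of the fallback (the surrounding helper is assumed):

  int width = entry->field_width;

  if (WARN_ON_ONCE(width == 0))
          width = 4;      /* assume the standard 4-bit field and carry on */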

Suggested-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220307180900.3045812-1-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  arm64: cpufeature: Add missing .field_width for GIC system registers
Mark Brown [Wed, 2 Mar 2022 13:42:25 +0000 (13:42 +0000)]
arm64: cpufeature: Add missing .field_width for GIC system registers

This was missed when making the specification of a field width standard.

Fixes: 0a2eec83c2c23cf6 ("arm64: cpufeature: Always specify and use a field width for capabilities")
Reported-by: Qian Cai <quic_qiancai@quicinc.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220302134225.159217-1-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  arm64: signal: nofpsimd: Do not allocate fp/simd context when not available
David Engraf [Fri, 25 Feb 2022 10:40:08 +0000 (11:40 +0100)]
arm64: signal: nofpsimd: Do not allocate fp/simd context when not available

Commit 6d502b6ba1b2 ("arm64: signal: nofpsimd: Handle fp/simd context for
signal frames") introduced saving the fp/simd context for signal handling
only when support is available. But setup_sigframe_layout() always
reserves memory for the fp/simd context. The additional memory is not
touched because preserve_fpsimd_context() is not called, and thus the
magic is invalid.

This may lead to an error when parse_user_sigframe() checks the fp/simd
area and does not find a valid magic number.

Signed-off-by: David Engraf <david.engraf@sysgo.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Fixes: 6d502b6ba1b267b3 ("arm64: signal: nofpsimd: Handle fp/simd context for signal frames")
Cc: <stable@vger.kernel.org> # 5.6.x
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20220225104008.820289-1-david.engraf@sysgo.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  arm64: prevent instrumentation of bp hardening callbacks
Mark Rutland [Thu, 24 Feb 2022 18:10:28 +0000 (18:10 +0000)]
arm64: prevent instrumentation of bp hardening callbacks

We may call arm64_apply_bp_hardening() early during entry (e.g. in
el0_ia()) before it is safe to run instrumented code. Unfortunately this
may result in running instrumented code in two cases:

* The hardening callbacks called by arm64_apply_bp_hardening() are not
  marked as `noinstr`, and have been observed to be instrumented when
  compiled with either GCC or LLVM.

* Since arm64_apply_bp_hardening() itself is only marked as `inline`
  rather than `__always_inline`, it is possible that the compiler
  decides to place it out-of-line, whereupon it may be instrumented.

For example, with defconfig built with clang 13.0.0,
call_hvc_arch_workaround_1() is compiled as:

| <call_hvc_arch_workaround_1>:
|        d503233f        paciasp
|        f81f0ffe        str     x30, [sp, #-16]!
|        320183e0        mov     w0, #0x80008000
|        d503201f        nop
|        d4000002        hvc     #0x0
|        f84107fe        ldr     x30, [sp], #16
|        d50323bf        autiasp
|        d65f03c0        ret

... but when CONFIG_FTRACE=y and CONFIG_KCOV=y this is compiled as:

| <call_hvc_arch_workaround_1>:
|        d503245f        bti     c
|        d503201f        nop
|        d503201f        nop
|        d503233f        paciasp
|        a9bf7bfd        stp     x29, x30, [sp, #-16]!
|        910003fd        mov     x29, sp
|        94000000        bl      0 <__sanitizer_cov_trace_pc>
|        320183e0        mov     w0, #0x80008000
|        d503201f        nop
|        d4000002        hvc     #0x0
|        a8c17bfd        ldp     x29, x30, [sp], #16
|        d50323bf        autiasp
|        d65f03c0        ret

... with a patchable function entry registered with ftrace, and a direct
call to __sanitizer_cov_trace_pc(). Neither of these are safe early
during entry sequences.

This patch avoids the unsafe instrumentation by marking
arm64_apply_bp_hardening() as `__always_inline` and by marking the
hardening functions as `noinstr`. This avoids the potential for
instrumentation, and causes clang to consistently generate the function
as with the defconfig sample.
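
A sketch of the two markings described above (one callback shown; the
other hardening callbacks are treated the same way):

  static noinstr void call_hvc_arch_workaround_1(void)
  {
          arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
  }

  static __always_inline void arm64_apply_bp_hardening(void)
  {
          /* ... unchanged body, now guaranteed to stay inline ... */
  }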

Note: in the defconfig compilation, when CONFIG_SVE=y, x30 is spilled to
the stack without being placed in a frame record, which will result in a
missing entry if call_hvc_arch_workaround_1() is backtraced. Similar is
true of qcom_link_stack_sanitisation(), where inline asm spills the LR
to a GPR prior to corrupting it. This is not a significant issue
presently as we will only backtrace here if an exception is taken, and
in such cases we may omit entries for other reasons today.

The relevant hardening functions were introduced in commits:

  ec82b567a74fbdff ("arm64: Implement branch predictor hardening for Falkor")
  b092201e00206141 ("arm64: Add ARM_SMCCC_ARCH_WORKAROUND_1 BP hardening support")

... and these were subsequently moved in commit:

  d4647f0a2ad71110 ("arm64: Rewrite Spectre-v2 mitigation code")

The arm64_apply_bp_hardening() function was introduced in commit:

  0f15adbb2861ce6f ("arm64: Add skeleton to harden the branch predictor against aliasing attacks")

... and was subsequently moved and reworked in commit:

  6279017e807708a0 ("KVM: arm64: Move BP hardening helpers into spectre.h")

Fixes: ec82b567a74fbdff ("arm64: Implement branch predictor hardening for Falkor")
Fixes: b092201e00206141 ("arm64: Add ARM_SMCCC_ARCH_WORKAROUND_1 BP hardening support")
Fixes: d4647f0a2ad71110 ("arm64: Rewrite Spectre-v2 mitigation code")
Fixes: 0f15adbb2861ce6f ("arm64: Add skeleton to harden the branch predictor against aliasing attacks")
Fixes: 6279017e807708a0 ("KVM: arm64: Move BP hardening helpers into spectre.h")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220224181028.512873-1-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  arm64: crash_core: Export MODULES, VMALLOC, and VMEMMAP ranges
Huang Shijie [Wed, 9 Feb 2022 09:26:42 +0000 (09:26 +0000)]
arm64: crash_core: Export MODULES, VMALLOC, and VMEMMAP ranges

The following interrelated ranges are needed by the kdump crash tool:
MODULES_VADDR ~ MODULES_END,
VMALLOC_START ~ VMALLOC_END,
VMEMMAP_START ~ VMEMMAP_END

Since these values change from time to time, it is preferable to export
them via vmcoreinfo rather than to change the crash tool's code frequently.
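
A sketch of how such ranges are exported from arch_crash_save_vmcoreinfo()
(only two of the lines shown):

  vmcoreinfo_append_str("NUMBER(MODULES_VADDR)=0x%lx\n", MODULES_VADDR);
  vmcoreinfo_append_str("NUMBER(MODULES_END)=0x%lx\n", MODULES_END);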

Signed-off-by: Huang Shijie <shijie@os.amperecomputing.com>
Link: https://lore.kernel.org/r/20220209092642.9181-1-shijie@os.amperecomputing.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  arm64: proton-pack: Include unprivileged eBPF status in Spectre v2 mitigation reporting
James Morse [Thu, 3 Mar 2022 16:53:56 +0000 (16:53 +0000)]
arm64: proton-pack: Include unprivileged eBPF status in Spectre v2 mitigation reporting

The mitigations for Spectre-BHB are only applied when an exception is
taken from user-space. The mitigation status is reported via the spectre_v2
sysfs vulnerabilities file.

When unprivileged eBPF is enabled the mitigation in the exception vectors
can be avoided by an eBPF program.

When unprivileged eBPF is enabled, print a warning and report vulnerable
via the sysfs vulnerabilities file.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
2 years ago  arm64/mte: Add userspace interface for enabling asymmetric mode
Mark Brown [Wed, 16 Feb 2022 17:32:24 +0000 (17:32 +0000)]
arm64/mte: Add userspace interface for enabling asymmetric mode

The architecture provides an asymmetric mode for MTE where tag mismatches
are checked asynchronously for stores but synchronously for loads. Allow
userspace processes to select this and make it available as a default mode
via the existing per-CPU sysfs interface.

Since the PR_MTE_TCF_ values are a bitmask (allowing the kernel to choose
between the multiple modes) and there are no free bits adjacent to the
existing PR_MTE_TCF_ bits, the set of bits used to specify the mode becomes
disjoint. Programs using the new interface should be aware of this, and
programs that do not use it will not see any change in behaviour.

When userspace requests two possible modes but the system default for the
CPU is the third mode (eg, default is synchronous but userspace requests
either asynchronous or asymmetric) the preference order is:

   ASYMM > ASYNC > SYNC

This situation is not currently possible, since there are only two modes
and it is mandatory to have a system default, so there can be no ambiguity
and there is no ABI change. The chosen order is basically arbitrary, as we
do not have a clear metric for what is better here.

If userspace requests specifically asymmetric mode via the prctl() and the
system does not support it then we will return an error, this mirrors
how we handle the case where userspace enables MTE on a system that does
not support MTE at all and the behaviour that will be seen if running on
an older kernel that does not support userspace use of asymmetric mode.

Attempts to set asymmetric mode as the default mode will result in an error
if the system does not support it.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>
Tested-by: Branislav Rankov <branislav.rankov@arm.com>
Link: https://lore.kernel.org/r/20220216173224.2342152-5-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  arm64/mte: Add hwcap for asymmetric mode
Mark Brown [Wed, 16 Feb 2022 17:32:23 +0000 (17:32 +0000)]
arm64/mte: Add hwcap for asymmetric mode

Allow userspace to detect support for asymmetric mode by providing a hwcap
for it, using the official feature name FEAT_MTE3.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>
Tested-by: Branislav Rankov <branislav.rankov@arm.com>
Link: https://lore.kernel.org/r/20220216173224.2342152-4-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  arm64/mte: Add a little bit of documentation for mte_update_sctlr_user()
Mark Brown [Wed, 16 Feb 2022 17:32:22 +0000 (17:32 +0000)]
arm64/mte: Add a little bit of documentation for mte_update_sctlr_user()

The code isn't that obscure but it probably won't hurt to have a little
bit more documentation for anyone trying to find out where everything
actually takes effect.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>
Tested-by: Branislav Rankov <branislav.rankov@arm.com>
Link: https://lore.kernel.org/r/20220216173224.2342152-3-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  arm64/mte: Document ABI for asymmetric mode
Mark Brown [Wed, 16 Feb 2022 17:32:21 +0000 (17:32 +0000)]
arm64/mte: Document ABI for asymmetric mode

MTE3 adds a new mode which is synchronous for reads but asynchronous for
writes. Document the userspace ABI for this feature, we call the new
mode ASYMM and add a new prctl flag and mte_tcf_preferred value for it.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20220216173224.2342152-2-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  kselftest/arm64: signal: Allow tests to be incompatible with features
Mark Brown [Mon, 7 Feb 2022 15:20:34 +0000 (15:20 +0000)]
kselftest/arm64: signal: Allow tests to be incompatible with features

Some features may invalidate some tests, for example by supporting an
operation which would trap otherwise. Allow tests to list features that
they are incompatible with so we can cover the case where a signal will
be generated without disruption on systems where that won't happen.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Shuah Khan <skhan@linuxfoundation.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20220207152109.197566-6-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  arm64: cpufeature: Always specify and use a field width for capabilities
Mark Brown [Mon, 7 Feb 2022 15:20:32 +0000 (15:20 +0000)]
arm64: cpufeature: Always specify and use a field width for capabilities

Since all the fields in the main ID registers are 4 bits wide we have up
until now not bothered specifying the width in the code. Since we now
wish to use this mechanism to enumerate features from the floating point
feature registers which do not follow this pattern add a width to the
table.  This means updating all the existing table entries but makes it
less likely that we run into issues in future due to implicitly assuming
a 4 bit width.
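
A sketch of a table entry with the now-explicit width (the capability and
register shown are illustrative, not a real entry):

  {
          .desc = "Example feature",
          .capability = ARM64_HAS_EXAMPLE,        /* illustrative */
          .sys_reg = SYS_ID_AA64PFR0_EL1,
          .field_pos = ID_AA64PFR0_FP_SHIFT,
          .field_width = 4,                       /* now spelled out */
          .sign = FTR_SIGNED,
          .min_field_value = 0,
          .matches = has_cpuid_feature,
  },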

Signed-off-by: Mark Brown <broonie@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20220207152109.197566-4-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  arm64: Always use individual bits in CPACR floating point enables
Mark Brown [Mon, 7 Feb 2022 15:20:31 +0000 (15:20 +0000)]
arm64: Always use individual bits in CPACR floating point enables

CPACR_EL1 has several bitfields for controlling traps for floating point
features to EL1, each of which has a separate bits for EL0 and EL1. Marc
Zyngier noted that we are not consistent in our use of defines to
manipulate these, sometimes using a define covering the whole field and
sometimes using defines for the individual bits. Make this consistent by
expanding the whole field defines where they are used (currently only in
the KVM code) and deleting them so that no further uses can be
introduced.

Suggested-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20220207152109.197566-3-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  arm64: Define CPACR_EL1_FPEN similarly to other floating point controls
Mark Brown [Mon, 7 Feb 2022 15:20:30 +0000 (15:20 +0000)]
arm64: Define CPACR_EL1_FPEN similarly to other floating point controls

The base floating point, SVE and SME all have enable controls for EL0 and
EL1 in CPACR_EL1 which have a similar layout and function. Currently the
basic floating point enable FPEN is defined differently to the SVE control,
specified as a single define in kvm_arm.h rather than in sysreg.h. Move the
define to sysreg.h and provide separate EL0 and EL1 control bits so code
managing the different floating point enables can look consistent.
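
A sketch of the resulting sysreg.h definitions (CPACR_EL1.FPEN occupies
bits [21:20]):

  #define CPACR_EL1_FPEN_EL1EN	(BIT(20)) /* enable EL1 access */
  #define CPACR_EL1_FPEN_EL0EN	(BIT(21)) /* enable EL0 access, if EL1EN set */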

Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20220207152109.197566-2-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  arm64: module: remove (NOLOAD) from linker script
Fangrui Song [Fri, 18 Feb 2022 08:12:09 +0000 (00:12 -0800)]
arm64: module: remove (NOLOAD) from linker script

On ELF, (NOLOAD) sets the section type to SHT_NOBITS[1]. It is conceptually
inappropriate for .plt and .text.* sections which are always
SHT_PROGBITS.

In GNU ld, if PLT entries are needed, .plt will be SHT_PROGBITS anyway
and (NOLOAD) will be essentially ignored. In ld.lld, since
https://reviews.llvm.org/D118840 ("[ELF] Support (TYPE=<value>) to
customize the output section type"), ld.lld will report a `section type
mismatch` error. Just remove (NOLOAD) to fix the error.

[1] https://lld.llvm.org/ELF/linker_script.html

As of today, "The section should be marked as not loadable" on
https://sourceware.org/binutils/docs/ld/Output-Section-Type.html is
outdated for ELF.
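
A sketch of the change in the module linker script (section contents
abbreviated):

  .plt 0 (NOLOAD) : { BYTE(0) }   /* before: forces SHT_NOBITS */
  .plt 0 : { BYTE(0) }            /* after: .plt stays SHT_PROGBITS */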

Tested-by: Nathan Chancellor <nathan@kernel.org>
Reported-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Fangrui Song <maskray@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20220218081209.354383-1-maskray@google.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  arm64: cpufeature: Remove cpu_has_fwb() check
Vladimir Murzin [Thu, 24 Feb 2022 16:47:39 +0000 (16:47 +0000)]
arm64: cpufeature: Remove cpu_has_fwb() check

cpu_has_fwb() is supposed to warn the user if the following architectural
requirement is not met:

LoUU, bits [29:27] - Level of Unification Uniprocessor for the cache
                     hierarchy.

  Note

    When FEAT_S2FWB is implemented, the architecture requires that
    this field is zero so that no levels of data cache need to be
    cleaned in order to manage coherency with instruction fetches.

LoUIS, bits [23:21] - Level of Unification Inner Shareable for the
                      cache hierarchy.

  Note

    When FEAT_S2FWB is implemented, the architecture requires that
    this field is zero so that no levels of data cache need to be
    cleaned in order to manage coherency with instruction fetches.

It is not really clear what the user has to do if the assertion fires.
Having assertions about the CPU design like this inspires even more
assertions to be added, and the kernel is definitely not the right place
for that, so let's remove cpu_has_fwb() altogether.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Link: https://lore.kernel.org/r/20220224164739.119168-1-vladimir.murzin@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  arm64: Add support of PAuth QARMA3 architected algorithm
Vladimir Murzin [Thu, 24 Feb 2022 12:49:52 +0000 (12:49 +0000)]
arm64: Add support of PAuth QARMA3 architected algorithm

QARMA3 is a relaxed version of the QARMA5 algorithm, which is expected to
reduce the latency of calculation while still delivering a suitable
level of security.

Support for QARMA3 can be discovered via ID_AA64ISAR2_EL1

    APA3, bits [15:12] Indicates whether the QARMA3 algorithm is
                       implemented in the PE for address
                       authentication in AArch64 state.

    GPA3, bits [11:8]  Indicates whether the QARMA3 algorithm is
                       implemented in the PE for generic code
                       authentication in AArch64 state.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220224124952.119612-4-vladimir.murzin@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  arm64: cpufeature: Mark existing PAuth architected algorithm as QARMA5
Vladimir Murzin [Thu, 24 Feb 2022 12:49:51 +0000 (12:49 +0000)]
arm64: cpufeature: Mark existing PAuth architected algorithm as QARMA5

In preparation for supporting the PAuth QARMA3 architected algorithm, mark
the existing one as QARMA5 so we can distinguish between the two.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220224124952.119612-3-vladimir.murzin@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  arm64: cpufeature: Account min_field_value when checking secondaries for PAuth
Vladimir Murzin [Thu, 24 Feb 2022 12:49:50 +0000 (12:49 +0000)]
arm64: cpufeature: Account min_field_value when checking secondaries for PAuth

In case both boot_val and sec_val have a value below min_field_value, we
would wrongly report that address authentication is supported. It is not
a big issue because we enable address authentication based on the boot
CPU (and the check there is correct).

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220224124952.119612-2-vladimir.murzin@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years ago  arm64: Change elfcore for_each_mte_vma() to use VMA iterator
Liam Howlett [Fri, 18 Feb 2022 02:37:04 +0000 (02:37 +0000)]
arm64: Change elfcore for_each_mte_vma() to use VMA iterator

Rework for_each_mte_vma() to use a VMA iterator instead of an explicit
linked-list. This will allow easy integration with the maple tree work
which removes the VMA list altogether.

Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20220218023650.672072-1-Liam.Howlett@oracle.com
[will: Folded in fix from Catalin]
Link: https://lore.kernel.org/r/YhUcywqIhmHvX6dG@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: Use the clearbhb instruction in mitigations
James Morse [Fri, 10 Dec 2021 14:32:56 +0000 (14:32 +0000)]
arm64: Use the clearbhb instruction in mitigations

Future CPUs may implement a clearbhb instruction that is sufficient
to mitigate SpectreBHB. CPUs that implement this instruction, but
not CSV2.3, must be affected by Spectre-BHB.

Add support to use this instruction as the BHB mitigation on CPUs
that support it. The instruction is in the hint space, so it will
be treated as a NOP by older CPUs.
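
Since the instruction lives in the hint space, it is always safe to
emit; for illustration, a minimal C wrapper might look like this (the
hint #22 encoding for CLRBHB is an assumption here):

static __always_inline void clearbhb(void)
{
	asm volatile("hint #22" : : : "memory");
}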

Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
2 years agoKVM: arm64: Allow SMCCC_ARCH_WORKAROUND_3 to be discovered and migrated
James Morse [Fri, 10 Dec 2021 11:16:18 +0000 (11:16 +0000)]
KVM: arm64: Allow SMCCC_ARCH_WORKAROUND_3 to be discovered and migrated

KVM allows the guest to discover whether the ARCH_WORKAROUND SMCCC
calls are implemented, and to preserve that state during migration
through its firmware register interface.

Add the necessary boilerplate for SMCCC_ARCH_WORKAROUND_3.

Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
2 years agoarm64: Mitigate spectre style branch history side channels
James Morse [Wed, 10 Nov 2021 14:48:00 +0000 (14:48 +0000)]
arm64: Mitigate spectre style branch history side channels

Speculation attacks against some high-performance processors can
make use of branch history to influence future speculation.
When taking an exception from user-space, a sequence of branches
or a firmware call overwrites or invalidates the branch history.

The sequence of branches is added to the vectors, and should appear
before the first indirect branch. For systems using KPTI the sequence
is added to the kpti trampoline where it has a free register as the exit
from the trampoline is via a 'ret'. For systems not using KPTI, the same
register tricks are used to free up a register in the vectors.

For the firmware call, arch-workaround-3 clobbers 4 registers, so
there is no choice but to save them to the EL1 stack. This only happens
for entry from EL0, so if we take an exception due to the stack access,
it will not become re-entrant.

For KVM, the existing branch-predictor-hardening vectors are used.
When a spectre version of these vectors is in use, the firmware call
is sufficient to mitigate against Spectre-BHB. For the non-spectre
versions, the sequence of branches is added to the indirect vector.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
2 years agoarm64/hugetlb: Define __hugetlb_valid_size()
Anshuman Khandual [Thu, 17 Feb 2022 04:52:37 +0000 (10:22 +0530)]
arm64/hugetlb: Define __hugetlb_valid_size()

arch_hugetlb_valid_size() can simply be factored out to create another
helper that can also be used in arch_hugetlb_migration_supported().
This just defines __hugetlb_valid_size() for that purpose.
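
A sketch of the resulting factoring (a reconstruction; the exact size
checks live in arch/arm64/mm/hugetlbpage.c):

static bool __hugetlb_valid_size(unsigned long size)
{
	switch (size) {
#ifndef __PAGETABLE_PMD_FOLDED
	case PUD_SIZE:
		return pud_sect_supported();
#endif
	case CONT_PMD_SIZE:
	case PMD_SIZE:
	case CONT_PTE_SIZE:
		return true;
	}
	return false;
}

bool arch_hugetlb_valid_size(unsigned long size)
{
	return __hugetlb_valid_size(size);
}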

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/1645073557-6150-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: mte: avoid clearing PSTATE.TCO on entry unless necessary
Peter Collingbourne [Sat, 19 Feb 2022 01:29:45 +0000 (17:29 -0800)]
arm64: mte: avoid clearing PSTATE.TCO on entry unless necessary

On some microarchitectures, clearing PSTATE.TCO is expensive. Clearing
TCO is only necessary if in-kernel MTE is enabled, or if MTE is
enabled in the userspace process in synchronous (or, soon, asymmetric)
mode, because we do not report uaccess faults to userspace in none
or asynchronous modes. Therefore, adjust the kernel entry code to
clear TCO only if necessary.

Because it is now possible to switch to a task in which TCO needs to
be clear from a task in which TCO is set, we also need to do the same
thing on task switch.
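
A sketch of the entry-path condition (helper and field names are
assumptions based on the description above):

static inline void mte_disable_tco_entry(struct task_struct *task)
{
	if (!system_supports_mte())
		return;

	/* TCO only needs clearing for in-kernel MTE, or for tasks
	 * using synchronous (or asymmetric) tag checking. */
	if (kasan_hw_tags_enabled() ||
	    (task->thread.sctlr_user & (1UL << SCTLR_EL1_TCF0_SHIFT)))
		asm volatile(SET_PSTATE_TCO(0));
}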

Signed-off-by: Peter Collingbourne <pcc@google.com>
Link: https://linux-review.googlesource.com/id/I52d82a580bd0500d420be501af2c35fa8c90729e
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20220219012945.894950-2-pcc@google.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agokasan: split kasan_*enabled() functions into a separate header
Peter Collingbourne [Sat, 19 Feb 2022 01:29:44 +0000 (17:29 -0800)]
kasan: split kasan_*enabled() functions into a separate header

In an upcoming commit we are going to need to call
kasan_hw_tags_enabled() from arch/arm64/include/asm/mte.h. This
would create a circular dependency between headers if KASAN_GENERIC
or KASAN_SW_TAGS is enabled: linux/kasan.h -> linux/pgtable.h ->
asm/pgtable.h -> asm/mte.h -> linux/kasan.h. Break the cycle
by introducing a new header linux/kasan-enabled.h with the
kasan_*enabled() functions that can be included from asm/mte.h.
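
A sketch of the new header (close to, but not necessarily verbatim,
the final linux/kasan-enabled.h):

#ifndef _LINUX_KASAN_ENABLED_H
#define _LINUX_KASAN_ENABLED_H

#include <linux/static_key.h>

#ifdef CONFIG_KASAN_HW_TAGS
DECLARE_STATIC_KEY_FALSE(kasan_flag_enabled);

static __always_inline bool kasan_hw_tags_enabled(void)
{
	return static_branch_likely(&kasan_flag_enabled);
}
#else
static inline bool kasan_hw_tags_enabled(void)
{
	return false;
}
#endif

#endif /* _LINUX_KASAN_ENABLED_H */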

Link: https://linux-review.googlesource.com/id/I5b0d96c6ed0026fc790899e14d42b2fac6ab568e
Signed-off-by: Peter Collingbourne <pcc@google.com>
Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
Link: https://lore.kernel.org/r/20220219012945.894950-1-pcc@google.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: insn: add encoders for atomic operations
Hou Tao [Thu, 17 Feb 2022 07:22:30 +0000 (15:22 +0800)]
arm64: insn: add encoders for atomic operations

This is a preparation patch for eBPF atomics support on arm64. eBPF
needs to support atomic[64]_fetch_add, atomic[64]_[fetch_]{and,or,xor}
and atomic[64]_{xchg|cmpxchg}. The ordering semantics of eBPF atomics
are the same as the implementations in the Linux kernel.

Add three helpers to support LDCLR/LDEOR/LDSET/SWP, CAS and DMB
instructions. STADD/STCLR/STEOR/STSET are simply encoded as aliases for
LDADD/LDCLR/LDEOR/LDSET with XZR as the destination register, so no
extra helper is added. atomic_fetch_add() and other atomic ops need
support for the STLXR instruction, so extend enum aarch64_insn_ldst_type
to cover that.

LDADD/LDEOR/LDSET/SWP and CAS instructions are only available when LSE
atomics is enabled, so just return AARCH64_BREAK_FAULT directly in
these newly-added helpers if CONFIG_ARM64_LSE_ATOMICS is disabled.
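
The fallback pattern described above, sketched (the signature is an
approximation of this series' encoder, not a verbatim copy):

#ifndef CONFIG_ARM64_LSE_ATOMICS
static inline u32 aarch64_insn_gen_atomic_ld_op(
		enum aarch64_insn_register result,
		enum aarch64_insn_register address,
		enum aarch64_insn_register value,
		enum aarch64_insn_size_type size,
		enum aarch64_insn_mem_atomic_op op,
		enum aarch64_insn_mem_order_type order)
{
	return AARCH64_BREAK_FAULT;
}
#endif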

Signed-off-by: Hou Tao <houtao1@huawei.com>
Link: https://lore.kernel.org/r/20220217072232.1186625-3-houtao1@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: move AARCH64_BREAK_FAULT into insn-def.h
Hou Tao [Thu, 17 Feb 2022 07:22:29 +0000 (15:22 +0800)]
arm64: move AARCH64_BREAK_FAULT into insn-def.h

If CONFIG_ARM64_LSE_ATOMICS is off, encoders for LSE-related
instructions can return AARCH64_BREAK_FAULT directly in insn.h. In
order to access AARCH64_BREAK_FAULT in insn.h, we cannot include
debug-monitors.h in insn.h, because debug-monitors.h already depends on
insn.h, so just move AARCH64_BREAK_FAULT into insn-def.h.

It will be used by the following patch to eliminate unnecessary LSE-related
encoders when CONFIG_ARM64_LSE_ATOMICS is off.

Signed-off-by: Hou Tao <houtao1@huawei.com>
Link: https://lore.kernel.org/r/20220217072232.1186625-2-houtao1@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agolinkage: remove SYM_FUNC_{START,END}_ALIAS()
Mark Rutland [Wed, 16 Feb 2022 16:22:29 +0000 (16:22 +0000)]
linkage: remove SYM_FUNC_{START,END}_ALIAS()

Now that all aliases are defined using SYM_FUNC_ALIAS(), remove the old
SYM_FUNC_{START,END}_ALIAS() macros.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Acked-by: Mark Brown <broonie@kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jiri Slaby <jslaby@suse.cz>
Cc: Peter Zijlstra <peterz@infradead.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20220216162229.1076788-5-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agox86: clean up symbol aliasing
Mark Rutland [Wed, 16 Feb 2022 16:22:28 +0000 (16:22 +0000)]
x86: clean up symbol aliasing

Now that we have SYM_FUNC_ALIAS() and SYM_FUNC_ALIAS_WEAK(), use those
to simplify the definition of function aliases across arch/x86.

For clarity, where there are multiple annotations such as
EXPORT_SYMBOL(), I've tried to keep annotations grouped by symbol. For
example, where a function has a name and an alias which are both
exported, this is organised as:

SYM_FUNC_START(func)
    ... asm insns ...
SYM_FUNC_END(func)
EXPORT_SYMBOL(func)

SYM_FUNC_ALIAS(alias, func)
EXPORT_SYMBOL(alias)

Where there are only aliases and no exports or other annotations, I have
not bothered with line spacing, e.g.

SYM_FUNC_START(func)
    ... asm insns ...
SYM_FUNC_END(func)
SYM_FUNC_ALIAS(alias, func)

The tools/perf/ copies of memcpy_64.S and memset_64.S are updated
likewise to avoid the build system complaining that these are
mismatched:

| Warning: Kernel ABI header at 'tools/arch/x86/lib/memcpy_64.S' differs from latest version at 'arch/x86/lib/memcpy_64.S'
| diff -u tools/arch/x86/lib/memcpy_64.S arch/x86/lib/memcpy_64.S
| Warning: Kernel ABI header at 'tools/arch/x86/lib/memset_64.S' differs from latest version at 'arch/x86/lib/memset_64.S'
| diff -u tools/arch/x86/lib/memset_64.S arch/x86/lib/memset_64.S

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Acked-by: Mark Brown <broonie@kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Slaby <jslaby@suse.cz>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20220216162229.1076788-4-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: clean up symbol aliasing
Mark Rutland [Wed, 16 Feb 2022 16:22:27 +0000 (16:22 +0000)]
arm64: clean up symbol aliasing

Now that we have SYM_FUNC_ALIAS() and SYM_FUNC_ALIAS_WEAK(), use those
to simplify and more consistently define function aliases across
arch/arm64.

Aliases are now defined in terms of a canonical function name. For
position-independent functions I've made the __pi_<func> name the
canonical name, and defined other aliases in terms of this.

The SYM_FUNC_{START,END}_PI(func) macros obscure the __pi_<func> name,
and make this hard to search for. The SYM_FUNC_START_WEAK_PI() macro
also obscures the fact that the __pi_<func> symbol is global and the
<func> symbol is weak. For clarity, I have removed these macros and used
SYM_FUNC_{START,END}() directly with the __pi_<func> name.

For example:

SYM_FUNC_START_WEAK_PI(func)
... asm insns ...
SYM_FUNC_END_PI(func)
EXPORT_SYMBOL(func)

... becomes:

SYM_FUNC_START(__pi_func)
... asm insns ...
SYM_FUNC_END(__pi_func)

SYM_FUNC_ALIAS_WEAK(func, __pi_func)
EXPORT_SYMBOL(func)

For clarity, where there are multiple annotations such as
EXPORT_SYMBOL(), I've tried to keep annotations grouped by symbol. For
example, where a function has a name and an alias which are both
exported, this is organised as:

SYM_FUNC_START(func)
... asm insns ...
SYM_FUNC_END(func)
EXPORT_SYMBOL(func)

SYM_FUNC_ALIAS(alias, func)
EXPORT_SYMBOL(alias)

For consistency with the other string functions, I've defined strrchr as
a position-independent function, as it can safely be used as such even
though we have no users today.

As we no longer use SYM_FUNC_{START,END}_ALIAS(), our local copies are
removed. The common versions will be removed by a subsequent patch.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Acked-by: Mark Brown <broonie@kernel.org>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Will Deacon <will@kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20220216162229.1076788-3-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agolinkage: add SYM_FUNC_ALIAS{,_LOCAL,_WEAK}()
Mark Rutland [Wed, 16 Feb 2022 16:22:26 +0000 (16:22 +0000)]
linkage: add SYM_FUNC_ALIAS{,_LOCAL,_WEAK}()

Currently aliasing an asm function requires adding START and END
annotations for each name, as per Documentation/asm-annotations.rst:

SYM_FUNC_START_ALIAS(__memset)
SYM_FUNC_START(memset)
    ... asm insns ...
SYM_FUNC_END(memset)
SYM_FUNC_END_ALIAS(__memset)

This is more painful than necessary to maintain, especially where a
function has many aliases, some of which we may wish to define
conditionally. For example, arm64's memcpy/memmove implementation (which
uses some arch-specific SYM_*() helpers) has:

SYM_FUNC_START_ALIAS(__memmove)
SYM_FUNC_START_ALIAS_WEAK_PI(memmove)
SYM_FUNC_START_ALIAS(__memcpy)
SYM_FUNC_START_WEAK_PI(memcpy)
    ... asm insns ...
SYM_FUNC_END_PI(memcpy)
EXPORT_SYMBOL(memcpy)
SYM_FUNC_END_ALIAS(__memcpy)
EXPORT_SYMBOL(__memcpy)
SYM_FUNC_END_ALIAS_PI(memmove)
EXPORT_SYMBOL(memmove)
SYM_FUNC_END_ALIAS(__memmove)
EXPORT_SYMBOL(__memmove)

It would be much nicer if we could define the aliases *after* the
standard function definition. This would avoid the need to specify each
symbol name twice, and would make it easier to spot the canonical
function definition.

This patch adds new macros to allow us to do so, which allows the above
example to be rewritten more succinctly as:

SYM_FUNC_START(__pi_memcpy)
    ... asm insns ...
SYM_FUNC_END(__pi_memcpy)

SYM_FUNC_ALIAS(__memcpy, __pi_memcpy)
EXPORT_SYMBOL(__memcpy)
SYM_FUNC_ALIAS_WEAK(memcpy, __memcpy)
EXPORT_SYMBOL(memcpy)

SYM_FUNC_ALIAS(__pi_memmove, __pi_memcpy)
SYM_FUNC_ALIAS(__memmove, __pi_memmove)
EXPORT_SYMBOL(__memmove)
SYM_FUNC_ALIAS_WEAK(memmove, __memmove)
EXPORT_SYMBOL(memmove)

The reduction in duplication will also make it possible to replace some
uses of WEAK with more accurate Kconfig guards, e.g.

#ifndef CONFIG_KASAN
SYM_FUNC_ALIAS(memmove, __memmove)
EXPORT_SYMBOL(memmove)
#endif

... which should make it easier to ensure that symbols are neither
used nor overridden unexpectedly.

The existing SYM_FUNC_START_ALIAS() and SYM_FUNC_START_LOCAL_ALIAS() are
marked as deprecated, and will be removed once existing users are moved
over to the new scheme.

The tools/perf/ copy of linkage.h is updated to match. A subsequent
patch will depend upon this when updating the x86 asm annotations.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Acked-by: Mark Brown <broonie@kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jiri Slaby <jslaby@suse.cz>
Cc: Peter Zijlstra <peterz@infradead.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20220216162229.1076788-2-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: proton-pack: Report Spectre-BHB vulnerabilities as part of Spectre-v2
James Morse [Tue, 8 Feb 2022 16:08:13 +0000 (16:08 +0000)]
arm64: proton-pack: Report Spectre-BHB vulnerabilities as part of Spectre-v2

Speculation attacks against some high-performance processors can
make use of branch history to influence future speculation as part of
a spectre-v2 attack. This is not mitigated by CSV2, meaning CPUs that
previously reported 'Not affected' are in fact only moderately
mitigated by CSV2.

Update the value in /sys/devices/system/cpu/vulnerabilities/spectre_v2
to also show the state of the BHB mitigation.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
2 years agoarm64: Add percpu vectors for EL1
James Morse [Tue, 23 Nov 2021 18:29:25 +0000 (18:29 +0000)]
arm64: Add percpu vectors for EL1

The Spectre-BHB workaround adds a firmware call to the vectors. This
is needed on some CPUs, but not others. To avoid the unaffected CPU in
a big/little pair making the firmware call, create per-cpu vectors.

The per-cpu vectors only apply when returning from EL0.

Systems using KPTI can use the canonical 'full-fat' vectors directly at
EL1, the trampoline exit code will switch to this_cpu_vector on exit to
EL0. Systems not using KPTI should always use this_cpu_vector.

this_cpu_vector will point at a vector in tramp_vecs or
__bp_harden_el1_vectors, depending on whether KPTI is in use.
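
A sketch of the per-cpu pointer (a reconstruction; 'vectors' is the
regular vector base and the symbol names follow this series):

DEFINE_PER_CPU_READ_MOSTLY(const char *, this_cpu_vector) = vectors;

/* An affected CPU later repoints its entry at a mitigated set: */
__this_cpu_write(this_cpu_vector, __bp_harden_el1_vectors);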

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
2 years agoarm64: entry: Add macro for reading symbol addresses from the trampoline
James Morse [Thu, 25 Nov 2021 14:25:34 +0000 (14:25 +0000)]
arm64: entry: Add macro for reading symbol addresses from the trampoline

The trampoline code needs to use the address of symbols in the wider
kernel, e.g. vectors. PC-relative addressing wouldn't work as the
trampoline code doesn't run at the address the linker expected.

tramp_ventry uses a literal pool, unless CONFIG_RANDOMIZE_BASE is
set, in which case it uses the data page as a literal pool because
the data page can be unmapped when running in user-space, which is
required for CPUs vulnerable to meltdown.

Pull this logic out as a macro, instead of adding a third copy
of it.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
2 years agoarm64: entry: Add vectors that have the bhb mitigation sequences
James Morse [Thu, 18 Nov 2021 13:59:46 +0000 (13:59 +0000)]
arm64: entry: Add vectors that have the bhb mitigation sequences

Some CPUs affected by Spectre-BHB need a sequence of branches, or a
firmware call to be run before any indirect branch. This needs to go
in the vectors. No CPU needs both.

While this can be patched in, it would run on all CPUs as there is a
single set of vectors. If only one part of a big/little combination is
affected, the unaffected CPUs have to run the mitigation too.

Create extra vectors that include the sequence. Subsequent patches will
allow affected CPUs to select this set of vectors. Later patches will
modify the loop count to match what the CPU requires.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
2 years agoarm64: mte: Document the core dump file format
Catalin Marinas [Mon, 31 Jan 2022 16:54:56 +0000 (16:54 +0000)]
arm64: mte: Document the core dump file format

Add the program header definition and data layout for the
PT_ARM_MEMTAG_MTE segments.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Luis Machado <luis.machado@linaro.org>
Link: https://lore.kernel.org/r/20220131165456.2160675-6-catalin.marinas@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: mte: Dump the MTE tags in the core file
Catalin Marinas [Mon, 31 Jan 2022 16:54:55 +0000 (16:54 +0000)]
arm64: mte: Dump the MTE tags in the core file

For each vma mapped with PROT_MTE (the VM_MTE flag set), generate a
PT_ARM_MEMTAG_MTE segment in the core file and dump the corresponding
tags. The in-file size for such segments is 128 bytes per page.

For pages in a VM_MTE vma which are not present in the user page tables
or don't have the PG_mte_tagged flag set (e.g. execute-only), just write
zeros in the core file.

An example of program headers for two vmas, one 2-page, the other 4-page
long:

  Type           Offset   VirtAddr           PhysAddr           FileSiz  MemSiz   Flg Align
  ...
  LOAD           0x030000 0x0000ffff80034000 0x0000000000000000 0x000000 0x002000 RW  0x1000
  LOAD           0x030000 0x0000ffff80036000 0x0000000000000000 0x004000 0x004000 RW  0x1000
  ...
  LOPROC+0x1     0x05b000 0x0000ffff80034000 0x0000000000000000 0x000100 0x002000     0
  LOPROC+0x1     0x05b100 0x0000ffff80036000 0x0000000000000000 0x000200 0x004000     0

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Luis Machado <luis.machado@linaro.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220131165456.2160675-5-catalin.marinas@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: mte: Define the number of bytes for storing the tags in a page
Catalin Marinas [Mon, 31 Jan 2022 16:54:54 +0000 (16:54 +0000)]
arm64: mte: Define the number of bytes for storing the tags in a page

Rather than explicitly calculating the number of bytes for a compact tag
storage format corresponding to a page, just add a MTE_PAGE_TAG_STORAGE
macro. With the current MTE implementation of 4 bits per tag, we store
2 tags in a byte.
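
In macro form (matching the arithmetic above: a 4K page has 4096/16 =
256 granules, and 256 tags * 4 bits = 128 bytes of tag storage):

#define MTE_GRANULE_SIZE	UL(16)
#define MTE_TAG_SIZE		4	/* bits per tag */
#define MTE_GRANULES_PER_PAGE	(PAGE_SIZE / MTE_GRANULE_SIZE)
#define MTE_PAGE_TAG_STORAGE	(MTE_GRANULES_PER_PAGE * MTE_TAG_SIZE / 8)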

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Luis Machado <luis.machado@linaro.org>
Link: https://lore.kernel.org/r/20220131165456.2160675-4-catalin.marinas@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoelf: Introduce the ARM MTE ELF segment type
Catalin Marinas [Mon, 31 Jan 2022 16:54:53 +0000 (16:54 +0000)]
elf: Introduce the ARM MTE ELF segment type

Memory tags will be dumped in the core file as segments with their own
type. Discussions with the binutils and the generic ABI community
settled on using new definitions in the PT_*PROC space (and to be
documented in the processor-specific ABIs).

Introduce PT_ARM_MEMTAG_MTE as (PT_LOPROC + 0x1). Not included in this
patch since there is no upstream support but the CHERI/BSD community
will also reserve:

  #define PT_ARM_MEMTAG_CHERI    (PT_LOPROC + 0x2)
  #define PT_RISCV_MEMTAG_CHERI  (PT_LOPROC + 0x3)

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Luis Machado <luis.machado@linaro.org>
Link: https://lore.kernel.org/r/20220131165456.2160675-3-catalin.marinas@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoelfcore: Replace CONFIG_{IA64, UML} checks with a new option
Catalin Marinas [Mon, 31 Jan 2022 16:54:52 +0000 (16:54 +0000)]
elfcore: Replace CONFIG_{IA64, UML} checks with a new option

As arm64 is about to introduce MTE-specific phdrs in the core dump, add
a common CONFIG_ARCH_BINFMT_ELF_EXTRA_PHDRS option currently selectable
by UML_X86 and IA64.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Eric Biederman <ebiederm@xmission.com>
Link: https://lore.kernel.org/r/20220131165456.2160675-2-catalin.marinas@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: atomics: remove redundant static branch
Mark Rutland [Fri, 4 Feb 2022 10:44:39 +0000 (10:44 +0000)]
arm64: atomics: remove redundant static branch

Due to a historical oversight, we emit a redundant static branch for
each atomic/atomic64 operation when CONFIG_ARM64_LSE_ATOMICS is
selected. We can safely remove this, making the kernel Image
measurably smaller.

When CONFIG_ARM64_LSE_ATOMICS is selected, every LSE atomic operation
has two preceding static branches with the same target, e.g.

b f7c <kernel_init_freeable+0xa4>
b f7c <kernel_init_freeable+0xa4>
mov w0, #0x1                    // #1
ldadd w0, w0, [x19]

This is because the __lse_ll_sc_body() wrapper uses
system_uses_lse_atomics(), which checks both `arm64_const_caps_ready`
and `cpu_hwcap_keys[ARM64_HAS_LSE_ATOMICS]`, each of which emits a
static branch. This has been the case since commit:

  addfc38672c73efd ("arm64: atomics: avoid out-of-line ll/sc atomics")

However, there was never a need to check `arm64_const_caps_ready`, which
was itself introduced in commit:

  63a1e1c95e60e798 ("arm64/cpufeature: don't use mutex in bringup path")

... so that cpus_have_const_cap() could fall back to checking the
`cpu_hwcaps` bitmap prior to the static keys for individual caps
becoming enabled. As system_uses_lse_atomics() doesn't check
`cpu_hwcaps`, and doesn't need to as we can safely use the LL/SC atomics
prior to enabling the `ARM64_HAS_LSE_ATOMICS` static key, it doesn't
need to check `arm64_const_caps_ready`.

This patch removes the `arm64_const_caps_ready` check from
system_uses_lse_atomics(). As the arch_atomic_* routines are meant to be
safely usable in noinstr code, I've also marked
system_uses_lse_atomics() as __always_inline.
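
For illustration, the simplified helper then amounts to the following
(a sketch based on the description above):

static __always_inline bool system_uses_lse_atomics(void)
{
	return static_branch_likely(&cpu_hwcap_keys[ARM64_HAS_LSE_ATOMICS]);
}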

This results in one fewer static branch per atomic operation, with the
prior example becoming:

b f78 <kernel_init_freeable+0xa0>
mov w0, #0x1                    // #1
ldadd w0, w0, [x19]

Each static branch consists of the branch itself and an associated
__jump_table entry. Removing these has a reasonable impact on the Image
size, with a GCC 11.1.0 defconfig v5.17-rc2 Image being reduced by
128KiB:

| [mark@lakrids:~/src/linux]% ls -al Image*
| -rw-r--r-- 1 mark mark 34619904 Feb  3 18:24 Image.baseline
| -rw-r--r-- 1 mark mark 34488832 Feb  3 18:33 Image.onebranch

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20220204104439.270567-1-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: entry: Add non-kpti __bp_harden_el1_vectors for mitigations
James Morse [Wed, 24 Nov 2021 15:03:15 +0000 (15:03 +0000)]
arm64: entry: Add non-kpti __bp_harden_el1_vectors for mitigations

kpti is an optional feature; for systems not using kpti, a set of
vectors for the spectre-bhb mitigations is still needed.

Add another set of vectors, __bp_harden_el1_vectors, that will be
used if a mitigation is needed and kpti is not in use.

The EL1 ventries are repeated verbatim as there is no additional
work needed for entry from EL1.

Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
2 years agoarm64: entry: Allow the trampoline text to occupy multiple pages
James Morse [Thu, 18 Nov 2021 15:04:32 +0000 (15:04 +0000)]
arm64: entry: Allow the trampoline text to occupy multiple pages

Adding a second set of vectors to .entry.tramp.text will make it
larger than a single 4K page.

Allow the trampoline text to occupy up to three pages by adding two
more fixmap slots. Previous changes to tramp_valias allowed it to reach
beyond a single page.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
2 years agoarm64: entry: Make the kpti trampoline's kpti sequence optional
James Morse [Thu, 18 Nov 2021 13:16:23 +0000 (13:16 +0000)]
arm64: entry: Make the kpti trampoline's kpti sequence optional

Spectre-BHB needs to add sequences to the vectors. Having one global
set of vectors is a problem for big/little systems where the sequence
is costly on cpus that are not vulnerable.

Making the vectors per-cpu in the style of KVM's bh_harden_hyp_vecs
requires the vectors to be generated by macros.

Make the kpti re-mapping of the kernel optional, so the macros can be
used without kpti.

Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
2 years agoarm64: entry: Move trampoline macros out of ifdef'd section
James Morse [Thu, 18 Nov 2021 14:02:30 +0000 (14:02 +0000)]
arm64: entry: Move trampoline macros out of ifdef'd section

The macros for building the kpti trampoline are all behind
CONFIG_UNMAP_KERNEL_AT_EL0, and in a region that outputs to the
.entry.tramp.text section.

Move the macros out so they can be used to generate other kinds of
trampoline. Only the symbols need to be guarded by
CONFIG_UNMAP_KERNEL_AT_EL0 and appear in the .entry.tramp.text section.

Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
2 years agoarm64: entry: Don't assume tramp_vectors is the start of the vectors
James Morse [Wed, 24 Nov 2021 13:40:09 +0000 (13:40 +0000)]
arm64: entry: Don't assume tramp_vectors is the start of the vectors

The tramp_ventry macro uses tramp_vectors as the address of the vectors
when calculating which ventry in the 'full fat' vectors to branch to.

As long as there is only one set of tramp_vectors, this holds true.
Adding multiple sets of vectors will break this assumption.

Move the generation of the vectors to a macro, and pass the start
of the vectors as an argument to tramp_ventry.

Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
2 years agoarm64: entry: Allow tramp_alias to access symbols after the 4K boundary
James Morse [Wed, 24 Nov 2021 11:40:18 +0000 (11:40 +0000)]
arm64: entry: Allow tramp_alias to access symbols after the 4K boundary

Systems using kpti enter and exit the kernel through a trampoline mapping
that is always mapped, even when the kernel is not. tramp_valias is a macro
to find the address of a symbol in the trampoline mapping.

Adding extra sets of vectors will expand the size of the entry.tramp.text
section to beyond 4K. tramp_valias will be unable to generate addresses
for symbols beyond 4K as it uses the 12 bit immediate of the add
instruction.

As there are now two registers available when tramp_alias is called,
use the extra register to avoid the 4K limit of the 12 bit immediate.

Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
2 years agoarm64: entry: Move the trampoline data page before the text page
James Morse [Tue, 23 Nov 2021 15:43:31 +0000 (15:43 +0000)]
arm64: entry: Move the trampoline data page before the text page

The trampoline code has a data page that holds the address of the vectors,
which is unmapped when running in user-space. This ensures that with
CONFIG_RANDOMIZE_BASE, the randomised address of the kernel can't be
discovered until after the kernel has been mapped.

If the trampoline text page is extended to include multiple sets of
vectors, it will be larger than a single page, making it tricky to
find the data page without knowing the size of the trampoline text
pages, which will vary with PAGE_SIZE.

Move the data page to appear before the text page. This allows the
data page to be found without knowing the size of the trampoline text
pages. 'tramp_vectors' is used to refer to the beginning of the
.entry.tramp.text section; do that explicitly.

Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
2 years agoarm64: entry: Free up another register on kpti's tramp_exit path
James Morse [Tue, 23 Nov 2021 18:41:43 +0000 (18:41 +0000)]
arm64: entry: Free up another register on kpti's tramp_exit path

Kpti stashes x30 in far_el1 while it uses x30 for all its work.

Making the vectors a per-cpu data structure will require a second
register.

Allow tramp_exit two registers before it unmaps the kernel, by
leaving x30 on the stack, and stashing x29 in far_el1.

Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
2 years agoarm64: entry: Make the trampoline cleanup optional
James Morse [Wed, 24 Nov 2021 15:36:12 +0000 (15:36 +0000)]
arm64: entry: Make the trampoline cleanup optional

Subsequent patches will add additional sets of vectors that use
the same tricks as the kpti vectors to reach the full-fat vectors.
The full-fat vectors contain some cleanup for kpti that is patched
in by alternatives when kpti is in use. Once there are additional
vectors, the cleanup will be needed in more cases.

But on big/little systems, the cleanup would be harmful if no
trampoline vector were in use. Instead of forcing CPUs that don't
need a trampoline vector to use one, make the trampoline cleanup
optional.

Entry at the top of the vectors will skip the cleanup. The trampoline
vectors can then skip the first instruction, triggering the cleanup
to run.

Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
2 years agoKVM: arm64: Allow indirect vectors to be used without SPECTRE_V3A
James Morse [Tue, 16 Nov 2021 15:06:19 +0000 (15:06 +0000)]
KVM: arm64: Allow indirect vectors to be used without SPECTRE_V3A

CPUs vulnerable to Spectre-BHB either need to make an SMCCC firmware
call from the vectors, or run a sequence of branches. This gets added
to the hyp vectors. If there is no support for arch-workaround-1 in
firmware, the indirect vector will be used.

kvm_init_vector_slots() only initialises the two indirect slots if
the platform is vulnerable to Spectre-v3a. pKVM's hyp_map_vectors()
only initialises __hyp_bp_vect_base if the platform is vulnerable to
Spectre-v3a.

As there are about to be more users of the indirect vectors, ensure
their entries in hyp_spectre_vector_selector[] are always initialised,
and that __hyp_bp_vect_base defaults to the regular VA mapping.

The Spectre-v3a check is moved to a helper
kvm_system_needs_idmapped_vectors(), and merged with the code
that creates the hyp mappings.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
2 years agoarm64: spectre: Rename spectre_v4_patch_fw_mitigation_conduit
James Morse [Tue, 16 Nov 2021 15:00:51 +0000 (15:00 +0000)]
arm64: spectre: Rename spectre_v4_patch_fw_mitigation_conduit

The spectre-v4 sequence includes an SMC from the assembly entry code.
spectre_v4_patch_fw_mitigation_conduit is the patching callback that
generates an HVC or SMC depending on the SMCCC conduit type.

As this isn't specific to spectre-v4, rename it
smccc_patch_fw_mitigation_conduit so it can be re-used.

Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
2 years agoarm64: entry.S: Add ventry overflow sanity checks
James Morse [Wed, 17 Nov 2021 15:15:26 +0000 (15:15 +0000)]
arm64: entry.S: Add ventry overflow sanity checks

Subsequent patches add even more code to the ventry slots.
Ensure kernels that overflow a ventry slot don't get built.

Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
2 years agokselftest/arm64: mte: user_mem: test a wider range of values
Joey Gouly [Wed, 9 Feb 2022 15:22:40 +0000 (15:22 +0000)]
kselftest/arm64: mte: user_mem: test a wider range of values

Instead of hard-coding a small number of tests, generate a wider
range of tests to try to catch any corner cases that could show up.

These new tests exercise different MTE tag lengths and offsets, which
previously would have caused infinite loops in the kernel. This was
fixed by 295cf156231c ("arm64: Avoid premature usercopy failure"),
so these are regression tests for that corner case.

Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Shuah Khan <shuah@kernel.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
Tested-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Shuah Khan <skhan@linuxfoundation.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20220209152240.52788-7-joey.gouly@arm.com
Signed-off-by: Will Deacon <will@kernel.org>