platform/kernel/linux-starfive.git
3 years ago  arm64: vdso32: drop -no-integrated-as flag
Nick Desaulniers [Tue, 20 Apr 2021 17:44:25 +0000 (10:44 -0700)]
arm64: vdso32: drop -no-integrated-as flag

Clang can assemble these files just fine; this is a relic from the top
level Makefile conditionally adding this. We no longer need --prefix,
--gcc-toolchain, or -Qunused-arguments flags either with this change, so
remove those too.

To test building:
$ ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- \
  CROSS_COMPILE_COMPAT=arm-linux-gnueabi- make LLVM=1 LLVM_IAS=1 \
  defconfig arch/arm64/kernel/vdso32/

Suggested-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Tested-by: Stephen Boyd <swboyd@chromium.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210420174427.230228-1-ndesaulniers@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  Merge branch 'for-next/pac-set-get-enabled-keys' into for-next/core
Catalin Marinas [Thu, 15 Apr 2021 13:00:48 +0000 (14:00 +0100)]
Merge branch 'for-next/pac-set-get-enabled-keys' into for-next/core

* for-next/pac-set-get-enabled-keys:
  : Introduce arm64 prctl(PR_PAC_{SET,GET}_ENABLED_KEYS).
  arm64: pac: Optimize kernel entry/exit key installation code paths
  arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
  arm64: mte: make the per-task SCTLR_EL1 field usable elsewhere

3 years ago  Merge branch 'for-next/mte-async-kernel-mode' into for-next/core
Catalin Marinas [Thu, 15 Apr 2021 13:00:47 +0000 (14:00 +0100)]
Merge branch 'for-next/mte-async-kernel-mode' into for-next/core

* for-next/mte-async-kernel-mode:
  : Add MTE asynchronous kernel mode support
  kasan, arm64: tests supports for HW_TAGS async mode
  arm64: mte: Report async tag faults before suspend
  arm64: mte: Enable async tag check fault
  arm64: mte: Conditionally compile mte_enable_kernel_*()
  arm64: mte: Enable TCO in functions that can read beyond buffer limits
  kasan: Add report for async mode
  arm64: mte: Drop arch_enable_tagging()
  kasan: Add KASAN mode kernel parameter
  arm64: mte: Add asynchronous mode support

3 years ago  Merge branches 'for-next/misc', 'for-next/kselftest', 'for-next/xntable', 'for-next...
Catalin Marinas [Thu, 15 Apr 2021 13:00:38 +0000 (14:00 +0100)]
Merge branches 'for-next/misc', 'for-next/kselftest', 'for-next/xntable', 'for-next/vdso', 'for-next/fiq', 'for-next/epan', 'for-next/kasan-vmalloc', 'for-next/fgt-boot-init', 'for-next/vhe-only' and 'for-next/neon-softirqs-disabled', remote-tracking branch 'arm64/for-next/perf' into for-next/core

* for-next/misc:
  : Miscellaneous patches
  arm64/sve: Add compile time checks for SVE hooks in generic functions
  arm64/kernel/probes: Use BUG_ON instead of if condition followed by BUG.
  arm64/sve: Remove redundant system_supports_sve() tests
  arm64: mte: Remove unused mte_assign_mem_tag_range()
  arm64: Add __init section marker to some functions
  arm64/sve: Rework SVE access trap to convert state in registers
  docs: arm64: Fix a grammar error
  arm64: smp: Add missing prototype for some smp.c functions
  arm64: setup: name `tcr` register
  arm64: setup: name `mair` register
  arm64: stacktrace: Move start_backtrace() out of the header
  arm64: barrier: Remove spec_bar() macro
  arm64: entry: remove test_irqs_unmasked macro
  ARM64: enable GENERIC_FIND_FIRST_BIT
  arm64: defconfig: Use DEBUG_INFO_REDUCED

* for-next/kselftest:
  : Various kselftests for arm64
  kselftest: arm64: Add BTI tests
  kselftest/arm64: mte: Report filename on failing temp file creation
  kselftest/arm64: mte: Fix clang warning
  kselftest/arm64: mte: Makefile: Fix clang compilation
  kselftest/arm64: mte: Output warning about failing compiler
  kselftest/arm64: mte: Use cross-compiler if specified
  kselftest/arm64: mte: Fix MTE feature detection
  kselftest/arm64: mte: common: Fix write() warnings
  kselftest/arm64: mte: user_mem: Fix write() warning
  kselftest/arm64: mte: ksm_options: Fix fscanf warning
  kselftest/arm64: mte: Fix pthread linking
  kselftest/arm64: mte: Fix compilation with native compiler

* for-next/xntable:
  : Add hierarchical XN permissions for all page tables
  arm64: mm: use XN table mapping attributes for user/kernel mappings
  arm64: mm: use XN table mapping attributes for the linear region
  arm64: mm: add missing P4D definitions and use them consistently

* for-next/vdso:
  : Minor improvements to the compat vdso and sigpage
  arm64: compat: Poison the compat sigpage
  arm64: vdso: Avoid ISB after reading from cntvct_el0
  arm64: compat: Allow signal page to be remapped
  arm64: vdso: Remove redundant calls to flush_dcache_page()
  arm64: vdso: Use GFP_KERNEL for allocating compat vdso and signal pages

* for-next/fiq:
  : Support arm64 FIQ controller registration
  arm64: irq: allow FIQs to be handled
  arm64: Always keep DAIF.[IF] in sync
  arm64: entry: factor irq triage logic into macros
  arm64: irq: rework root IRQ handler registration
  arm64: don't use GENERIC_IRQ_MULTI_HANDLER
  genirq: Allow architectures to override set_handle_irq() fallback

* for-next/epan:
  : Support for Enhanced PAN (execute-only permissions)
  arm64: Support execute-only permissions with Enhanced PAN

* for-next/kasan-vmalloc:
  : Support CONFIG_KASAN_VMALLOC on arm64
  arm64: Kconfig: select KASAN_VMALLOC if KASAN_GENERIC is enabled
  arm64: kaslr: support randomized module area with KASAN_VMALLOC
  arm64: Kconfig: support CONFIG_KASAN_VMALLOC
  arm64: kasan: abstract _text and _end to KERNEL_START/END
  arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC

* for-next/fgt-boot-init:
  : Booting clarifications and fine grained traps setup
  arm64: Require that system registers at all visible ELs be initialized
  arm64: Disable fine grained traps on boot
  arm64: Document requirements for fine grained traps at boot

* for-next/vhe-only:
  : Dealing with VHE-only CPUs (a.k.a. M1)
  arm64: Get rid of CONFIG_ARM64_VHE
  arm64: Cope with CPUs stuck in VHE mode
  arm64: cpufeature: Allow early filtering of feature override

* arm64/for-next/perf:
  arm64: perf: Remove redundant initialization in perf_event.c
  perf/arm_pmu_platform: Clean up with dev_printk
  perf/arm_pmu_platform: Fix error handling
  perf/arm_pmu_platform: Use dev_err_probe() for IRQ errors
  docs: perf: Address some html build warnings
  docs: perf: Add new description on HiSilicon uncore PMU v2
  drivers/perf: hisi: Add support for HiSilicon PA PMU driver
  drivers/perf: hisi: Add support for HiSilicon SLLC PMU driver
  drivers/perf: hisi: Update DDRC PMU for programmable counter
  drivers/perf: hisi: Add new functions for HHA PMU
  drivers/perf: hisi: Add new functions for L3C PMU
  drivers/perf: hisi: Add PMU version for uncore PMU drivers.
  drivers/perf: hisi: Refactor code for more uncore PMUs
  drivers/perf: hisi: Remove unnecessary check of counter index
  drivers/perf: Simplify the SMMUv3 PMU event attributes
  drivers/perf: convert sysfs sprintf family to sysfs_emit
  drivers/perf: convert sysfs scnprintf family to sysfs_emit_at() and sysfs_emit()
  drivers/perf: convert sysfs snprintf family to sysfs_emit

* for-next/neon-softirqs-disabled:
  : Run kernel mode SIMD with softirqs disabled
  arm64: fpsimd: run kernel mode NEON with softirqs disabled
  arm64: assembler: introduce wxN aliases for wN registers
  arm64: assembler: remove conditional NEON yield macros

3 years ago  arm64/sve: Add compile time checks for SVE hooks in generic functions
Mark Brown [Thu, 15 Apr 2021 12:17:42 +0000 (13:17 +0100)]
arm64/sve: Add compile time checks for SVE hooks in generic functions

The FPSIMD code was relying on the IS_ENABLED() check in system_supports_sve()
to get the compiler to delete references to SVE functions in some places;
add explicit IS_ENABLED() checks back.

Fixes: ef9c5d09797d ("arm64/sve: Remove redundant system_supports_sve() tests")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210415121742.36628-1-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  arm64/kernel/probes: Use BUG_ON instead of if condition followed by BUG.
zhouchuangao [Tue, 30 Mar 2021 11:57:50 +0000 (04:57 -0700)]
arm64/kernel/probes: Use BUG_ON instead of if condition followed by BUG.

The BUG_ON() form can be optimized at compile time.

Signed-off-by: zhouchuangao <zhouchuangao@vivo.com>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Link: https://lore.kernel.org/r/1617105472-6081-1-git-send-email-zhouchuangao@vivo.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  arm64: pac: Optimize kernel entry/exit key installation code paths
Peter Collingbourne [Fri, 19 Mar 2021 03:10:54 +0000 (20:10 -0700)]
arm64: pac: Optimize kernel entry/exit key installation code paths

The kernel does not use any keys besides IA so we don't need to
install IB/DA/DB/GA on kernel exit if we arrange to install them
on task switch instead, which we can expect to happen an order of
magnitude less often.

Furthermore we can avoid installing the user IA in the case where the
user task has IA disabled and just leave the kernel IA installed. This
also lets us avoid needing to install IA on kernel entry.

On an Apple M1 under a hypervisor, the overhead of kernel entry/exit
has been measured to be reduced by 15.6ns in the case where IA is
enabled, and 31.9ns in the case where IA is disabled.

Signed-off-by: Peter Collingbourne <pcc@google.com>
Link: https://linux-review.googlesource.com/id/Ieddf6b580d23c9e0bed45a822dabe72d2ffc9a8e
Link: https://lore.kernel.org/r/2d653d055f38f779937f2b92f8ddd5cf9e4af4f4.1616123271.git.pcc@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)
Peter Collingbourne [Fri, 19 Mar 2021 03:10:53 +0000 (20:10 -0700)]
arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS)

This change introduces a prctl that allows the user program to control
which PAC keys are enabled in a particular task. The main reason
why this is useful is to enable a userspace ABI that uses PAC to
sign and authenticate function pointers and other pointers exposed
outside of the function, while still allowing binaries conforming
to the ABI to interoperate with legacy binaries that do not sign or
authenticate pointers.

The idea is that a dynamic loader or early startup code would issue
this prctl very early after establishing that a process may load legacy
binaries, but before executing any PAC instructions.
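
As a rough userspace sketch of that early-startup call (illustrative only; the
helper below and the choice of keys are made up, while the PR_PAC_AP*KEY masks
are the pre-existing uapi constants):

  #include <sys/prctl.h>
  #include <linux/prctl.h>

  /* Sketch: keep only the IA key enabled before any PAC instructions run,
   * e.g. from a dynamic loader that has decided to support legacy binaries. */
  static int disable_non_ia_keys(void)
  {
          unsigned long keys = PR_PAC_APIBKEY | PR_PAC_APDAKEY | PR_PAC_APDBKEY;

          /* arg2 selects which keys to affect, arg3 gives their new enabled state */
          return prctl(PR_PAC_SET_ENABLED_KEYS, keys, 0, 0, 0);
  }

The current state can then be read back with prctl(PR_PAC_GET_ENABLED_KEYS, 0, 0, 0, 0).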

This change adds a small amount of overhead to kernel entry and exit
due to additional required instruction sequences.

On a DragonBoard 845c (Cortex-A75) with the powersave governor, the
overhead of similar instruction sequences was measured as 4.9ns when
simulating the common case where IA is left enabled, or 43.7ns when
simulating the uncommon case where IA is disabled. These numbers can
be seen as the worst case scenario, since a more realistic setup would
use a better performing governor and a newer chip which, unlike the
Cortex-A75, actually supports PAC and would be expected to be faster
than the Cortex-A75.

On an Apple M1 under a hypervisor, the overhead of the entry/exit
instruction sequences introduced by this patch was measured as 0.3ns
in the case where IA is left enabled, and 33.0ns in the case where
IA is disabled.

Signed-off-by: Peter Collingbourne <pcc@google.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Link: https://linux-review.googlesource.com/id/Ibc41a5e6a76b275efbaa126b31119dc197b927a5
Link: https://lore.kernel.org/r/d6609065f8f40397a4124654eb68c9f490b4d477.1616123271.git.pcc@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  arm64: mte: make the per-task SCTLR_EL1 field usable elsewhere
Peter Collingbourne [Fri, 19 Mar 2021 03:10:52 +0000 (20:10 -0700)]
arm64: mte: make the per-task SCTLR_EL1 field usable elsewhere

In an upcoming change we are going to introduce per-task SCTLR_EL1
bits for PAC. Move the existing per-task SCTLR_EL1 field out of the
MTE-specific code so that we will be able to use it from both the
PAC and MTE code paths and make the task switching code more efficient.

Signed-off-by: Peter Collingbourne <pcc@google.com>
Link: https://linux-review.googlesource.com/id/Ic65fac78a7926168fa68f9e8da591c9e04ff7278
Link: https://lore.kernel.org/r/13d725cb8e741950fb9d6e64b2cd9bd54ff7c3f9.1616123271.git.pcc@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  arm64/sve: Remove redundant system_supports_sve() tests
Mark Brown [Mon, 12 Apr 2021 17:23:20 +0000 (18:23 +0100)]
arm64/sve: Remove redundant system_supports_sve() tests

Currently there are a number of places in the SVE code where we check both
system_supports_sve() and TIF_SVE. This is a bit redundant given that we
should never get into a situation where we have set TIF_SVE without having
SVE support and it is not clear that silently ignoring a mistakenly set
TIF_SVE flag is the most sensible error handling approach. For now let's
just drop the system_supports_sve() checks since this will at least reduce
overhead a little.
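
As an illustration of the kind of check being dropped (a sketch with a made-up
call site, not a literal hunk from this patch):

  /* before: SVE support is already implied by TIF_SVE being set */
  if (system_supports_sve() && test_thread_flag(TIF_SVE))
          sve_user_disable();

  /* after */
  if (test_thread_flag(TIF_SVE))
          sve_user_disable();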

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210412172320.3315-1-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  arm64: fpsimd: run kernel mode NEON with softirqs disabled
Ard Biesheuvel [Tue, 2 Mar 2021 09:01:12 +0000 (10:01 +0100)]
arm64: fpsimd: run kernel mode NEON with softirqs disabled

Kernel mode NEON can be used in task or softirq context, but only in
a non-nesting manner, i.e., softirq context is only permitted if the
interrupt was not taken at a point where the kernel was using the NEON
in task context.

This means all users of kernel mode NEON have to be aware of this
limitation, and either need to provide scalar fallbacks that may be much
slower (up to 20x for AES instructions) and potentially less safe, or
use an asynchronous interface that defers processing to a later time
when the NEON is guaranteed to be available.

Given that grabbing and releasing the NEON is cheap, we can relax this
restriction, by increasing the granularity of kernel mode NEON code, and
always disabling softirq processing while the NEON is being used in task
context.
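
For reference, callers keep the usual bracketing; only what kernel_neon_begin()
disables underneath changes. A sketch of a typical (hypothetical) user, assuming
the existing may_use_simd()/kernel_neon_begin()/kernel_neon_end() interfaces:

  #include <linux/types.h>
  #include <asm/neon.h>
  #include <asm/simd.h>

  static void do_block(u8 *dst, const u8 *src, unsigned int len)
  {
          if (!may_use_simd()) {
                  /* scalar fallback (not shown) */
                  return;
          }

          kernel_neon_begin();    /* with this change, also disables softirq processing */
          /* ... NEON-accelerated processing of src into dst ... */
          kernel_neon_end();
  }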

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210302090118.30666-4-ardb@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  arm64: assembler: introduce wxN aliases for wN registers
Ard Biesheuvel [Tue, 2 Mar 2021 09:01:11 +0000 (10:01 +0100)]
arm64: assembler: introduce wxN aliases for wN registers

The AArch64 asm syntax has this slightly tedious property that the names
used in mnemonics to refer to registers depend on whether the opcode in
question targets the entire 64-bits (xN), or only the least significant
8, 16 or 32 bits (wN). When writing parameterized code such as macros,
this can be annoying, as macro arguments don't lend themselves to
indexed lookups, and so generating a reference to wN in a macro that
receives xN as an argument is problematic.

For instance, an upcoming patch that modifies the implementation of the
cond_yield macro to be able to refer to 32-bit registers would need to
modify invocations such as

  cond_yield 3f, x8

to

  cond_yield 3f, 8

so that the second argument can be token pasted after x or w to emit the
correct register reference. Unfortunately, this interferes with the self
documenting nature of the first example, where the second argument is
obviously a register, whereas in the second example, one would need to
go and look at the code to find out what '8' means.

So let's fix this by defining wxN aliases for all xN registers, which
resolve to the 32-bit alias of each respective 64-bit register. This
allows the macro implementation to paste the xN reference after a w to
obtain the correct register name.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210302090118.30666-3-ardb@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  arm64: assembler: remove conditional NEON yield macros
Ard Biesheuvel [Tue, 2 Mar 2021 09:01:10 +0000 (10:01 +0100)]
arm64: assembler: remove conditional NEON yield macros

The users of the conditional NEON yield macros have all been switched to
the simplified cond_yield macro, and so the NEON specific ones can be
removed.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210302090118.30666-2-ardb@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  kasan, arm64: tests supports for HW_TAGS async mode
Andrey Konovalov [Mon, 15 Mar 2021 13:20:19 +0000 (13:20 +0000)]
kasan, arm64: tests supports for HW_TAGS async mode

This change adds KASAN-KUnit test support for the async HW_TAGS mode.

In async mode, tag faults aren't generated synchronously when a
bad access happens, but are instead explicitly checked for by the kernel.

As each KASAN-KUnit test expects a fault to happen before the test is over,
check for faults as part of the test handler.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Link: https://lore.kernel.org/r/20210315132019.33202-10-vincenzo.frascino@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  arm64: mte: Report async tag faults before suspend
Vincenzo Frascino [Mon, 15 Mar 2021 13:20:18 +0000 (13:20 +0000)]
arm64: mte: Report async tag faults before suspend

When MTE async mode is enabled, TFSR_EL1 accumulates the
asynchronous tag check faults for EL1 and EL0.

During the suspend/resume operations the firmware might perform some
operations that could change the state of the register resulting in
a spurious tag check fault report.

Report asynchronous tag faults before suspend and clear the TFSR_EL1
register after resume to prevent this from happening.

Cc: Will Deacon <will@kernel.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Link: https://lore.kernel.org/r/20210315132019.33202-9-vincenzo.frascino@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  arm64: mte: Enable async tag check fault
Vincenzo Frascino [Mon, 15 Mar 2021 13:20:17 +0000 (13:20 +0000)]
arm64: mte: Enable async tag check fault

MTE provides a mode that asynchronously updates the TFSR_EL1 register
when a tag check exception is detected.

To take advantage of this mode the kernel has to verify the status of
the register at:
  1. Context switching
  2. Return to user/EL0 (Not required in entry from EL0 since the kernel
  did not run)
  3. Kernel entry from EL1
  4. Kernel exit to EL1

If the register is non-zero a trace is reported.

Add the required features for EL1 detection and reporting.

Note: The ITFSB bit is set in the SCTLR_EL1 register, hence it guarantees that
the indirect writes to TFSR_EL1 are synchronized at exception entry to
EL1. On the context switch path the synchronization is guaranteed by the
dsb() in __switch_to().
The dsb(nsh) in mte_check_tfsr_exit() is provisional pending
confirmation by the architects.

Cc: Will Deacon <will@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Link: https://lore.kernel.org/r/20210315132019.33202-8-vincenzo.frascino@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  arm64: mte: Conditionally compile mte_enable_kernel_*()
Vincenzo Frascino [Mon, 15 Mar 2021 13:20:16 +0000 (13:20 +0000)]
arm64: mte: Conditionally compile mte_enable_kernel_*()

The mte_enable_kernel_*() functions are not needed if KASAN_HW is disabled.

Add #ifdef guards around the functions to conditionally compile them.
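
Roughly, the shape of the change (bodies elided; a sketch assuming the guard
used is CONFIG_KASAN_HW_TAGS):

  #ifdef CONFIG_KASAN_HW_TAGS
  void mte_enable_kernel_sync(void)
  {
          /* ... select synchronous tag checking in SCTLR_EL1.TCF ... */
  }

  void mte_enable_kernel_async(void)
  {
          /* ... select asynchronous tag checking in SCTLR_EL1.TCF ... */
  }
  #endif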

Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210315132019.33202-7-vincenzo.frascino@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  arm64: mte: Enable TCO in functions that can read beyond buffer limits
Vincenzo Frascino [Mon, 15 Mar 2021 13:20:15 +0000 (13:20 +0000)]
arm64: mte: Enable TCO in functions that can read beyond buffer limits

load_unaligned_zeropad() and __get/put_kernel_nofault() can read past a
buffer's limits, into an MTE granule that may have a different tag.

When MTE async mode is enabled and such a load crosses into a granule with
a different tag, the PE sets the TFSR_EL1.TF1 bit as if an asynchronous
tag fault had happened.

Enable Tag Check Override (TCO) in these functions before the load and
disable it afterwards to prevent this from happening.

Note: The same condition can be hit in MTE sync mode but we deal with it
through the exception handling.
In the current implementation, the mte_async_mode flag is set only at boot
time, but in future kasan might acquire some runtime features that change
the mode dynamically; hence we disable the TCO override when sync mode is
selected, to be future-proof.

Cc: Will Deacon <will@kernel.org>
Reported-by: Branislav Rankov <Branislav.Rankov@arm.com>
Tested-by: Branislav Rankov <Branislav.Rankov@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Link: https://lore.kernel.org/r/20210315132019.33202-6-vincenzo.frascino@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  kasan: Add report for async mode
Vincenzo Frascino [Mon, 15 Mar 2021 13:20:14 +0000 (13:20 +0000)]
kasan: Add report for async mode

KASAN provides an asynchronous mode of execution.

Add reporting functionality for this mode.

Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Andrey Konovalov <andreyknvl@google.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Link: https://lore.kernel.org/r/20210315132019.33202-5-vincenzo.frascino@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  arm64: mte: Drop arch_enable_tagging()
Vincenzo Frascino [Mon, 15 Mar 2021 13:20:13 +0000 (13:20 +0000)]
arm64: mte: Drop arch_enable_tagging()

arch_enable_tagging() was left in memory.h after the introduction of
async mode so as not to break the bisectability of the KASAN KUnit tests.

Remove the function now that KASAN has been fully converted.

Cc: Will Deacon <will@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Link: https://lore.kernel.org/r/20210315132019.33202-4-vincenzo.frascino@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  kasan: Add KASAN mode kernel parameter
Vincenzo Frascino [Mon, 15 Mar 2021 13:20:12 +0000 (13:20 +0000)]
kasan: Add KASAN mode kernel parameter

Architectures supported by KASAN_HW_TAGS can provide a sync or async mode
of execution. On MTE-enabled arm64 hardware, for example, this corresponds
to the synchronous or asynchronous tagging mode of execution.
In synchronous mode, an exception is triggered if a tag check fault occurs.
In asynchronous mode, if a tag check fault occurs, the TFSR_EL1 register is
updated asynchronously. The kernel checks the corresponding bits
periodically.

KASAN requires a specific kernel command line parameter to make use of this
hw feature.

Add KASAN HW execution mode kernel command line parameter.

Note: This patch adds the kasan.mode kernel parameter and the
sync/async kernel command line options to enable the described features.

[ Add a new var instead of exposing kasan_arg_mode to be consistent with
  flags for other command line arguments. ]
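
For example, asynchronous tag checking can be selected at boot with:

  kasan.mode=async

while kasan.mode=sync (the default) keeps the synchronous behaviour.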

Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Andrey Konovalov <andreyknvl@google.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Link: https://lore.kernel.org/r/20210315132019.33202-3-vincenzo.frascino@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  arm64: mte: Add asynchronous mode support
Vincenzo Frascino [Mon, 15 Mar 2021 13:20:11 +0000 (13:20 +0000)]
arm64: mte: Add asynchronous mode support

MTE provides an asynchronous mode for detecting tag exceptions. In
particular, instead of triggering a fault, the arm64 core updates a
register which the kernel checks after the asynchronous tag check
fault has occurred.

Add support for MTE asynchronous mode.

The exception handling mechanism will be added with a future patch.

Note: KASAN HW activates async mode via kasan.mode kernel parameter.
The default mode is set to synchronous.
The code that verifies the status of TFSR_EL1 will be added with a
future patch.

Cc: Will Deacon <will@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Andrey Konovalov <andreyknvl@google.com>
Acked-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Link: https://lore.kernel.org/r/20210315132019.33202-2-vincenzo.frascino@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  arm64: Get rid of CONFIG_ARM64_VHE
Marc Zyngier [Thu, 8 Apr 2021 13:10:10 +0000 (14:10 +0100)]
arm64: Get rid of CONFIG_ARM64_VHE

CONFIG_ARM64_VHE was introduced with ARMv8.1 (some 7 years ago),
and has been enabled by default for almost all that time.

Given that newer systems that are VHE capable are finally becoming
available, and that some systems are even incapable of not running VHE,
drop the configuration altogether.

Anyone willing to stick to non-VHE on VHE hardware for obscure
reasons should use the 'kvm-arm.mode=nvhe' command-line option.

Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210408131010.1109027-4-maz@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  arm64: Cope with CPUs stuck in VHE mode
Marc Zyngier [Thu, 8 Apr 2021 13:10:09 +0000 (14:10 +0100)]
arm64: Cope with CPUs stuck in VHE mode

It seems that the CPUs part of the SoC known as Apple M1 have the
terrible habit of being stuck with HCR_EL2.E2H==1, in violation
of the architecture.

Try and work around this deplorable state of affairs by detecting
the stuck bit early and short-circuit the nVHE dance. Additional
filtering code ensures that attempts at switching to nVHE from
the command-line are also ignored.

It is still unknown whether there are many more such nuggets
to be found...

Reported-by: Hector Martin <marcan@marcan.st>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210408131010.1109027-3-maz@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  arm64: cpufeature: Allow early filtering of feature override
Marc Zyngier [Thu, 8 Apr 2021 13:10:08 +0000 (14:10 +0100)]
arm64: cpufeature: Allow early filtering of feature override

Some CPUs are broken enough that some overrides need to be rejected
at the earliest opportunity. In some cases, that's right at cpu
feature override time.

Provide the necessary infrastructure to filter out overrides,
and to report such filtered out overrides to the core cpufeature code.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210408131010.1109027-2-maz@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  arm64: Require that system registers at all visible ELs be initialized
Mark Brown [Thu, 1 Apr 2021 18:09:41 +0000 (19:09 +0100)]
arm64: Require that system registers at all visible ELs be initialized

Currently we require that software at a higher exception level initialise
all registers at the exception level at which the kernel will be entered,
prior to starting the kernel, in order to ensure that there is nothing
uninitialised which could result in an UNKNOWN state while running the kernel. The
expectation is that the software running at the highest exception levels
will be tightly coupled to the system and can ensure that all available
features are appropriately initialised and that the kernel can initialise
anything else.

There is a gap here in the case where new registers are added to lower
exception levels that require initialisation but the kernel does not yet
understand them. Extend the requirement to also include exception levels
below the one where the kernel is entered to cover this.

Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210401180942.35815-4-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  arm64: Disable fine grained traps on boot
Mark Brown [Thu, 1 Apr 2021 18:09:40 +0000 (19:09 +0100)]
arm64: Disable fine grained traps on boot

The arm64 FEAT_FGT extension introduces a set of traps to EL2 for accesses
to small sets of registers and instructions from EL1 and EL0. Currently
Linux makes no use of this feature; ensure that it is not active at boot by
disabling the traps during EL2 setup.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210401180942.35815-3-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  arm64: Document requirements for fine grained traps at boot
Mark Brown [Thu, 1 Apr 2021 18:09:39 +0000 (19:09 +0100)]
arm64: Document requirements for fine grained traps at boot

The arm64 FEAT_FGT extension introduces a set of traps to EL2 for accesses
to small sets of registers and instructions from EL1 and EL0, access to
which is controlled by EL3.  Require access to it so that it is
available to us in future and so that we can ensure these traps are
disabled during boot.

Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210401180942.35815-2-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  arm64: mte: Remove unused mte_assign_mem_tag_range()
Vincenzo Frascino [Wed, 7 Apr 2021 13:38:17 +0000 (14:38 +0100)]
arm64: mte: Remove unused mte_assign_mem_tag_range()

mte_assign_mem_tag_range() was added in commit 85f49cae4dfc
("arm64: mte: add in-kernel MTE helpers") in 5.11 but moved out of
mte.S by commit 2cb34276427a ("arm64: kasan: simplify and inline
MTE functions") in 5.12 and renamed to mte_set_mem_tag_range().
2cb34276427a did not delete the old function prototypes in mte.h.

Remove the unused prototype from mte.h.

Cc: Will Deacon <will@kernel.org>
Reported-by: Derrick McKee <derrick.mckee@gmail.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Link: https://lore.kernel.org/r/20210407133817.23053-1-vincenzo.frascino@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  arm64: Add __init section marker to some functions
Jisheng Zhang [Tue, 30 Mar 2021 05:54:49 +0000 (13:54 +0800)]
arm64: Add __init section marker to some functions

They are not needed after booting, so mark them as __init to move them
to the .init section.
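
For illustration (the function below is hypothetical, not one of the functions
touched by this patch), the marker simply places the code in .init.text so it
can be freed once boot completes:

  #include <linux/init.h>

  /* discarded after boot along with the rest of the .init section */
  static int __init example_boot_only_setup(void)
  {
          return 0;
  }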

Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Link: https://lore.kernel.org/r/20210330135449.4dcffd7f@xhacker.debian
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  arm64/sve: Rework SVE access trap to convert state in registers
Mark Brown [Fri, 12 Mar 2021 19:03:13 +0000 (19:03 +0000)]
arm64/sve: Rework SVE access trap to convert state in registers

When we enable SVE usage in userspace after taking a SVE access trap we
need to ensure that the portions of the register state that are not
shared with the FPSIMD registers are zeroed. Currently we do this by
forcing the FPSIMD registers to be saved to the task struct and converting
them there. This is wasteful in the common case where the task state is
loaded into the registers and we will immediately return to userspace
since we can initialise the SVE state directly in registers instead of
accessing multiple copies of the register state in memory.

Instead in that common case do the conversion in the registers and
update the task metadata so that we can return to userspace without
spilling the register state to memory unless there is some other reason
to do so.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210312190313.24598-1-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  arm64: perf: Remove redundant initialization in perf_event.c
Qi Liu [Thu, 1 Apr 2021 11:16:41 +0000 (19:16 +0800)]
arm64: perf: Remove redundant initialization in perf_event.c

The initialization of 'value' in armv8pmu_read_hw_counter()
and armv8pmu_read_counter() is redundant, as the values are soon
overwritten. So, we can remove them.

Signed-off-by: Qi Liu <liuqi115@huawei.com>
Link: https://lore.kernel.org/r/1617275801-1980-1-git-send-email-liuqi115@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
3 years ago  perf/arm_pmu_platform: Clean up with dev_printk
Robin Murphy [Fri, 26 Mar 2021 16:02:42 +0000 (16:02 +0000)]
perf/arm_pmu_platform: Clean up with dev_printk

Nearly all of the messages we can log from the platform device code
relate to the specific PMU device and the properties we're parsing from
its DT node. In some cases we use %pOF to point at where something was
wrong, but even that is inconsistent. Let's convert these logs to the
appropriate dev_printk variants, so that every issue specific to the
device and/or its DT description is clearly and instantly attributable,
particularly if there is more than one PMU node present in the DT.

The local refactoring in a couple of functions invites some extra
cleanup in the process - the init_fn matching can be streamlined, and
the PMU registration failure message moved to the appropriate place and
log level.
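
A sketch of the conversion pattern (the message text is illustrative, and it
assumes the PMU's struct device *dev is at hand):

  /* before: impossible to tell which PMU node the complaint is about */
  pr_warn("failed to parse interrupt affinity\n");

  /* after: the message is prefixed with the driver and device name */
  dev_warn(dev, "failed to parse interrupt affinity\n");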

CC: Tian Tao <tiantao6@hisilicon.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/10a4aacdf071d0c03d061c408a5899e5b32cc0a6.1616774562.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
3 years ago  perf/arm_pmu_platform: Fix error handling
Robin Murphy [Fri, 26 Mar 2021 16:02:41 +0000 (16:02 +0000)]
perf/arm_pmu_platform: Fix error handling

If we're aborting after failing to register the PMU device,
we probably don't want to leak the IRQs that we've claimed.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/53031a607fc8412a60024bfb3bb8cd7141f998f5.1616774562.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
3 years ago  perf/arm_pmu_platform: Use dev_err_probe() for IRQ errors
Robin Murphy [Fri, 26 Mar 2021 16:02:40 +0000 (16:02 +0000)]
perf/arm_pmu_platform: Use dev_err_probe() for IRQ errors

By virtue of using platform_irq_get_optional() under the covers,
platform_irq_count() needs the target interrupt controller to be
available and may return -EPROBE_DEFER if it isn't. Let's use
dev_err_probe() to avoid a spurious error log (and help debug any
deferral issues) in that case.
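
A minimal sketch of the pattern, assuming pdev is the PMU's platform_device
(the helper name and message string are made up):

  #include <linux/device.h>
  #include <linux/platform_device.h>

  static int pmu_count_irqs(struct platform_device *pdev)
  {
          int num_irqs = platform_irq_count(pdev);

          if (num_irqs < 0)
                  /* prints an error, but stays quiet and records the reason
                   * when the cause is -EPROBE_DEFER */
                  return dev_err_probe(&pdev->dev, num_irqs,
                                       "unable to count PMU IRQs\n");

          return num_irqs;
  }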

Reported-by: Paul Menzel <pmenzel@molgen.mpg.de>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/073d5e0d3ed1f040592cb47ca6fe3759f40cc7d1.1616774562.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
3 years ago  docs: perf: Address some html build warnings
Qi Liu [Mon, 29 Mar 2021 12:32:01 +0000 (20:32 +0800)]
docs: perf: Address some html build warnings

Fix following html build warnings:
Documentation/admin-guide/perf/hisi-pmu.rst:61: WARNING: Unexpected indentation.
Documentation/admin-guide/perf/hisi-pmu.rst:62: WARNING: Block quote ends without a blank line; unexpected unindent.
Documentation/admin-guide/perf/hisi-pmu.rst:69: WARNING: Unexpected indentation.
Documentation/admin-guide/perf/hisi-pmu.rst:70: WARNING: Block quote ends without a blank line; unexpected unindent.
Documentation/admin-guide/perf/hisi-pmu.rst:83: WARNING: Unexpected indentation.

Fixes: 9b86b1b41e0f ("docs: perf: Add new description on HiSilicon uncore PMU v2")
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Qi Liu <liuqi115@huawei.com>
Link: https://lore.kernel.org/r/1617021121-31450-1-git-send-email-liuqi115@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
3 years ago  docs: arm64: Fix a grammar error
He Ying [Tue, 30 Mar 2021 08:58:17 +0000 (04:58 -0400)]
docs: arm64: Fix a grammar error

depending -> depending on

Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: He Ying <heying24@huawei.com>
Link: https://lore.kernel.org/r/20210330085817.86185-1-heying24@huawei.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  arm64: Kconfig: select KASAN_VMALLOC if KASAN_GENERIC is enabled
Lecopzer Chen [Wed, 24 Mar 2021 04:05:22 +0000 (12:05 +0800)]
arm64: Kconfig: select KASAN_VMALLOC if KASAN_GENERIC is enabled

Before this patch, someone who wants to use VMAP_STACK when
KASAN_GENERIC is enabled must explicitly select KASAN_VMALLOC.

From Will's suggestion [1]:
  > I would _really_ like to move to VMAP stack unconditionally, and
  > that would effectively force KASAN_VMALLOC to be set if KASAN is in use

Because VMAP_STACK now depends on either HW_TAGS or KASAN_VMALLOC if
KASAN is enabled, in order to make VMAP_STACK selected unconditionally,
we bind KASAN_GENERIC and KASAN_VMALLOC together.

Note that SW_TAGS supports neither VMAP_STACK nor KASAN_VMALLOC now,
so this is the first step to make VMAP_STACK selected unconditionally.

Binding KASAN_GENERIC and KASAN_VMALLOC together is expected to cost more
memory at runtime; the alternative is to use SW_TAGS KASAN instead.

[1]: https://lore.kernel.org/lkml/20210204150100.GE20815@willie-the-truck/
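
The Kconfig side of this is essentially a one-line select under the arm64
entry (a sketch of the resulting relationship, not the full hunk):

  config ARM64
          ...
          select KASAN_VMALLOC if KASAN_GENERIC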

Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
Link: https://lore.kernel.org/r/20210324040522.15548-6-lecopzer.chen@mediatek.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  arm64: kaslr: support randomized module area with KASAN_VMALLOC
Lecopzer Chen [Wed, 24 Mar 2021 04:05:21 +0000 (12:05 +0800)]
arm64: kaslr: support randomized module area with KASAN_VMALLOC

Now that KASAN_VMALLOC works on arm64, we can randomize the module region
into the vmalloc area.

Test:
VMALLOC area ffffffc010000000 fffffffdf0000000

before the patch:
module_alloc_base/end ffffffc008b80000 ffffffc010000000
after the patch:
module_alloc_base/end ffffffdcf4bed000 ffffffc010000000

And insmod'ing some modules works fine.

Suggested-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
Link: https://lore.kernel.org/r/20210324040522.15548-5-lecopzer.chen@mediatek.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  arm64: Kconfig: support CONFIG_KASAN_VMALLOC
Lecopzer Chen [Wed, 24 Mar 2021 04:05:20 +0000 (12:05 +0800)]
arm64: Kconfig: support CONFIG_KASAN_VMALLOC

We can back shadow memory for the vmalloc area on demand now that the
vmalloc area is no longer populated at kasan_init(), thus make
KASAN_VMALLOC selectable.

Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
Acked-by: Andrey Konovalov <andreyknvl@gmail.com>
Tested-by: Andrey Konovalov <andreyknvl@gmail.com>
Tested-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210324040522.15548-4-lecopzer.chen@mediatek.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  arm64: kasan: abstract _text and _end to KERNEL_START/END
Lecopzer Chen [Wed, 24 Mar 2021 04:05:19 +0000 (12:05 +0800)]
arm64: kasan: abstract _text and _end to KERNEL_START/END

Arm64 provides the KERNEL_START and KERNEL_END macros, so use these
abstractions instead of _text and _end.

Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
Acked-by: Andrey Konovalov <andreyknvl@gmail.com>
Tested-by: Andrey Konovalov <andreyknvl@gmail.com>
Tested-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210324040522.15548-3-lecopzer.chen@mediatek.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC
Lecopzer Chen [Wed, 24 Mar 2021 04:05:18 +0000 (12:05 +0800)]
arm64: kasan: don't populate vmalloc area for CONFIG_KASAN_VMALLOC

Linux has supported KASAN for vmalloc since commit 3c5c3cfb9ef4da9
("kasan: support backing vmalloc space with real shadow memory").

Like what is done for MODULES_VADDR now, simply don't early-populate
the shadow for the region between VMALLOC_START and VMALLOC_END.

Before:

MODULE_VADDR: no mapping, no zero shadow at init
VMALLOC_VADDR: backed with zero shadow at init

After:

MODULE_VADDR: no mapping, no zero shadow at init
VMALLOC_VADDR: no mapping, no zero shadow at init

Thus the mapping will get allocated on demand by the core function
of KASAN_VMALLOC.

  -----------  vmalloc_shadow_start
 |           |
 |           |
 |           | <= non-mapping
 |           |
 |           |
 |-----------|
 |///////////|<- kimage shadow with page table mapping.
 |-----------|
 |           |
 |           | <= non-mapping
 |           |
 ------------- vmalloc_shadow_end
 |00000000000|
 |00000000000| <= Zero shadow
 |00000000000|
 ------------- KASAN_SHADOW_END

Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
Acked-by: Andrey Konovalov <andreyknvl@gmail.com>
Tested-by: Andrey Konovalov <andreyknvl@gmail.com>
Tested-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210324040522.15548-2-lecopzer.chen@mediatek.com
[catalin.marinas@arm.com: add a build check on VMALLOC_START != MODULES_END]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  arm64: smp: Add missing prototype for some smp.c functions
Chen Lifu [Mon, 29 Mar 2021 03:43:43 +0000 (11:43 +0800)]
arm64: smp: Add missing prototype for some smp.c functions

In commit eb631bb5bf5b
("arm64: Support arch_irq_work_raise() via self IPIs") a new
function "arch_irq_work_raise" was added without a prototype.

In commit d914d4d49745
("arm64: Implement panic_smp_self_stop()") a new
function "panic_smp_self_stop" was added without a prototype.

We get the following warnings on W=1:
arch/arm64/kernel/smp.c:842:6: warning: no previous prototype
for 'arch_irq_work_raise' [-Wmissing-prototypes]
arch/arm64/kernel/smp.c:862:6: warning: no previous prototype
for 'panic_smp_self_stop' [-Wmissing-prototypes]

Fix the warnings by:
1. Adding the prototype for 'arch_irq_work_raise' in irq_work.h
2. Adding the prototype for 'panic_smp_self_stop' in smp.h
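
Per the two points above, the added declarations amount to (header paths as
described; a sketch):

  /* arch/arm64/include/asm/irq_work.h */
  extern void arch_irq_work_raise(void);

  /* arch/arm64/include/asm/smp.h */
  extern void panic_smp_self_stop(void);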

Signed-off-by: Chen Lifu <chenlifu@huawei.com>
Link: https://lore.kernel.org/r/20210329034343.183974-1-chenlifu@huawei.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  arm64: setup: name `tcr` register
Mark Rutland [Fri, 26 Mar 2021 18:01:37 +0000 (18:01 +0000)]
arm64: setup: name `tcr` register

In __cpu_setup we conditionally manipulate the TCR_EL1 value in x10
after previously using x10 as a scratch register for unrelated temporary
variables.

To make this a bit clearer, let's move the TCR_EL1 value into a named
register `tcr`. To simplify the register allocation, this is placed in
the highest available caller-saved scratch register.

Following the example of `mair`, we initialise the register with the
default value prior to any feature discovery, and write it to TCR_EL1
after all feature discovery is complete, which allows us to simplify the
feature discovery code.

The existing `mte_tcr` register is no longer needed, and is replaced by
the use of x10 as a temporary, matching the rest of the MTE feature
discovery assembly in __cpu_setup. As x20 is no longer used, the
function is now AAPCS compliant, as we've generally aimed for in our
assembly functions.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210326180137.43119-3-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  arm64: setup: name `mair` register
Mark Rutland [Fri, 26 Mar 2021 18:01:36 +0000 (18:01 +0000)]
arm64: setup: name `mair` register

In __cpu_setup we conditionally manipulate the MAIR_EL1 value in x5
before later reusing x5 as a scratch register for unrelated temporary
variables.

To make this a bit clearer, let's move the MAIR_EL1 value into a named
register `mair`. To simplify the register allocation, this is placed in
the highest available caller-saved scratch register, x17. As it is no
longer clobbered by other usage, we can write the value to MAIR_EL1 at
the end of the function as we do for TCR_EL1 rather than part-way through
feature discovery.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210326180137.43119-2-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  arm64: stacktrace: Move start_backtrace() out of the header
Mark Brown [Fri, 19 Mar 2021 17:40:22 +0000 (17:40 +0000)]
arm64: stacktrace: Move start_backtrace() out of the header

Currently start_backtrace() is a static inline function in the header.
Since it really shouldn't be so performance critical that we actually
need to have it inlined, move it into a C file; this will save anyone
else scratching their head about why it is defined in the header.
As far as I can see it's only there because it was factored out of the
various callers.

Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210319174022.33051-1-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  arm64: Support execute-only permissions with Enhanced PAN
Vladimir Murzin [Fri, 12 Mar 2021 17:38:10 +0000 (17:38 +0000)]
arm64: Support execute-only permissions with Enhanced PAN

Enhanced Privileged Access Never (EPAN) allows Privileged Access Never
to be used with Execute-only mappings.

Absence of such support was a reason for 24cecc377463 ("arm64: Revert
support for execute-only user mappings"). Thus now it can be revisited
and re-enabled.
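
With EPAN in place, userspace can once again ask for execute-only memory;
a minimal illustrative example:

  #include <stddef.h>
  #include <sys/mman.h>

  /* map one page that can be executed but neither read nor written */
  void *map_execute_only_page(void)
  {
          return mmap(NULL, 4096, PROT_EXEC,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  }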

Cc: Kees Cook <keescook@chromium.org>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210312173811.58284-2-vladimir.murzin@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  arm64: barrier: Remove spec_bar() macro
Linus Walleij [Thu, 25 Mar 2021 14:13:04 +0000 (15:13 +0100)]
arm64: barrier: Remove spec_bar() macro

The spec_bar() macro was introduced in
commit bd4fb6d270bc ("arm64: Add support for SB barrier and patch in over DSB; ISB sequences")
as a way for C code to insert a speculation barrier, and it was then
used in one single place: set_fs().

Later on, commit 3d2403fd10a1 ("arm64: uaccess: remove set_fs()")
deleted set_fs() altogether; as noted in that commit, the new code
path uses the regular sb() assembly macro instead.

Delete the remnant.

Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210325141304.1607595-1-linus.walleij@linaro.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years ago  docs: perf: Add new description on HiSilicon uncore PMU v2
Shaokun Zhang [Mon, 8 Mar 2021 06:50:37 +0000 (14:50 +0800)]
docs: perf: Add new description on HiSilicon uncore PMU v2

Some new functions are added to the HiSilicon uncore PMUs. Document them
to provide guidance on how to use them.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: John Garry <john.garry@huawei.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: John Garry <john.garry@huawei.com>
Co-developed-by: Qi Liu <liuqi115@huawei.com>
Signed-off-by: Qi Liu <liuqi115@huawei.com>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Link: https://lore.kernel.org/r/1615186237-22263-10-git-send-email-zhangshaokun@hisilicon.com
Signed-off-by: Will Deacon <will@kernel.org>
3 years ago  drivers/perf: hisi: Add support for HiSilicon PA PMU driver
Shaokun Zhang [Mon, 8 Mar 2021 06:50:36 +0000 (14:50 +0800)]
drivers/perf: hisi: Add support for HiSilicon PA PMU driver

On the HiSilicon Hip09 platform, there is a PA (Protocol Adapter) module on
each chip's SICL (Super I/O Cluster) which incorporates three Hydra interfaces
and facilitates cache coherency between the dies on the chip. The PA uncore
PMU model is the same as other Hip09 PMU modules and many PMU events are
supported. Let's support the PMU driver using the HiSilicon uncore PMU
framework.

PA PMU supports the following filter functions:
* tracetag_en: allows user to count events according to tt_req or
tt_core set in L3C PMU. It's the same as other PMUs.

* srcid_cmd & srcid_msk: allows the user to filter statistics that come from
a specific CCL/ICL by configuring the source ID.

* tgtid_cmd & tgtid_msk: a similar function to srcid_cmd &
srcid_msk. Both are used to check where the data comes from or goes to.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: John Garry <john.garry@huawei.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: John Garry <john.garry@huawei.com>
Co-developed-by: Qi Liu <liuqi115@huawei.com>
Signed-off-by: Qi Liu <liuqi115@huawei.com>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Link: https://lore.kernel.org/r/1615186237-22263-9-git-send-email-zhangshaokun@hisilicon.com
Signed-off-by: Will Deacon <will@kernel.org>
3 years ago  drivers/perf: hisi: Add support for HiSilicon SLLC PMU driver
Shaokun Zhang [Mon, 8 Mar 2021 06:50:35 +0000 (14:50 +0800)]
drivers/perf: hisi: Add support for HiSilicon SLLC PMU driver

HiSilicon's Hip09 is composed of multiple dies that can be connected by the
SLLC module (Skyros Link Layer Controller). It has separate PMU registers
which the driver can program freely, and an interrupt is supported to handle
counter overflow. Let's support its driver under the framework of the
HiSilicon uncore PMU driver.

SLLC PMU supports the following filter functions:
* tracetag_en: allows user to count data according to tt_req or
tt_core set in L3C PMU.

* srcid_cmd & srcid_msk: allows the user to filter statistics that come from
a specific CCL/ICL by configuring the source ID.

* tgtid_hi & tgtid_lo: also supports counting only the operations that go
to a specific CCL/ICL, by configuring a target ID or target ID range.
Like the source ID, it is 11 bits wide in the SoC. More detail is given
in the documentation:
Documentation/admin-guide/perf/hisi-pmu.rst

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: John Garry <john.garry@huawei.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: John Garry <john.garry@huawei.com>
Co-developed-by: Qi Liu <liuqi115@huawei.com>
Signed-off-by: Qi Liu <liuqi115@huawei.com>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Link: https://lore.kernel.org/r/1615186237-22263-8-git-send-email-zhangshaokun@hisilicon.com
Signed-off-by: Will Deacon <will@kernel.org>
3 years ago  drivers/perf: hisi: Update DDRC PMU for programmable counter
Shaokun Zhang [Mon, 8 Mar 2021 06:50:34 +0000 (14:50 +0800)]
drivers/perf: hisi: Update DDRC PMU for programmable counter

The DDRC PMU's events are useful for performance profiling, but the events
are limited and the counters are fixed-function. On the HiSilicon Hip09
platform, the PMU counters are programmable and more events are supported.
Let's add the DDRC PMU v2 driver.

Bandwidth events are exposed directly in the driver and some more events
will be listed in a JSON file later.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: John Garry <john.garry@huawei.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: John Garry <john.garry@huawei.com>
Co-developed-by: Qi Liu <liuqi115@huawei.com>
Signed-off-by: Qi Liu <liuqi115@huawei.com>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Link: https://lore.kernel.org/r/1615186237-22263-7-git-send-email-zhangshaokun@hisilicon.com
Signed-off-by: Will Deacon <will@kernel.org>
3 years ago  drivers/perf: hisi: Add new functions for HHA PMU
Shaokun Zhang [Mon, 8 Mar 2021 06:50:33 +0000 (14:50 +0800)]
drivers/perf: hisi: Add new functions for HHA PMU

On the HiSilicon Hip09 platform, some new functions are also supported on
the HHA PMU.

* tracetag_en: it is the abbreviation of tracetag enable and allows user
to count events according to tt_req or tt_core set in L3C PMU.

* datasrc_skt: it is the abbreviation of data source from another
socket and it is used in the multi-chips. It's the same as L3C PMU.

* srcid_cmd & srcid_msk: this pair of fields is used to filter
statistics that come from a specific CCL/ICL by configuration.
These are the abbreviations of source ID command and mask. The source
ID is 11-bit and detailed descriptions are documented in
Documentation/admin-guide/perf/hisi-pmu.rst.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: John Garry <john.garry@huawei.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: John Garry <john.garry@huawei.com>
Co-developed-by: Qi Liu <liuqi115@huawei.com>
Signed-off-by: Qi Liu <liuqi115@huawei.com>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Link: https://lore.kernel.org/r/1615186237-22263-6-git-send-email-zhangshaokun@hisilicon.com
Signed-off-by: Will Deacon <will@kernel.org>
3 years ago  drivers/perf: hisi: Add new functions for L3C PMU
Shaokun Zhang [Mon, 8 Mar 2021 06:50:32 +0000 (14:50 +0800)]
drivers/perf: hisi: Add new functions for L3C PMU

On HiSilicon Hip09 platform, some new functions are enhanced on L3C PMU:

* tt_req: it is the abbreviation of tracetag request and allows user to
count only read/write/atomic operations. tt_req is 3-bit and details are
listed in the hisi-pmu document.
$# perf stat -a -e hisi_sccl3_l3c0/config=0x02,tt_req=0x4/ sleep 5

* tt_core: it is the abbreviation of tracetag core and allows the user to
filter by core/thread within the cluster; it is an 8-bit bitmap in which each
bit represents the corresponding core/thread in this L3C.
$# perf stat -a -e hisi_sccl3_l3c0/config=0x02,tt_core=0xf/ sleep 5

* datasrc_cfg: it is the abbreviation of data source configuration and
allows the user to check where the data comes from, such as local DDR,
cross-die DDR or cross-socket DDR. It is 5-bit and represents different
data sources in the SoC.
$# perf stat -a -e hisi_sccl3_l3c0/dat_access,datasrc_cfg=0xe/ sleep 5

* datasrc_skt: it is the abbreviation of data source from another socket
and is used on multi-chip systems; if the user wants to check the
cross-socket data source, it shall be added to the perf command. Only one
bit is used to control this.
$# perf stat -a -e hisi_sccl3_l3c0/dat_access,datasrc_cfg=0x10,datasrc_skt=1/ sleep 5

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: John Garry <john.garry@huawei.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: John Garry <john.garry@huawei.com>
Co-developed-by: Qi Liu <liuqi115@huawei.com>
Signed-off-by: Qi Liu <liuqi115@huawei.com>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Link: https://lore.kernel.org/r/1615186237-22263-5-git-send-email-zhangshaokun@hisilicon.com
Signed-off-by: Will Deacon <will@kernel.org>
3 years agodrivers/perf: hisi: Add PMU version for uncore PMU drivers.
Shaokun Zhang [Mon, 8 Mar 2021 06:50:31 +0000 (14:50 +0800)]
drivers/perf: hisi: Add PMU version for uncore PMU drivers.

For the HiSilicon uncore PMUs, more versions are now supported and some
variables need a version suffix added to distinguish them, in
preparation for the new drivers.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: John Garry <john.garry@huawei.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: John Garry <john.garry@huawei.com>
Co-developed-by: Qi Liu <liuqi115@huawei.com>
Signed-off-by: Qi Liu <liuqi115@huawei.com>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Link: https://lore.kernel.org/r/1615186237-22263-4-git-send-email-zhangshaokun@hisilicon.com
Signed-off-by: Will Deacon <will@kernel.org>
3 years agodrivers/perf: hisi: Refactor code for more uncore PMUs
Shaokun Zhang [Mon, 8 Mar 2021 06:50:30 +0000 (14:50 +0800)]
drivers/perf: hisi: Refactor code for more uncore PMUs

In the HiSilicon uncore PMU drivers, the interrupt handling and
interrupt registration functions are very similar across the different
PMU modules. Let's refactor them into a common framework.

Two new callbacks are added for the HW accessors:

* hisi_uncore_ops::get_int_status returns a bitmap of events which
  have overflowed and raised an interrupt

* hisi_uncore_ops::clear_int_status clears the overflow status for a
  specific event

These callback functions are used by a common IRQ handler,
hisi_uncore_pmu_isr().

One more function hisi_uncore_pmu_init_irq() is added to replace each
PMU initialization IRQ interface and simplify the code.
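
A minimal sketch of how a common handler built on these callbacks might
look (only get_int_status, clear_int_status and hisi_uncore_pmu_isr()
are named by this patch; the other member and helper names below are
illustrative):

  static irqreturn_t hisi_uncore_pmu_isr(int irq, void *data)
  {
          struct hisi_pmu *hisi_pmu = data;
          unsigned long overflown;
          int idx;

          /* Ask the HW accessor which counters have overflowed. */
          overflown = hisi_pmu->ops->get_int_status(hisi_pmu);
          if (!overflown)
                  return IRQ_NONE;

          /* Acknowledge each overflow and fold it into the perf event. */
          for_each_set_bit(idx, &overflown, hisi_pmu->num_counters) {
                  struct perf_event *event = hisi_pmu->pmu_events.hw_events[idx];

                  hisi_pmu->ops->clear_int_status(hisi_pmu, idx);
                  if (event)
                          hisi_uncore_pmu_event_update(event);
          }

          return IRQ_HANDLED;
  }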

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: John Garry <john.garry@huawei.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: John Garry <john.garry@huawei.com>
Co-developed-by: Qi Liu <liuqi115@huawei.com>
Signed-off-by: Qi Liu <liuqi115@huawei.com>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Link: https://lore.kernel.org/r/1615186237-22263-3-git-send-email-zhangshaokun@hisilicon.com
Signed-off-by: Will Deacon <will@kernel.org>
3 years agodrivers/perf: hisi: Remove unnecessary check of counter index
Shaokun Zhang [Mon, 8 Mar 2021 06:50:29 +0000 (14:50 +0800)]
drivers/perf: hisi: Remove unnecessary check of counter index

The sanity check for the counter index is already done in
hisi_uncore_pmu_get_event_idx(), so remove the redundant
hisi_uncore_pmu_counter_valid() interface and its sanity check.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: John Garry <john.garry@huawei.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Co-developed-by: Qi Liu <liuqi115@huawei.com>
Signed-off-by: Qi Liu <liuqi115@huawei.com>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Link: https://lore.kernel.org/r/1615186237-22263-2-git-send-email-zhangshaokun@hisilicon.com
Signed-off-by: Will Deacon <will@kernel.org>
3 years agodrivers/perf: Simplify the SMMUv3 PMU event attributes
Qi Liu [Mon, 8 Feb 2021 13:04:58 +0000 (21:04 +0800)]
drivers/perf: Simplify the SMMUv3 PMU event attributes

For each PMU event there is both a SMMU_EVENT_ATTR(xx, XX) definition
and a matching &smmu_event_attr_xx.attr.attr entry. Let's redefine
SMMU_EVENT_ATTR to simplify the smmu_pmu_events array.

Signed-off-by: Qi Liu <liuqi115@huawei.com>
Link: https://lore.kernel.org/r/1612789498-12957-1-git-send-email-liuqi115@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
3 years agoarm64: entry: remove test_irqs_unmasked macro
Mark Rutland [Tue, 23 Mar 2021 18:12:01 +0000 (18:12 +0000)]
arm64: entry: remove test_irqs_unmasked macro

We haven't needed the test_irqs_unmasked macro since commit:

  105fc3352077bba5 ("arm64: entry: move el1 irq/nmi logic to C")

... and as we convert more of the entry logic to C it is decreasingly
likely we'll need it in future, so let's remove the unused macro.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Acked-by: Marc Zyngier <maz@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210323181201.18889-1-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agodrivers/perf: convert sysfs sprintf family to sysfs_emit
Qi Liu [Fri, 19 Mar 2021 10:04:33 +0000 (18:04 +0800)]
drivers/perf: convert sysfs sprintf family to sysfs_emit

sprintf() does not know the PAGE_SIZE maximum of the temporary buffer
used for sysfs content, so it is possible to overrun the buffer length.

Use the sysfs_emit() function to ensure that no overrun occurs.
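
As an illustration of the conversion (the show() callback and value
below are made up, not taken from one of the converted drivers):

  static ssize_t example_show(struct device *dev,
                              struct device_attribute *attr, char *buf)
  {
          int val = 42;   /* whatever the driver wants to report */

          /* Before: sprintf() cannot know buf is a PAGE_SIZE sysfs buffer. */
          /* return sprintf(buf, "%d\n", val); */

          /* After: sysfs_emit() clamps the output to PAGE_SIZE. */
          return sysfs_emit(buf, "%d\n", val);
  }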

Signed-off-by: Qi Liu <liuqi115@huawei.com>
Link: https://lore.kernel.org/r/1616148273-16374-4-git-send-email-liuqi115@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
3 years agodrivers/perf: convert sysfs scnprintf family to sysfs_emit_at() and sysfs_emit()
Qi Liu [Fri, 19 Mar 2021 10:04:32 +0000 (18:04 +0800)]
drivers/perf: convert sysfs scnprintf family to sysfs_emit_at() and sysfs_emit()

Use the generic sysfs_emit_at() and sysfs_emit() functions in place of
scnprintf().

Signed-off-by: Qi Liu <liuqi115@huawei.com>
Link: https://lore.kernel.org/r/1616148273-16374-3-git-send-email-liuqi115@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
3 years agodrivers/perf: convert sysfs snprintf family to sysfs_emit
Zihao Tang [Fri, 19 Mar 2021 10:04:31 +0000 (18:04 +0800)]
drivers/perf: convert sysfs snprintf family to sysfs_emit

Fix the following coccicheck warning:

./drivers/perf/hisilicon/hisi_uncore_pmu.c:128:8-16: WARNING: use scnprintf or sprintf.
./drivers/perf/fsl_imx8_ddr_perf.c:173:8-16: WARNING: use scnprintf or sprintf.
./drivers/perf/arm_spe_pmu.c:129:8-16: WARNING: use scnprintf or sprintf.
./drivers/perf/arm_smmu_pmu.c:563:8-16: WARNING: use scnprintf or sprintf.
./drivers/perf/arm_dsu_pmu.c:149:8-16: WARNING: use scnprintf or sprintf.
./drivers/perf/arm_dsu_pmu.c:139:8-16: WARNING: use scnprintf or sprintf.
./drivers/perf/arm-cmn.c:563:8-16: WARNING: use scnprintf or sprintf.
./drivers/perf/arm-cmn.c:351:8-16: WARNING: use scnprintf or sprintf.
./drivers/perf/arm-ccn.c:224:8-16: WARNING: use scnprintf or sprintf.
./drivers/perf/arm-cci.c:708:8-16: WARNING: use scnprintf or sprintf.
./drivers/perf/arm-cci.c:699:8-16: WARNING: use scnprintf or sprintf.
./drivers/perf/arm-cci.c:528:8-16: WARNING: use scnprintf or sprintf.
./drivers/perf/arm-cci.c:309:8-16: WARNING: use scnprintf or sprintf.

Signed-off-by: Zihao Tang <tangzihao1@hisilicon.com>
Signed-off-by: Qi Liu <liuqi115@huawei.com>
Link: https://lore.kernel.org/r/1616148273-16374-2-git-send-email-liuqi115@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
3 years agoarm64: irq: allow FIQs to be handled
Mark Rutland [Mon, 15 Mar 2021 11:56:29 +0000 (11:56 +0000)]
arm64: irq: allow FIQs to be handled

On contemporary platforms we don't use FIQ, and treat any stray FIQ as a
fatal event. However, some platforms have an interrupt controller wired
to FIQ, and need to handle FIQ as part of regular operation.

So that we can support both cases dynamically, this patch updates the
FIQ exception handling code to operate the same way as the IRQ handling
code, with its own handle_arch_fiq handler. Where a root FIQ handler is
not registered, an unexpected FIQ exception will trigger the default FIQ
handler, which will panic() as today. Where a root FIQ handler is
registered, handling of the FIQ is deferred to that handler.

As el0_fiq_invalid_compat is supplanted by el0_fiq, the former is
removed. For !CONFIG_COMPAT builds we never expect to take an exception
from AArch32 EL0, so we keep the common el0_fiq_invalid handler.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Hector Martin <marcan@marcan.st>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210315115629.57191-7-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agoarm64: Always keep DAIF.[IF] in sync
Hector Martin [Mon, 15 Mar 2021 11:56:28 +0000 (11:56 +0000)]
arm64: Always keep DAIF.[IF] in sync

Apple SoCs (A11 and newer) have some interrupt sources hardwired to the
FIQ line. We implement support for this by simply treating IRQs and FIQs
the same way in the interrupt vectors.

To support these systems, the FIQ mask bit needs to be kept in sync with
the IRQ mask bit, so both kinds of exceptions are masked together. No
other platforms should be delivering FIQ exceptions right now, and we
already unmask FIQ in normal process context, so this should not have an
effect on other systems - if spurious FIQs were arriving, they would
already panic the kernel.

Signed-off-by: Hector Martin <marcan@marcan.st>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Hector Martin <marcan@marcan.st>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210315115629.57191-6-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agoarm64: entry: factor irq triage logic into macros
Marc Zyngier [Mon, 15 Mar 2021 11:56:27 +0000 (11:56 +0000)]
arm64: entry: factor irq triage logic into macros

In subsequent patches we'll allow an FIQ handler to be registered, and
FIQ exceptions will need to be triaged very similarly to IRQ exceptions.
So that we can reuse the existing logic, this patch factors the IRQ
triage logic out into macros that can be reused for FIQ.

The macros are named to follow the elX_foo_handler scheme used by the C
exception handlers. For consistency with other top-level exception
handlers, the kernel_entry/kernel_exit logic is not moved into the
macros. As FIQ will use a different C handler, this handler name is
provided as an argument to the macros.

There should be no functional change as a result of this patch.

Signed-off-by: Marc Zyngier <maz@kernel.org>
[Mark: rework macros, commit message, rebase before DAIF rework]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Hector Martin <marcan@marcan.st>
Cc: James Morse <james.morse@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210315115629.57191-5-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agoarm64: irq: rework root IRQ handler registration
Mark Rutland [Mon, 15 Mar 2021 11:56:26 +0000 (11:56 +0000)]
arm64: irq: rework root IRQ handler registration

If we accidentally unmask IRQs before we've registered a root IRQ
handler, handle_arch_irq will be NULL, and the IRQ exception handler
will branch to a bogus address.

This patch initialises handle_arch_irq to a default handler which will
panic(), making such problems easier to debug. When we add support for
FIQ handlers, we can follow the same approach.

When we add support for a root FIQ handler, it will be possible to have
a root IRQ handler without a root FIQ handler, and in theory the inverse
is also possible. To permit this, and to keep the IRQ/FIQ registration
logic similar, this patch removes the panic in the absence of a root IRQ
controller. Instead, set_handle_irq() logs when a handler is registered,
which is sufficient for debug purposes.
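
The registration pattern described above looks roughly like this (a
sketch; the exact kernel implementation may differ in detail):

  static void default_handle_irq(struct pt_regs *regs)
  {
          panic("IRQ taken without a registered IRQ controller");
  }

  void (*handle_arch_irq)(struct pt_regs *) __ro_after_init = default_handle_irq;

  int __init set_handle_irq(void (*handle_irq)(struct pt_regs *))
  {
          /* Refuse to replace an already registered root handler. */
          if (WARN_ON(handle_arch_irq != default_handle_irq))
                  return -EBUSY;

          handle_arch_irq = handle_irq;
          pr_info("Root IRQ handler: %ps\n", handle_irq);
          return 0;
  }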

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Hector Martin <marcan@marcan.st>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210315115629.57191-4-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agoarm64: don't use GENERIC_IRQ_MULTI_HANDLER
Marc Zyngier [Mon, 15 Mar 2021 11:56:25 +0000 (11:56 +0000)]
arm64: don't use GENERIC_IRQ_MULTI_HANDLER

In subsequent patches we want to allow irqchip drivers to register as
FIQ handlers, with a set_handle_fiq() function. To keep the IRQ/FIQ
paths similar, we want arm64 to provide both set_handle_irq() and
set_handle_fiq(), rather than using GENERIC_IRQ_MULTI_HANDLER for the
former.

This patch adds an arm64-specific implementation of set_handle_irq().
There should be no functional change as a result of this patch.

Signed-off-by: Marc Zyngier <maz@kernel.org>
[Mark: use a single handler pointer]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Hector Martin <marcan@marcan.st>
Cc: James Morse <james.morse@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210315115629.57191-3-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agogenirq: Allow architectures to override set_handle_irq() fallback
Marc Zyngier [Mon, 15 Mar 2021 11:56:24 +0000 (11:56 +0000)]
genirq: Allow architectures to override set_handle_irq() fallback

Some architectures want to provide the generic set_handle_irq() API, but
for structural reasons need to provide their own implementation. For
example, arm64 needs to do this to provide uniform set_handle_irq() and
set_handle_fiq() registration functions.

Make this possible by allowing architectures to provide their own
implementation of set_handle_irq when CONFIG_GENERIC_IRQ_MULTI_HANDLER
is not selected.

Signed-off-by: Marc Zyngier <maz@kernel.org>
[Mark: expand commit message]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Hector Martin <marcan@marcan.st>
Cc: James Morse <james.morse@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210315115629.57191-2-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agoarm64: compat: Poison the compat sigpage
Will Deacon [Thu, 18 Mar 2021 17:07:38 +0000 (17:07 +0000)]
arm64: compat: Poison the compat sigpage

Commit 9c698bff66ab ("ARM: ensure the signal page contains defined contents")
poisoned the unused portions of the signal page for 32-bit Arm.

Implement the same poisoning for the compat signal page on arm64 rather
than using __GFP_ZERO.

Signed-off-by: Will Deacon <will@kernel.org>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Link: https://lore.kernel.org/r/20210318170738.7756-6-will@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agoarm64: vdso: Avoid ISB after reading from cntvct_el0
Will Deacon [Thu, 18 Mar 2021 17:07:37 +0000 (17:07 +0000)]
arm64: vdso: Avoid ISB after reading from cntvct_el0

We can avoid the expensive ISB instruction after reading the counter in
the vDSO gettime functions by creating a fake address hazard against a
dummy stack read, just like we do inside the kernel.
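
The fake hazard works by deriving an address from the counter value (an
EOR of the value with itself yields zero, but the result still depends
on the value) and using it in a dummy load from the stack, so the load
cannot be issued until the counter value is available. Roughly:

  #define arch_counter_enforce_ordering(val) do {               \
          u64 tmp, _val = (val);                                 \
                                                                 \
          asm volatile(                                          \
          "       eor     %0, %1, %1\n"                          \
          "       add     %0, sp, %0\n"                          \
          "       ldr     xzr, [%0]"                             \
          : "=r" (tmp) : "r" (_val));                            \
  } while (0)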

Signed-off-by: Will Deacon <will@kernel.org>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Link: https://lore.kernel.org/r/20210318170738.7756-5-will@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agoarm64: compat: Allow signal page to be remapped
Will Deacon [Thu, 18 Mar 2021 17:07:36 +0000 (17:07 +0000)]
arm64: compat: Allow signal page to be remapped

For compatibility with 32-bit Arm, allow the compat signal page to be
remapped via mremap().

Signed-off-by: Will Deacon <will@kernel.org>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Link: https://lore.kernel.org/r/20210318170738.7756-4-will@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agoarm64: vdso: Remove redundant calls to flush_dcache_page()
Will Deacon [Thu, 18 Mar 2021 17:07:35 +0000 (17:07 +0000)]
arm64: vdso: Remove redundant calls to flush_dcache_page()

flush_dcache_page() ensures that the 'PG_dcache_clean' flag for its
'page' argument is clear so that cache maintenance will be performed
if the page is mapped into userspace with execute permissions.

Newly allocated pages have this flag clear, so there is no need to call
flush_dcache_page() for the compat vdso or signal pages. Remove the
redundant calls.

Signed-off-by: Will Deacon <will@kernel.org>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Link: https://lore.kernel.org/r/20210318170738.7756-3-will@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agoarm64: vdso: Use GFP_KERNEL for allocating compat vdso and signal pages
Will Deacon [Thu, 18 Mar 2021 17:07:34 +0000 (17:07 +0000)]
arm64: vdso: Use GFP_KERNEL for allocating compat vdso and signal pages

There's no need to allocate the compat vDSO and signal pages using
GFP_ATOMIC allocations, so use GFP_KERNEL instead.

Signed-off-by: Will Deacon <will@kernel.org>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Link: https://lore.kernel.org/r/20210318170738.7756-2-will@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agokselftest: arm64: Add BTI tests
Mark Brown [Tue, 9 Mar 2021 19:37:31 +0000 (19:37 +0000)]
kselftest: arm64: Add BTI tests

Add some tests that verify that BTI functions correctly for static binaries
built with and without BTI support, verifying that SIGILL is generated when
expected and is not generated in other situations.

Since BTI support is still being rolled out in distributions, these
tests are built entirely free-standing; no libc support is used at all,
so none of the standard helper functions for kselftest can be used and
we open-code everything. This also means we aren't testing the kernel
support for
the dynamic linker, though the test program can be readily adapted for
that once it becomes something that we can reliably build and run.

These tests were originally written by Dave Martin, I've adapted them for
kselftest, mainly around the build system and the output format.

Signed-off-by: Mark Brown <broonie@kernel.org>
Cc: Dave Martin <Dave.Martin@arm.com>
Link: https://lore.kernel.org/r/20210309193731.57247-1-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agokselftest/arm64: mte: Report filename on failing temp file creation
Andre Przywara [Fri, 19 Mar 2021 16:53:34 +0000 (16:53 +0000)]
kselftest/arm64: mte: Report filename on failing temp file creation

The MTE selftests create temporary files in /dev/shm for later
mmap-ing. When there is no tmpfs mounted on /dev/shm, or /dev/shm does
not exist in the first place (on minimal filesystems), the error message
does not give good hints:
    # FAIL: Unable to open temporary file
    # FAIL: memory allocation
    not ok 17 Check initial tags with private mapping, ...

Add a perror() call, that gives both the filename and the actual error
reason, so that users get a chance of correcting that.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210319165334.29213-12-andre.przywara@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agokselftest/arm64: mte: Fix clang warning
Andre Przywara [Fri, 19 Mar 2021 16:53:33 +0000 (16:53 +0000)]
kselftest/arm64: mte: Fix clang warning

if (!prctl(...) == 0) is not only cumbersome to read, it also upsets
clang and triggers a warning:
------------
mte_common_util.c:287:6: warning: logical not is only applied to the
left hand side of this comparison [-Wlogical-not-parentheses]
....

Fix that by just comparing against "not 0" instead.
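
Illustratively (the prctl() arguments here are placeholders, not the
test's actual ones):

  /* Before: '!' binds only to the prctl() call, which clang flags. */
  if (!prctl(PR_SET_TAGGED_ADDR_CTRL, ctrl, 0, 0, 0) == 0)
          return false;

  /* After: compare the return value against 0 directly. */
  if (prctl(PR_SET_TAGGED_ADDR_CTRL, ctrl, 0, 0, 0) != 0)
          return false;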

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210319165334.29213-11-andre.przywara@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agokselftest/arm64: mte: Makefile: Fix clang compilation
Andre Przywara [Fri, 19 Mar 2021 16:53:32 +0000 (16:53 +0000)]
kselftest/arm64: mte: Makefile: Fix clang compilation

When clang finds a header file on the command line, it wants to
precompile that, which would end up in a separate output file.
Specifying -o on that same command line collides with that effort, so
the compiler complains:

clang: error: cannot specify -o when generating multiple output files

Since we are not really after a precompiled header, just drop the header
file from the command line, by removing it from the list of source
files in the Makefile.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210319165334.29213-10-andre.przywara@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agokselftest/arm64: mte: Output warning about failing compiler
Andre Przywara [Fri, 19 Mar 2021 16:53:31 +0000 (16:53 +0000)]
kselftest/arm64: mte: Output warning about failing compiler

At the moment we check the compiler's ability to compile MTE-enabled
code, but we gate all the Makefile rules on that check. As a
consequence, a broken or incapable compiler just doesn't do anything,
and make happily returns without any error message, but with no
programs created.

Since the MTE feature is only supported by recent aarch64 compilers (not
all stable distro compilers support it), having an explicit message
seems like a good idea. To not break building multiple targets, we let
make proceed without errors.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210319165334.29213-9-andre.przywara@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agokselftest/arm64: mte: Use cross-compiler if specified
Andre Przywara [Fri, 19 Mar 2021 16:53:30 +0000 (16:53 +0000)]
kselftest/arm64: mte: Use cross-compiler if specified

At the moment we either need to provide CC explicitly, or use a native
machine to get the ARM64 MTE selftest compiled.

It seems useful to use the same (cross-)compiler as we use for the
kernel, so copy the recipe we use in the pauth selftest.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210319165334.29213-8-andre.przywara@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agokselftest/arm64: mte: Fix MTE feature detection
Andre Przywara [Fri, 19 Mar 2021 16:53:29 +0000 (16:53 +0000)]
kselftest/arm64: mte: Fix MTE feature detection

To check whether the CPU and kernel support the MTE features we want
to test, we use an (emulated) CPU ID register read. However we only
check against a very particular feature version (0b0010), even though
the ARM ARM promises ID register features to be backwards compatible.

While this could be fixed by using ">=" instead of "==", we should
actually use the explicit HWCAP2_MTE hardware capability, exposed by the
kernel via the ELF auxiliary vectors.

That moves this responsibility to the kernel, and fixes running the
tests on machines with FEAT_MTE3 capability.
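
A userspace check along these lines (a sketch, not the exact selftest
code):

  #include <stdbool.h>
  #include <sys/auxv.h>

  #ifndef HWCAP2_MTE
  #define HWCAP2_MTE      (1UL << 18)     /* as defined in asm/hwcap.h */
  #endif

  static bool mte_supported(void)
  {
          /* The kernel advertises MTE via the ELF auxiliary vector. */
          return getauxval(AT_HWCAP2) & HWCAP2_MTE;
  }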

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210319165334.29213-7-andre.przywara@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agokselftest/arm64: mte: common: Fix write() warnings
Andre Przywara [Fri, 19 Mar 2021 16:53:28 +0000 (16:53 +0000)]
kselftest/arm64: mte: common: Fix write() warnings

Out of the box, Ubuntu 20.04's compiler warns about missing return
value checks for write() (sys)calls.

Make GCC happy by checking whether we actually managed to write out our
buffer.
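
For example (the fd, buffer and length names are illustrative):

  if (write(fd, buffer, len) != len) {
          perror("write() failed");
          return -1;
  }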

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210319165334.29213-6-andre.przywara@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agokselftest/arm64: mte: user_mem: Fix write() warning
Andre Przywara [Fri, 19 Mar 2021 16:53:27 +0000 (16:53 +0000)]
kselftest/arm64: mte: user_mem: Fix write() warning

Out of the box, Ubuntu 20.04's compiler warns about missing return
value checks for write() (sys)calls.

Make GCC happy by checking whether we actually managed to write "val".

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210319165334.29213-5-andre.przywara@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agokselftest/arm64: mte: ksm_options: Fix fscanf warning
Andre Przywara [Fri, 19 Mar 2021 16:53:26 +0000 (16:53 +0000)]
kselftest/arm64: mte: ksm_options: Fix fscanf warning

Out of the box, Ubuntu 20.04's compiler warns about missing return
value checks for fscanf() calls.

Make GCC happy by checking whether we actually parsed one integer.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210319165334.29213-4-andre.przywara@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agokselftest/arm64: mte: Fix pthread linking
Andre Przywara [Fri, 19 Mar 2021 16:53:25 +0000 (16:53 +0000)]
kselftest/arm64: mte: Fix pthread linking

The GCC manual suggests using -pthread when linking with the pthread
library, and adding this switch to both the compilation and linking
stages.

Do as the manual says, to fix compilation with Ubuntu's 20.04 toolchain,
which was getting -lpthread too early on the command line:
------------
/usr/bin/ld: /tmp/cc5zbo2A.o: in function `execute_test':
tools/testing/selftests/arm64/mte/check_gcr_el1_cswitch.c:86:
undefined reference to `pthread_create'
/usr/bin/ld: tools/testing/selftests/arm64/mte/check_gcr_el1_cswitch.c:90:
undefined reference to `pthread_join'
------------

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210319165334.29213-3-andre.przywara@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agokselftest/arm64: mte: Fix compilation with native compiler
Andre Przywara [Fri, 19 Mar 2021 16:53:24 +0000 (16:53 +0000)]
kselftest/arm64: mte: Fix compilation with native compiler

The mte selftest Makefile contains a check for GCC, to add the memtag
-march flag to the compiler options. This check fails if the compiler
is not explicitly specified and so defaults to the standard "cc", in
which case --version doesn't mention the "gcc" string we match against:
$ cc --version | head -n 1
cc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0

This will not add the -march switch to the command line, so compilation
fails:
mte_helper.S: Assembler messages:
mte_helper.S:25: Error: selected processor does not support `irg x0,x0,xzr'
mte_helper.S:38: Error: selected processor does not support `gmi x1,x0,xzr'
...

Actually clang accepts the same -march option as well, so we can just
drop this check and add this unconditionally to the command line, to avoid
any future issues with this check altogether (gcc actually prints
basename(argv[0]) when called with --version).

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210319165334.29213-2-andre.przywara@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agoarm64: mm: use XN table mapping attributes for user/kernel mappings
Ard Biesheuvel [Wed, 10 Mar 2021 10:49:42 +0000 (11:49 +0100)]
arm64: mm: use XN table mapping attributes for user/kernel mappings

As the kernel and user space page tables are strictly mutually exclusive
when it comes to executable permissions, we can set the UXN table attribute
on all table entries that are created while creating kernel mappings in the
swapper page tables, and the PXN table attribute on all table entries that
are created while creating user space mappings in user space page tables.

While at it, get rid of a redundant comment.

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210310104942.174584-4-ardb@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agoarm64: mm: use XN table mapping attributes for the linear region
Ard Biesheuvel [Wed, 10 Mar 2021 10:49:41 +0000 (11:49 +0100)]
arm64: mm: use XN table mapping attributes for the linear region

The way the arm64 kernel virtual address space is constructed guarantees
that swapper PGD entries are never shared between the linear region on
the one hand, and the vmalloc region on the other, which is where all
kernel text, module text and BPF text mappings reside.

This means that mappings in the linear region (which never require
executable permissions) never share any table entries at any level with
mappings that do require executable permissions, and so we can set the
table-level PXN attributes for all table entries that are created while
setting up mappings in the linear region. Since swapper's PGD level page
table is mapped r/o itself, this adds another layer of robustness to the
way the kernel manages its own page tables. While at it, set the UXN
attribute as well for all kernel mappings created at boot.
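
In descriptor terms this amounts to OR-ing the table-level PXN (and
UXN) attributes into the next-level table descriptors created for such
mappings, roughly (bit positions per the Armv8 translation table
format; the macro names here are illustrative, not the kernel's):

  /* Stage-1 table descriptor bits: PXNTable is bit 59, UXNTable bit 60. */
  #define TABLE_PXN       (UL(1) << 59)
  #define TABLE_UXN       (UL(1) << 60)

  /* Linear-map tables never need to be executable by anyone. */
  tableval |= TABLE_PXN | TABLE_UXN;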

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/20210310104942.174584-3-ardb@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agoarm64: mm: add missing P4D definitions and use them consistently
Ard Biesheuvel [Wed, 10 Mar 2021 10:49:40 +0000 (11:49 +0100)]
arm64: mm: add missing P4D definitions and use them consistently

Even though level 0, 1 and 2 descriptors share the same attribute
encodings, let's be a bit more consistent about using the right one at
the right level. So add new macros for level 0/P4D definitions, and
clean up some inconsistencies involving these macros.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210310104942.174584-2-ardb@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agoARM64: enable GENERIC_FIND_FIRST_BIT
Yury Norov [Thu, 25 Feb 2021 13:56:59 +0000 (05:56 -0800)]
ARM64: enable GENERIC_FIND_FIRST_BIT

ARM64 doesn't implement find_first_{zero}_bit() in arch code and
doesn't enable it in its config. This leads to using find_next_bit(),
which is less efficient:

0000000000000000 <find_first_bit>:
   0: aa0003e4  mov x4, x0
   4: aa0103e0  mov x0, x1
   8: b4000181  cbz x1, 38 <find_first_bit+0x38>
   c: f9400083  ldr x3, [x4]
  10: d2800802  mov x2, #0x40                   // #64
  14: 91002084  add x4, x4, #0x8
  18: b40000c3  cbz x3, 30 <find_first_bit+0x30>
  1c: 14000008  b 3c <find_first_bit+0x3c>
  20: f8408483  ldr x3, [x4], #8
  24: 91010045  add x5, x2, #0x40
  28: b50000c3  cbnz x3, 40 <find_first_bit+0x40>
  2c: aa0503e2  mov x2, x5
  30: eb02001f  cmp x0, x2
  34: 54ffff68  b.hi 20 <find_first_bit+0x20>  // b.pmore
  38: d65f03c0  ret
  3c: d2800002  mov x2, #0x0                    // #0
  40: dac00063  rbit x3, x3
  44: dac01063  clz x3, x3
  48: 8b020062  add x2, x3, x2
  4c: eb02001f  cmp x0, x2
  50: 9a829000  csel x0, x0, x2, ls  // ls = plast
  54: d65f03c0  ret

  ...

0000000000000118 <_find_next_bit.constprop.1>:
 118: eb02007f  cmp x3, x2
 11c: 540002e2  b.cs 178 <_find_next_bit.constprop.1+0x60>  // b.hs, b.nlast
 120: d346fc66  lsr x6, x3, #6
 124: f8667805  ldr x5, [x0, x6, lsl #3]
 128: b4000061  cbz x1, 134 <_find_next_bit.constprop.1+0x1c>
 12c: f8667826  ldr x6, [x1, x6, lsl #3]
 130: 8a0600a5  and x5, x5, x6
 134: ca0400a6  eor x6, x5, x4
 138: 92800005  mov x5, #0xffffffffffffffff     // #-1
 13c: 9ac320a5  lsl x5, x5, x3
 140: 927ae463  and x3, x3, #0xffffffffffffffc0
 144: ea0600a5  ands x5, x5, x6
 148: 54000120  b.eq 16c <_find_next_bit.constprop.1+0x54>  // b.none
 14c: 1400000e  b 184 <_find_next_bit.constprop.1+0x6c>
 150: d346fc66  lsr x6, x3, #6
 154: f8667805  ldr x5, [x0, x6, lsl #3]
 158: b4000061  cbz x1, 164 <_find_next_bit.constprop.1+0x4c>
 15c: f8667826  ldr x6, [x1, x6, lsl #3]
 160: 8a0600a5  and x5, x5, x6
 164: eb05009f  cmp x4, x5
 168: 540000c1  b.ne 180 <_find_next_bit.constprop.1+0x68>  // b.any
 16c: 91010063  add x3, x3, #0x40
 170: eb03005f  cmp x2, x3
 174: 54fffee8  b.hi 150 <_find_next_bit.constprop.1+0x38>  // b.pmore
 178: aa0203e0  mov x0, x2
 17c: d65f03c0  ret
 180: ca050085  eor x5, x4, x5
 184: dac000a5  rbit x5, x5
 188: dac010a5  clz x5, x5
 18c: 8b0300a3  add x3, x5, x3
 190: eb03005f  cmp x2, x3
 194: 9a839042  csel x2, x2, x3, ls  // ls = plast
 198: aa0203e0  mov x0, x2
 19c: d65f03c0  ret

 ...

0000000000000238 <find_next_bit>:
 238: a9bf7bfd  stp x29, x30, [sp, #-16]!
 23c: aa0203e3  mov x3, x2
 240: d2800004  mov x4, #0x0                    // #0
 244: aa0103e2  mov x2, x1
 248: 910003fd  mov x29, sp
 24c: d2800001  mov x1, #0x0                    // #0
 250: 97ffffb2  bl 118 <_find_next_bit.constprop.1>
 254: a8c17bfd  ldp x29, x30, [sp], #16
 258: d65f03c0  ret

Enabling find_{first,next}_bit() would also benefit for_each_{set,clear}_bit().
On an A-53, find_first_bit() is almost twice as fast as find_next_bit(),
according to lib/find_bit_benchmark (thanks to Alexey for testing):

GENERIC_FIND_FIRST_BIT=n:
[7126084.948181] find_first_bit:               47389224 ns,  16357 iterations
[7126085.032315] find_first_bit:               19048193 ns,    655 iterations

GENERIC_FIND_FIRST_BIT=y:
[   84.158068] find_first_bit:               27193319 ns,  16406 iterations
[   84.233005] find_first_bit:               11082437 ns,    656 iterations

GENERIC_FIND_FIRST_BIT=n bloats the kernel even though it disables the
generation of find_{first,next}_bit():

        yury:linux$ scripts/bloat-o-meter vmlinux vmlinux.ffb
        add/remove: 4/1 grow/shrink: 19/251 up/down: 564/-1692 (-1128)
        ...

Overall, GENERIC_FIND_FIRST_BIT=n is harmful both in terms of performance and
code size, and it's better to have GENERIC_FIND_FIRST_BIT enabled.

Tested-by: Alexey Klimov <aklimov@redhat.com>
Signed-off-by: Yury Norov <yury.norov@gmail.com>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210225135700.1381396-2-yury.norov@gmail.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agoarm64: defconfig: Use DEBUG_INFO_REDUCED
Mark Brown [Thu, 4 Mar 2021 17:44:07 +0000 (17:44 +0000)]
arm64: defconfig: Use DEBUG_INFO_REDUCED

We've had DEBUG_INFO enabled for arm64 defconfigs since the initial
commit.  This is probably not that frequently used but substantially
inflates the size of the build tree and amount of I/O needed during the
build.  This was causing issues with storage usage in some automated CI
environments which don't expect defconfigs to be quite this big, and
generally increases the resource consumption for both them and people
doing local builds.  The main use case for the debug info is decoding
things with scripts/faddr2line, but that doesn't need the full
DEBUG_INFO; DEBUG_INFO_REDUCED is enough for it, so enable that by
default.

Without this patch my build tree is 6.8G, with it the size drops to 2G
(smaller than the 6.4G for allmodconfig!).

Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Guillaume Tucker <guillaume.tucker@collabora.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Kevin Hilman <khilman@baylibre.com>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210304174407.17537-1-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 years agoLinux 5.12-rc3
Linus Torvalds [Sun, 14 Mar 2021 21:41:02 +0000 (14:41 -0700)]
Linux 5.12-rc3

3 years agoprctl: fix PR_SET_MM_AUXV kernel stack leak
Alexey Dobriyan [Sun, 14 Mar 2021 20:51:14 +0000 (23:51 +0300)]
prctl: fix PR_SET_MM_AUXV kernel stack leak

Doing a

prctl(PR_SET_MM, PR_SET_MM_AUXV, addr, 1);

will copy 1 byte from userspace to a (quite big) on-stack array
and then stash everything to mm->saved_auxv.
AT_NULL terminator will be inserted at the very end.

/proc/*/auxv handler will find that AT_NULL terminator
and copy original stack contents to userspace.

This devious scheme requires CAP_SYS_RESOURCE.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
3 years agoMerge tag 'irq-urgent-2021-03-14' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Sun, 14 Mar 2021 20:33:33 +0000 (13:33 -0700)]
Merge tag 'irq-urgent-2021-03-14' of git://git./linux/kernel/git/tip/tip

Pull irq fixes from Thomas Gleixner:
 "A set of irqchip updates:

   - Make the GENERIC_IRQ_MULTI_HANDLER configuration correct

   - Add a missing DT compatible string for the Ingenic driver

   - Remove the pointless debugfs_file pointer from struct irqdomain"

* tag 'irq-urgent-2021-03-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  irqchip/ingenic: Add support for the JZ4760
  dt-bindings/irq: Add compatible string for the JZ4760B
  irqchip: Do not blindly select CONFIG_GENERIC_IRQ_MULTI_HANDLER
  ARM: ep93xx: Select GENERIC_IRQ_MULTI_HANDLER directly
  irqdomain: Remove debugfs_file from struct irq_domain

3 years agoMerge tag 'timers-urgent-2021-03-14' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Sun, 14 Mar 2021 20:29:38 +0000 (13:29 -0700)]
Merge tag 'timers-urgent-2021-03-14' of git://git./linux/kernel/git/tip/tip

Pull timer fix from Thomas Gleixner:
 "A single fix in for hrtimers to prevent an interrupt storm caused by
  the lack of reevaluation of the timers which expire in softirq context
  under certain circumstances, e.g. when the clock was set"

* tag 'timers-urgent-2021-03-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  hrtimer: Update softirq_expires_next correctly after __hrtimer_get_next_event()

3 years agoMerge tag 'sched-urgent-2021-03-14' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Sun, 14 Mar 2021 20:27:06 +0000 (13:27 -0700)]
Merge tag 'sched-urgent-2021-03-14' of git://git./linux/kernel/git/tip/tip

Pull scheduler fixes from Thomas Gleixner:
 "A set of scheduler updates:

   - Prevent a NULL pointer dereference in the migration_stop_cpu()
     mechanism

   - Prevent self concurrency of affine_move_task()

   - Small fixes and cleanups related to task migration/affinity setting

   - Ensure that sync_runqueues_membarrier_state() is invoked on the
     current CPU when it is in the cpu mask"

* tag 'sched-urgent-2021-03-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  sched/membarrier: fix missing local execution of ipi_sync_rq_state()
  sched: Simplify set_affinity_pending refcounts
  sched: Fix affine_move_task() self-concurrency
  sched: Optimize migration_cpu_stop()
  sched: Collate affine_move_task() stoppers
  sched: Simplify migration_cpu_stop()
  sched: Fix migration_cpu_stop() requeueing

3 years agoMerge tag 'objtool-urgent-2021-03-14' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Sun, 14 Mar 2021 20:15:55 +0000 (13:15 -0700)]
Merge tag 'objtool-urgent-2021-03-14' of git://git./linux/kernel/git/tip/tip

Pull objtool fix from Thomas Gleixner:
 "A single objtool fix to handle the PUSHF/POPF validation correctly for
  the paravirt changes which modified arch_local_irq_restore not to use
  popf"

* tag 'objtool-urgent-2021-03-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  objtool,x86: Fix uaccess PUSHF/POPF validation

3 years agoMerge tag 'locking-urgent-2021-03-14' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Sun, 14 Mar 2021 20:03:21 +0000 (13:03 -0700)]
Merge tag 'locking-urgent-2021-03-14' of git://git./linux/kernel/git/tip/tip

Pull locking fixes from Thomas Gleixner:
 "A couple of locking fixes:

   - A fix for the static_call mechanism so it handles unaligned
     addresses correctly.

   - Make u64_stats_init() a macro so every instance gets a separate
     lockdep key.

   - Make seqcount_latch_init() a macro as well to preserve the static
     variable which is used for the lockdep key"

* tag 'locking-urgent-2021-03-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  seqlock,lockdep: Fix seqcount_latch_init()
  u64_stats,lockdep: Fix u64_stats_init() vs lockdep
  static_call: Fix the module key fixup

3 years agoMerge tag 'perf_urgent_for_v5.12-rc3' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Sun, 14 Mar 2021 19:57:17 +0000 (12:57 -0700)]
Merge tag 'perf_urgent_for_v5.12-rc3' of git://git./linux/kernel/git/tip/tip

Pull perf fixes from Borislav Petkov:

 - Make sure PMU internal buffers are flushed for per-CPU events too and
   properly handle PID/TID for large PEBS.

 - Handle the case properly when there's no PMU and therefore return an
   empty list of perf MSRs for VMX to switch instead of reading random
   garbage from the stack.

* tag 'perf_urgent_for_v5.12-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/perf: Use RET0 as default for guest_get_msrs to handle "no PMU" case
  perf/x86/intel: Set PERF_ATTACH_SCHED_CB for large PEBS and LBR
  perf/core: Flush PMU internal buffers for per-CPU events

3 years agoMerge tag 'efi-urgent-for-v5.12-rc2' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Sun, 14 Mar 2021 19:54:56 +0000 (12:54 -0700)]
Merge tag 'efi-urgent-for-v5.12-rc2' of git://git./linux/kernel/git/tip/tip

Pull EFI fix from Ard Biesheuvel via Borislav Petkov:
 "Fix an oversight in the handling of EFI_RT_PROPERTIES_TABLE, which was
  added in v5.10, but failed to take the SetVirtualAddressMap() RT service
  into account"

* tag 'efi-urgent-for-v5.12-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  efi: stub: omit SetVirtualAddressMap() if marked unsupported in RT_PROP table

3 years agoMerge tag 'x86_urgent_for_v5.12_rc3' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Sun, 14 Mar 2021 19:48:10 +0000 (12:48 -0700)]
Merge tag 'x86_urgent_for_v5.12_rc3' of git://git./linux/kernel/git/tip/tip

Pull x86 fixes from Borislav Petkov:

 - A couple of SEV-ES fixes and robustifications: verify usermode stack
   pointer in NMI is not coming from the syscall gap, correctly track
   IRQ states in the #VC handler and access user insn bytes atomically
   in same handler as latter cannot sleep.

 - Balance 32-bit fast syscall exit path to do the proper work on exit
   and thus not confuse audit and ptrace frameworks.

 - Two fixes for the ORC unwinder going "off the rails" into KASAN
   redzones and when ORC data is missing.

* tag 'x86_urgent_for_v5.12_rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/sev-es: Use __copy_from_user_inatomic()
  x86/sev-es: Correctly track IRQ states in runtime #VC handler
  x86/sev-es: Check regs->sp is trusted before adjusting #VC IST stack
  x86/sev-es: Introduce ip_within_syscall_gap() helper
  x86/entry: Fix entry/exit mismatch on failed fast 32-bit syscalls
  x86/unwind/orc: Silence warnings caused by missing ORC data
  x86/unwind/orc: Disable KASAN checking in the ORC unwinder, part 2