platform/kernel/linux-starfive.git
2 years agoMerge branch 'for-next/pfn-valid' into for-next/core
Will Deacon [Fri, 29 Oct 2021 11:25:19 +0000 (12:25 +0100)]
Merge branch 'for-next/pfn-valid' into for-next/core

* for-next/pfn-valid:
  arm64/mm: drop HAVE_ARCH_PFN_VALID
  dma-mapping: remove bogus test for pfn_valid from dma_map_resource

2 years agoMerge branch 'for-next/perf' into for-next/core
Will Deacon [Fri, 29 Oct 2021 11:25:12 +0000 (12:25 +0100)]
Merge branch 'for-next/perf' into for-next/core

* for-next/perf:
  drivers/perf: Improve build test coverage
  drivers/perf: thunderx2_pmu: Change data in size tx2_uncore_event_update()
  drivers/perf: hisi: Fix PA PMU counter offset

2 years agoMerge branch 'for-next/mte' into for-next/core
Will Deacon [Fri, 29 Oct 2021 11:25:08 +0000 (12:25 +0100)]
Merge branch 'for-next/mte' into for-next/core

* for-next/mte:
  kasan: Extend KASAN mode kernel parameter
  arm64: mte: Add asymmetric mode support
  arm64: mte: CPU feature detection for Asymm MTE
  arm64: mte: Bitfield definitions for Asymm MTE
  kasan: Remove duplicate of kasan_flag_async
  arm64: kasan: mte: move GCR_EL1 switch to task switch when KASAN disabled

2 years agoMerge branch 'for-next/mm' into for-next/core
Will Deacon [Fri, 29 Oct 2021 11:25:04 +0000 (12:25 +0100)]
Merge branch 'for-next/mm' into for-next/core

* for-next/mm:
  arm64: mm: update max_pfn after memory hotplug
  arm64/mm: Add pud_sect_supported()
  arm64: mm: Drop pointless call to set_max_mapnr()

2 years agoMerge branch 'for-next/misc' into for-next/core
Will Deacon [Fri, 29 Oct 2021 11:24:59 +0000 (12:24 +0100)]
Merge branch 'for-next/misc' into for-next/core

* for-next/misc:
  arm64: Select POSIX_CPU_TIMERS_TASK_WORK
  arm64: Document boot requirements for FEAT_SME_FA64
  arm64: ftrace: use function_nocfi for _mcount as well
  arm64: asm: setup.h: export common variables
  arm64/traps: Avoid unnecessary kernel/user pointer conversion

2 years agoMerge branch 'for-next/kselftest' into for-next/core
Will Deacon [Fri, 29 Oct 2021 11:24:53 +0000 (12:24 +0100)]
Merge branch 'for-next/kselftest' into for-next/core

* for-next/kselftest:
  selftests: arm64: Factor out utility functions for assembly FP tests
  selftests: arm64: Add coverage of ptrace flags for SVE VL inheritance
  selftests: arm64: Verify that all possible vector lengths are handled
  selftests: arm64: Fix and enable test for setting current VL in vec-syscfg
  selftests: arm64: Remove bogus error check on writing to files
  selftests: arm64: Fix printf() format mismatch in vec-syscfg
  selftests: arm64: Move FPSIMD in SVE ptrace test into a function
  selftests: arm64: More comprehensively test the SVE ptrace interface
  selftests: arm64: Verify interoperation of SVE and FPSIMD register sets
  selftests: arm64: Clarify output when verifying SVE register set
  selftests: arm64: Document what the SVE ptrace test is doing
  selftests: arm64: Remove extraneous register setting code
  selftests: arm64: Don't log child creation as a test in SVE ptrace test
  selftests: arm64: Use a define for the number of SVE ptrace tests to be run

2 years agoMerge branch 'for-next/kexec' into for-next/core
Will Deacon [Fri, 29 Oct 2021 11:24:47 +0000 (12:24 +0100)]
Merge branch 'for-next/kexec' into for-next/core

* for-next/kexec:
  arm64: trans_pgd: remove trans_pgd_map_page()
  arm64: kexec: remove cpu-reset.h
  arm64: kexec: remove the pre-kexec PoC maintenance
  arm64: kexec: keep MMU enabled during kexec relocation
  arm64: kexec: install a copy of the linear-map
  arm64: kexec: use ld script for relocation function
  arm64: kexec: relocate in EL1 mode
  arm64: kexec: configure EL2 vectors for kexec
  arm64: kexec: pass kimage as the only argument to relocation function
  arm64: kexec: Use dcache ops macros instead of open-coding
  arm64: kexec: skip relocation code for inplace kexec
  arm64: kexec: flush image and lists during kexec load time
  arm64: hibernate: abstract ttrb0 setup function
  arm64: trans_pgd: hibernate: Add trans_pgd_copy_el2_vectors
  arm64: kernel: add helper for booted at EL2 and not VHE

2 years agoMerge branch 'for-next/extable' into for-next/core
Will Deacon [Fri, 29 Oct 2021 11:24:37 +0000 (12:24 +0100)]
Merge branch 'for-next/extable' into for-next/core

* for-next/extable:
  arm64: vmlinux.lds.S: remove `.fixup` section
  arm64: extable: add load_unaligned_zeropad() handler
  arm64: extable: add a dedicated uaccess handler
  arm64: extable: add `type` and `data` fields
  arm64: extable: use `ex` for `exception_table_entry`
  arm64: extable: make fixup_exception() return bool
  arm64: extable: consolidate definitions
  arm64: gpr-num: support W registers
  arm64: factor out GPR numbering helpers
  arm64: kvm: use kvm_exception_table_entry
  arm64: lib: __arch_copy_to_user(): fold fixups into body
  arm64: lib: __arch_copy_from_user(): fold fixups into body
  arm64: lib: __arch_clear_user(): fold fixups into body

2 years agoMerge branch 'for-next/8.6-timers' into for-next/core
Will Deacon [Fri, 29 Oct 2021 11:20:21 +0000 (12:20 +0100)]
Merge branch 'for-next/8.6-timers' into for-next/core

* for-next/8.6-timers:
  arm64: Add HWCAP for self-synchronising virtual counter
  arm64: Add handling of CNTVCTSS traps
  arm64: Add CNT{P,V}CTSS_EL0 alternatives to cnt{p,v}ct_el0
  arm64: Add a capability for FEAT_ECV
  clocksource/drivers/arch_arm_timer: Move workaround synchronisation around
  clocksource/drivers/arm_arch_timer: Fix masking for high freq counters
  clocksource/drivers/arm_arch_timer: Drop unnecessary ISB on CVAL programming
  clocksource/drivers/arm_arch_timer: Remove any trace of the TVAL programming interface
  clocksource/drivers/arm_arch_timer: Work around broken CVAL implementations
  clocksource/drivers/arm_arch_timer: Advertise 56bit timer to the core code
  clocksource/drivers/arm_arch_timer: Move MMIO timer programming over to CVAL
  clocksource/drivers/arm_arch_timer: Fix MMIO base address vs callback ordering issue
  clocksource/drivers/arm_arch_timer: Move drop _tval from erratum function names
  clocksource/drivers/arm_arch_timer: Move system register timer programming over to CVAL
  clocksource/drivers/arm_arch_timer: Extend write side of timer register accessors to u64
  clocksource/drivers/arm_arch_timer: Drop CNT*_TVAL read accessors
  clocksource/arm_arch_timer: Add build-time guards for unhandled register accesses

2 years agoarm64: Select POSIX_CPU_TIMERS_TASK_WORK
Nicolas Saenz Julienne [Mon, 18 Oct 2021 14:47:13 +0000 (16:47 +0200)]
arm64: Select POSIX_CPU_TIMERS_TASK_WORK

With 6caa5812e2d1 ("KVM: arm64: Use generic KVM xfer to guest work
function") all arm64 exit paths are properly equipped to handle the
POSIX timers' task work.

Deferring timer callbacks to thread context not only limits the amount
of time spent in hard interrupt context, but is also a safer
implementation[1], and will allow PREEMPT_RT setups to use KVM[2].

So let's enable POSIX_CPU_TIMERS_TASK_WORK on arm64.

[1] https://lore.kernel.org/all/20200716201923.228696399@linutronix.de/
[2] https://lore.kernel.org/linux-rt-users/87v92bdnlx.ffs@tglx/

Signed-off-by: Nicolas Saenz Julienne <nsaenzju@redhat.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211018144713.873464-1-nsaenzju@redhat.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: Document boot requirements for FEAT_SME_FA64
Mark Brown [Tue, 26 Oct 2021 11:18:02 +0000 (12:18 +0100)]
arm64: Document boot requirements for FEAT_SME_FA64

The EAC1 release of the SME specification adds the FA64 feature which
requires enablement at higher ELs before lower ELs can use it. Document
what we require from higher ELs in our boot requirements.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20211026111802.12853-1-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoselftests: arm64: Factor out utility functions for assembly FP tests
Mark Brown [Tue, 19 Oct 2021 18:18:51 +0000 (19:18 +0100)]
selftests: arm64: Factor out utility functions for assembly FP tests

The various floating point test programs written in assembly have a bunch
of helper functions and macros which are cut'n'pasted between them. Factor
them out into a separate source file which is linked into all of them.

We don't include memcmp() since it isn't as generic as it should be and
directly branches to report an error in the programs.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20211019181851.3341232-1-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: vmlinux.lds.S: remove `.fixup` section
Mark Rutland [Tue, 19 Oct 2021 16:02:19 +0000 (17:02 +0100)]
arm64: vmlinux.lds.S: remove `.fixup` section

We no longer place anything into a `.fixup` section, so we no longer
need to place those sections into the `.text` section in the main kernel
Image.

Remove the use of `.fixup`.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20211019160219.5202-14-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: extable: add load_unaligned_zeropad() handler
Mark Rutland [Tue, 19 Oct 2021 16:02:18 +0000 (17:02 +0100)]
arm64: extable: add load_unaligned_zeropad() handler

For inline assembly, we place exception fixups out-of-line in the
`.fixup` section such that these are out of the way of the fast path.
This has a few drawbacks:

* Since the fixup code is anonymous, backtraces will symbolize fixups as
  offsets from the nearest prior symbol, currently
  `__entry_tramp_text_end`. This is confusing, and painful to debug
  without access to the relevant vmlinux.

* Since the exception handler adjusts the PC to execute the fixup, and
  the fixup uses a direct branch back into the function it fixes,
  backtraces of fixups miss the original function. This is confusing,
  and violates requirements for RELIABLE_STACKTRACE (and therefore
  LIVEPATCH).

* Inline assembly and associated fixups are generated from templates,
  and we have many copies of logically identical fixups which only
  differ in which specific registers are written to and which address is
  branched to at the end of the fixup. This is potentially wasteful of
  I-cache resources, and makes it hard to add additional logic to fixups
  without significant bloat.

* In the case of load_unaligned_zeropad(), the logic in the fixup
  requires a temporary register that we must allocate even in the
  fast-path where it will not be used.

This patch addresses all four concerns for load_unaligned_zeropad() fixups
by adding a dedicated exception handler which performs the fixup logic
in exception context and subsequently returns to the instruction after the
faulting load. For the moment, the fixup logic is identical to the old
assembly fixup logic, but in future we could enhance this by taking the
ESR and FAR into account to constrain the faults we try to fix up, or to
specialize fixups for MTE tag check faults.

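As a rough sketch of the idea (illustrative only; the EX_DATA_REG_* field
names, the get_ex_fixup() helper and the exact shifting follow what this
series introduces, but may not match the final code exactly):

  static bool
  ex_handler_load_unaligned_zeropad(const struct exception_table_entry *ex,
                                    struct pt_regs *regs)
  {
          int dst = FIELD_GET(EX_DATA_REG_DST, ex->data);
          int addr = FIELD_GET(EX_DATA_REG_ADDR, ex->data);
          unsigned long data, offset;

          /* Re-load from the aligned address and shift out the bytes
           * beyond the fault, as the old out-of-line fixup did. */
          offset = regs->regs[addr] & 0x7;
          data = *(unsigned long *)(regs->regs[addr] & ~0x7);
          if (!IS_ENABLED(CONFIG_CPU_BIG_ENDIAN))
                  data >>= 8 * offset;
          else
                  data <<= 8 * offset;
          regs->regs[dst] = data;

          /* Resume after the faulting instruction. */
          regs->pc = get_ex_fixup(ex);
          return true;
  }
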
Other than backtracing, there should be no functional change as a result
of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20211019160219.5202-13-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: extable: add a dedicated uaccess handler
Mark Rutland [Tue, 19 Oct 2021 16:02:17 +0000 (17:02 +0100)]
arm64: extable: add a dedicated uaccess handler

For inline assembly, we place exception fixups out-of-line in the
`.fixup` section such that these are out of the way of the fast path.
This has a few drawbacks:

* Since the fixup code is anonymous, backtraces will symbolize fixups as
  offsets from the nearest prior symbol, currently
  `__entry_tramp_text_end`. This is confusing, and painful to debug
  without access to the relevant vmlinux.

* Since the exception handler adjusts the PC to execute the fixup, and
  the fixup uses a direct branch back into the function it fixes,
  backtraces of fixups miss the original function. This is confusing,
  and violates requirements for RELIABLE_STACKTRACE (and therefore
  LIVEPATCH).

* Inline assembly and associated fixups are generated from templates,
  and we have many copies of logically identical fixups which only
  differ in which specific registers are written to and which address is
  branched to at the end of the fixup. This is potentially wasteful of
  I-cache resources, and makes it hard to add additional logic to fixups
  without significant bloat.

This patch addresses all three concerns for inline uaccess fixups by
adding a dedicated exception handler which updates registers in
exception context and subsequently returns back into the function which
faulted, removing the need for fixups specialized to each faulting
instruction.

Other than backtracing, there should be no functional change as a result
of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20211019160219.5202-12-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: extable: add `type` and `data` fields
Mark Rutland [Tue, 19 Oct 2021 16:02:16 +0000 (17:02 +0100)]
arm64: extable: add `type` and `data` fields

Subsequent patches will add specialized handlers for fixups, in addition
to the simple PC fixup and BPF handlers we have today. In preparation,
this patch adds a new `type` field to struct exception_table_entry, and
uses this to distinguish the fixup and BPF cases. A `data` field is also
added so that subsequent patches can associate data specific to each
exception site (e.g. register numbers).

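With these additions the entry layout is, roughly (a sketch; see the
patch itself for the exact definition):

  struct exception_table_entry {
          int insn, fixup;        /* relative offsets, 4 bytes each */
          short type, data;       /* new fields, 2 bytes each */
  };
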
Handlers are named ex_handler_*() for consistency, following the example
of x86. At the same time, get_ex_fixup() is split out into a helper so
that it can be used by other ex_handler_*() functions in subsequent
patches.

This patch will increase the size of the exception tables, which will be
remedied by subsequent patches removing redundant fixup code. There
should be no functional change as a result of this patch.

Since each entry is now 12 bytes in size, we must reduce the alignment
of each entry from `.align 3` (i.e. 8 bytes) to `.align 2` (i.e. 4
bytes), which is the natural alignment of the `insn` and `fixup` fields.
The current 8-byte alignment is a holdover from when the `insn` and
`fixup` fields were 8 bytes each, and while not harmful has not been necessary
since commit:

  6c94f27ac847ff8e ("arm64: switch to relative exception tables")

Similarly, RO_EXCEPTION_TABLE_ALIGN is dropped to 4 bytes.

Concurrently with this patch, x86's exception table entry format is
being updated (similarly to a 12-byte format, with 32 bits of absolute
data). Once both have been merged it should be possible to unify the
sorttable logic for the two.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Andrii Nakryiko <andrii@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: James Morse <james.morse@arm.com>
Cc: Jean-Philippe Brucker <jean-philippe@linaro.org>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20211019160219.5202-11-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: extable: use `ex` for `exception_table_entry`
Mark Rutland [Tue, 19 Oct 2021 16:02:15 +0000 (17:02 +0100)]
arm64: extable: use `ex` for `exception_table_entry`

Subsequent patches will extend `struct exception_table_entry` with more
fields, and the distinction between the entry and its `fixup` field will
become more important.

For clarity, let's consistently use `ex` to refer to an entire
entry. In subsequent patches we'll use `fixup` to refer to the fixup
field specifically. This matches the naming convention used today in
arch/arm64/net/bpf_jit_comp.c.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20211019160219.5202-10-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: extable: make fixup_exception() return bool
Mark Rutland [Tue, 19 Oct 2021 16:02:14 +0000 (17:02 +0100)]
arm64: extable: make fixup_exception() return bool

The return values of fixup_exception() and arm64_bpf_fixup_exception()
represent a boolean condition rather than an error code, so for clarity
it would be better to return `bool` rather than `int`.

This patch adjusts the code accordingly. While we're modifying the
prototype, we also remove the unnecessary `extern` keyword, so that this
won't look out of place when we make subsequent additions to the header.

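After this change the prototype is simply (sketch):

  bool fixup_exception(struct pt_regs *regs);
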
There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Andrii Nakryiko <andrii@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: James Morse <james.morse@arm.com>
Cc: Jean-Philippe Brucker <jean-philippe@linaro.org>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20211019160219.5202-9-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: extable: consolidate definitions
Mark Rutland [Tue, 19 Oct 2021 16:02:13 +0000 (17:02 +0100)]
arm64: extable: consolidate definitions

In subsequent patches we'll alter the structure and usage of struct
exception_table_entry. For inline assembly, we create these using the
`_ASM_EXTABLE()` CPP macro defined in <asm/uaccess.h>, and for plain
assembly code we use the `_asm_extable()` GAS macro defined in
<asm/assembler.h>, which are largely identical save for different
escaping and stringification requirements.

This patch moves the common definitions to a new <asm/asm-extable.h>
header, so that it's easier to keep the two in-sync, and to remove the
implication that these are only used for uaccess helpers (as e.g.
load_unaligned_zeropad() is only used on kernel memory, and depends upon
`_ASM_EXTABLE()`).

At the same time, a few minor modifications are made for clarity and in
preparation for subsequent patches:

* The structure creation is factored out into an `__ASM_EXTABLE_RAW()`
  macro. This will make it easier to support different fixup variants in
  subsequent patches without needing to update all users of
  `_ASM_EXTABLE()`, and makes it easier to see that the CPP and GAS
  variants of the macros are structurally identical (a rough sketch
  follows after this list).

  For the CPP macro, the stringification of fields is left to the
  wrapper macro, `_ASM_EXTABLE()`, as in subsequent patches it will be
  necessary to stringify fields in wrapper macros to safely concatenate
  strings which cannot be token-pasted together in CPP.

* The fields of the structure are created separately on their own lines.
  This will make it easier to add/remove/modify individual fields
  clearly.

* Additional parentheses are added around the use of macro arguments in
  field definitions to avoid any potential problems with evaluation due
  to operator precedence, and to make errors upon misuse clearer.

* USER() is moved into <asm/asm-uaccess.h>, as it is not required by all
  assembly code, and is already referred to by comments in that file.

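A rough sketch of the factored-out macro, CPP variant (illustrative
only; the exact escaping and alignment are as per the patch, and later
patches in this series change the entry format):

  #define __ASM_EXTABLE_RAW(insn, fixup)                  \
          ".pushsection   __ex_table, \"a\"\n"            \
          ".align         3\n"                            \
          ".long          ((" insn ") - .)\n"             \
          ".long          ((" fixup ") - .)\n"            \
          ".popsection\n"
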
There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20211019160219.5202-8-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: gpr-num: support W registers
Mark Rutland [Tue, 19 Oct 2021 16:02:12 +0000 (17:02 +0100)]
arm64: gpr-num: support W registers

In subsequent patches we'll want to map W registers to their register
numbers. Update gpr-num.h so that we can do this.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20211019160219.5202-7-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: factor out GPR numbering helpers
Mark Rutland [Tue, 19 Oct 2021 16:02:11 +0000 (17:02 +0100)]
arm64: factor out GPR numbering helpers

In <asm/sysreg.h> we have macros to convert the names of general purpose
registers (GPRs) into integer constants, which we use to manually build
the encoding for `MRS` and `MSR` instructions where we can't rely on the
assembler to do so for us.

In subsequent patches we'll need to map the same GPR names to integer
constants so that we can use this to build metadata for exception
fixups.

So that the we can use the mappings elsewhere, factor out the
definitions into a new <asm/gpr-num.h> header, renaming the definitions
to align with this "GPR num" naming for clarity.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20211019160219.5202-6-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: kvm: use kvm_exception_table_entry
Mark Rutland [Tue, 19 Oct 2021 16:02:10 +0000 (17:02 +0100)]
arm64: kvm: use kvm_exception_table_entry

In subsequent patches we'll alter `struct exception_table_entry`, adding
fields that are not needed for KVM exception fixups.

In preparation for this, migrate KVM to its own `struct
kvm_exception_table_entry`, which is identical to the current format of
`struct exception_table_entry`. Comments are updated accordingly.

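That is, something along the lines of (sketch):

  struct kvm_exception_table_entry {
          int insn, fixup;
  };
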
There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Alexandru Elisei <alexandru.elisei@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211019160219.5202-5-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: lib: __arch_copy_to_user(): fold fixups into body
Mark Rutland [Tue, 19 Oct 2021 16:02:09 +0000 (17:02 +0100)]
arm64: lib: __arch_copy_to_user(): fold fixups into body

Like other functions, __arch_copy_to_user() places its exception fixups
in the `.fixup` section without any clear association with
__arch_copy_to_user() itself. If we backtrace the fixup code, it will be
symbolized as an offset from the nearest prior symbol, which happens to
be `__entry_tramp_text_end`. Further, since the PC adjustment for the
fixup is akin to a direct branch rather than a function call,
__arch_copy_to_user() itself will be missing from the backtrace.

This is confusing and hinders debugging. In general this pattern will
also be problematic for CONFIG_LIVEPATCH, since fixups often return to
their associated function, but this isn't accurately captured in the
stacktrace.

To solve these issues for assembly functions, we must move fixups into
the body of the functions themselves, after the usual fast-path returns.
This patch does so for __arch_copy_to_user().

Inline assembly will be dealt with in subsequent patches.

Other than the improved backtracing, there should be no functional
change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20211019160219.5202-4-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: lib: __arch_copy_from_user(): fold fixups into body
Mark Rutland [Tue, 19 Oct 2021 16:02:08 +0000 (17:02 +0100)]
arm64: lib: __arch_copy_from_user(): fold fixups into body

Like other functions, __arch_copy_from_user() places its exception
fixups in the `.fixup` section without any clear association with
__arch_copy_from_user() itself. If we backtrace the fixup code, it will
be symbolized as an offset from the nearest prior symbol, which happens
to be `__entry_tramp_text_end`. Further, since the PC adjustment for the
fixup is akin to a direct branch rather than a function call,
__arch_copy_from_user() itself will be missing from the backtrace.

This is confusing and hinders debugging. In general this pattern will
also be problematic for CONFIG_LIVEPATCH, since fixups often return to
their associated function, but this isn't accurately captured in the
stacktrace.

To solve these issues for assembly functions, we must move fixups into
the body of the functions themselves, after the usual fast-path returns.
This patch does so for __arch_copy_from_user().

Inline assembly will be dealt with in subsequent patches.

Other than the improved backtracing, there should be no functional
change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20211019160219.5202-3-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: lib: __arch_clear_user(): fold fixups into body
Mark Rutland [Tue, 19 Oct 2021 16:02:07 +0000 (17:02 +0100)]
arm64: lib: __arch_clear_user(): fold fixups into body

Like other functions, __arch_clear_user() places its exception fixups in
the `.fixup` section without any clear association with
__arch_clear_user() itself. If we backtrace the fixup code, it will be
symbolized as an offset from the nearest prior symbol, which happens to
be `__entry_tramp_text_end`. Further, since the PC adjustment for the
fixup is akin to a direct branch rather than a function call,
__arch_clear_user() itself will be missing from the backtrace.

This is confusing and hinders debugging. In general this pattern will
also be problematic for CONFIG_LIVEPATCH, since fixups often return to
their associated function, but this isn't accurately captured in the
stacktrace.

To solve these issues for assembly functions, we must move fixups into
the body of the functions themselves, after the usual fast-path returns.
This patch does so for __arch_clear_user().

Inline assembly will be dealt with in subsequent patches.

Other than the improved backtracing, there should be no functional
change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20211019160219.5202-2-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: Add HWCAP for self-synchronising virtual counter
Marc Zyngier [Sun, 17 Oct 2021 12:42:25 +0000 (13:42 +0100)]
arm64: Add HWCAP for self-synchronising virtual counter

Since userspace can make use of the CNTVCTSS_EL0 register, expose
it via a HWCAP.

Suggested-by: Will Deacon <will@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211017124225.3018098-18-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: Add handling of CNTVCTSS traps
Marc Zyngier [Sun, 17 Oct 2021 12:42:24 +0000 (13:42 +0100)]
arm64: Add handling of CNTVCTSS traps

Since CNTVCTSS obeys the same control bits as CNTVCT, add the necessary
decoding to the hook table. Note that there is no known user of
this at the moment.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211017124225.3018098-17-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: Add CNT{P,V}CTSS_EL0 alternatives to cnt{p,v}ct_el0
Marc Zyngier [Sun, 17 Oct 2021 12:42:23 +0000 (13:42 +0100)]
arm64: Add CNT{P,V}CTSS_EL0 alternatives to cnt{p,v}ct_el0

CNTPCTSS_EL0 and CNTVCTSS_EL0 are alternatives to the usual
CNTPCT_EL0 and CNTVCT_EL0 that do not require a previous ISB
to be synchronised (SS stands for Self-Synchronising).

Use the ARM64_HAS_ECV capability to control alternative sequences
that switch to these low(er)-cost primitives. Note that the
counter access in the VDSO is for now left alone until we decide
whether we want to allow this.

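As an illustration, the virtual counter read can be wrapped in an
alternative along these lines (a sketch, not the literal patch; the
SYS_CNTVCTSS_EL0 encoding and arch_counter_enforce_ordering() helper are
assumed from the surrounding code):

  static __always_inline u64 __arch_counter_get_cntvct(void)
  {
          u64 cnt;

          /* With FEAT_ECV, the self-synchronising register needs no ISB. */
          asm volatile(ALTERNATIVE("isb\n mrs %0, cntvct_el0",
                                   "nop\n" __mrs_s("%0", SYS_CNTVCTSS_EL0),
                                   ARM64_HAS_ECV)
                       : "=r" (cnt));

          arch_counter_enforce_ordering(cnt);
          return cnt;
  }
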
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211017124225.3018098-16-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: Add a capability for FEAT_ECV
Marc Zyngier [Sun, 17 Oct 2021 12:42:22 +0000 (13:42 +0100)]
arm64: Add a capability for FEAT_ECV

Add a new capability to detect the Enhanced Counter Virtualization
feature (FEAT_ECV).

Reviewed-by: Oliver Upton <oupton@google.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211017124225.3018098-15-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoMerge branch 'timers/drivers/armv8.6_arch_timer' of https://git.linaro.org/people...
Will Deacon [Tue, 19 Oct 2021 09:52:11 +0000 (10:52 +0100)]
Merge branch 'timers/drivers/armv8.6_arch_timer' of https://git.linaro.org/people/daniel.lezcano/linux into for-next/8.6-timers

Pull Arm architected timer driver rework from Marc (via Daniel) so that
we can add the Armv8.6 support on top.

Link: https://lore.kernel.org/r/d0c55386-2f7f-a940-45bb-d80ae5e0f378@linaro.org
* 'timers/drivers/armv8.6_arch_timer' of https://git.linaro.org/people/daniel.lezcano/linux:
  clocksource/drivers/arch_arm_timer: Move workaround synchronisation around
  clocksource/drivers/arm_arch_timer: Fix masking for high freq counters
  clocksource/drivers/arm_arch_timer: Drop unnecessary ISB on CVAL programming
  clocksource/drivers/arm_arch_timer: Remove any trace of the TVAL programming interface
  clocksource/drivers/arm_arch_timer: Work around broken CVAL implementations
  clocksource/drivers/arm_arch_timer: Advertise 56bit timer to the core code
  clocksource/drivers/arm_arch_timer: Move MMIO timer programming over to CVAL
  clocksource/drivers/arm_arch_timer: Fix MMIO base address vs callback ordering issue
  clocksource/drivers/arm_arch_timer: Move drop _tval from erratum function names
  clocksource/drivers/arm_arch_timer: Move system register timer programming over to CVAL
  clocksource/drivers/arm_arch_timer: Extend write side of timer register accessors to u64
  clocksource/drivers/arm_arch_timer: Drop CNT*_TVAL read accessors
  clocksource/arm_arch_timer: Add build-time guards for unhandled register accesses

2 years agoclocksource/drivers/arch_arm_timer: Move workaround synchronisation around
Marc Zyngier [Sun, 17 Oct 2021 12:42:21 +0000 (13:42 +0100)]
clocksource/drivers/arch_arm_timer: Move workaround synchronisation around

We currently handle synchronisation when workarounds are enabled
by having an ISB in the __arch_counter_get_cnt?ct_stable() helpers.

While this works, this prevents us from relaxing this synchronisation.

Instead, move it closer to the point where the synchronisation is
actually needed. Further patches will subsequently relax this.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211017124225.3018098-14-maz@kernel.org
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
2 years agoclocksource/drivers/arm_arch_timer: Fix masking for high freq counters
Oliver Upton [Sun, 17 Oct 2021 12:42:20 +0000 (13:42 +0100)]
clocksource/drivers/arm_arch_timer: Fix masking for high freq counters

Unfortunately, the architecture provides no means to determine the bit
width of the system counter. However, we do know the following from the
specification:

 - the system counter is at least 56 bits wide
 - Roll-over time of not less than 40 years

To date, the arch timer driver has depended on the first property,
assuming any system counter to be 56 bits wide and masking off the rest.
However, combining a narrow clocksource mask with a high frequency
counter could result in prematurely wrapping the system counter by a
significant margin. For example, a 56 bit wide, 1GHz system counter
would wrap in a mere 2.28 years!

This is a problem for two reasons: v8.6+ implementations are required to
provide a 64 bit, 1GHz system counter. Furthermore, before v8.6,
implementers may select a counter frequency of their choosing.

Fix the issue by deriving a valid clock mask based on the second
property from above. Set the floor at 56 bits, since we know no system
counter is narrower than that.

[maz: fixed width computation not to lose the last bit, added
      max delta generation for the timer]

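A sketch of the derivation (names approximate; see the patch for the
exact code):

  /* The counter must not roll over in less than 40 years. */
  #define MIN_ROLLOVER_SECS       (40ULL * 365 * 24 * 3600)

  static int arch_counter_get_width(void)
  {
          u64 min_cycles = MIN_ROLLOVER_SECS * arch_timer_rate;

          /* At least 56 bits by architecture, at most 64. */
          return clamp_val(ilog2(min_cycles - 1) + 1, 56, 64);
  }

The clocksource mask then becomes CLOCKSOURCE_MASK(width).
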
Suggested-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Oliver Upton <oupton@google.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210807191428.3488948-1-oupton@google.com
Link: https://lore.kernel.org/r/20211017124225.3018098-13-maz@kernel.org
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
2 years agoclocksource/drivers/arm_arch_timer: Drop unnecessary ISB on CVAL programming
Marc Zyngier [Sun, 17 Oct 2021 12:42:19 +0000 (13:42 +0100)]
clocksource/drivers/arm_arch_timer: Drop unnecessary ISB on CVAL programming

Switching from TVAL to CVAL has a small drawback: we need an ISB
before reading the counter. We cannot get rid of it, but we can
instead remove the one that comes just after writing to CVAL.

This reduces the number of ISBs from 3 to 2 when programming
the timer.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211017124225.3018098-12-maz@kernel.org
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
2 years agoclocksource/drivers/arm_arch_timer: Remove any trace of the TVAL programming interface
Marc Zyngier [Sun, 17 Oct 2021 12:42:18 +0000 (13:42 +0100)]
clocksource/drivers/arm_arch_timer: Remove any trace of the TVAL programming interface

TVAL usage is now long gone, get rid of the leftovers.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211017124225.3018098-11-maz@kernel.org
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
2 years agoclocksource/drivers/arm_arch_timer: Work around broken CVAL implementations
Marc Zyngier [Sun, 17 Oct 2021 12:42:17 +0000 (13:42 +0100)]
clocksource/drivers/arm_arch_timer: Work around broken CVAL implementations

The Applied Micro XGene-1 SoC has a busted implementation of the
CVAL register: it looks like it is based on TVAL instead of the
other way around. The net effect of this implementation blunder
is that the maximum deadline you can program in the timer is
32bit wide.

Use a MIDR check to notice the broken CPU, and reduce the width
of the timer to 32bit.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211017124225.3018098-10-maz@kernel.org
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
2 years agoclocksource/drivers/arm_arch_timer: Advertise 56bit timer to the core code
Marc Zyngier [Sun, 17 Oct 2021 12:42:16 +0000 (13:42 +0100)]
clocksource/drivers/arm_arch_timer: Advertise 56bit timer to the core code

Proudly tell the core code that we have a timer able to handle
56-bit deltas.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211017124225.3018098-9-maz@kernel.org
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
2 years agoclocksource/drivers/arm_arch_timer: Move MMIO timer programming over to CVAL
Marc Zyngier [Sun, 17 Oct 2021 12:42:15 +0000 (13:42 +0100)]
clocksource/drivers/arm_arch_timer: Move MMIO timer programming over to CVAL

Similarly to the sysreg-based timer, move the MMIO timer over to using
the CVAL registers instead of TVAL. Note that there is no guarantee
that the 64bit MMIO access will be atomic, but the timer is always
disabled at the point where we program CVAL.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211017124225.3018098-8-maz@kernel.org
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
2 years agoclocksource/drivers/arm_arch_timer: Fix MMIO base address vs callback ordering issue
Marc Zyngier [Sun, 17 Oct 2021 12:42:14 +0000 (13:42 +0100)]
clocksource/drivers/arm_arch_timer: Fix MMIO base address vs callback ordering issue

The MMIO timer base address gets published after we have registered
the callbacks and the interrupt handler, which is... a bit dangerous.

Fix this by moving the base address publication to the point where
we register the timer, and expose a pointer to the timer structure
itself rather than a naked value.

Reviewed-by: Oliver Upton <oupton@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211017124225.3018098-7-maz@kernel.org
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
2 years agoclocksource/drivers/arm_arch_timer: Move drop _tval from erratum function names
Marc Zyngier [Sun, 17 Oct 2021 12:42:13 +0000 (13:42 +0100)]
clocksource/drivers/arm_arch_timer: Move drop _tval from erratum function names

The '_tval' name in the erratum handling function names doesn't
make much sense anymore (and they were using CVAL in the first place).

Drop the _tval tag.

Reviewed-by: Oliver Upton <oupton@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211017124225.3018098-6-maz@kernel.org
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
2 years agoclocksource/drivers/arm_arch_timer: Move system register timer programming over to...
Marc Zyngier [Sun, 17 Oct 2021 12:42:12 +0000 (13:42 +0100)]
clocksource/drivers/arm_arch_timer: Move system register timer programming over to CVAL

In order to cope better with high frequency counters, move the
programming of the timers from the countdown timer (TVAL) over
to the comparator (CVAL).

The programming model is slightly different, as we now need to
read the current counter value to have an absolute deadline
instead of a relative one.

There is a small overhead to this change, which we will address
in the following patches.

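Conceptually, the programming sequence becomes something like the
following (a simplified sketch, not the exact driver code):

  static __always_inline void set_next_event(const int access,
                                             unsigned long evt,
                                             struct clock_event_device *clk)
  {
          unsigned long ctrl;
          u64 cnt;

          ctrl = arch_timer_reg_read(access, ARCH_TIMER_REG_CTRL, clk);
          ctrl |= ARCH_TIMER_CTRL_ENABLE;
          ctrl &= ~ARCH_TIMER_CTRL_IT_MASK;

          /* Absolute deadline: current count plus the requested delta. */
          if (access == ARCH_TIMER_PHYS_ACCESS)
                  cnt = __arch_counter_get_cntpct();
          else
                  cnt = __arch_counter_get_cntvct();

          arch_timer_reg_write(access, ARCH_TIMER_REG_CVAL, evt + cnt, clk);
          arch_timer_reg_write(access, ARCH_TIMER_REG_CTRL, ctrl, clk);
  }
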
Reviewed-by: Oliver Upton <oupton@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211017124225.3018098-5-maz@kernel.org
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
2 years agoclocksource/drivers/arm_arch_timer: Extend write side of timer register accessors...
Marc Zyngier [Sun, 17 Oct 2021 12:42:11 +0000 (13:42 +0100)]
clocksource/drivers/arm_arch_timer: Extend write side of timer register accessors to u64

The various accessors for the timer sysreg and MMIO registers are
currently hardwired to 32bit. However, we are about to introduce
the use of the CVAL registers, which require a 64bit access.

Upgrade the write side of the accessors to take a 64bit value
(the read side is left untouched as we don't plan to ever read
back any of these registers).

No functional change expected.

Reviewed-by: Oliver Upton <oupton@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211017124225.3018098-4-maz@kernel.org
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
2 years agoclocksource/drivers/arm_arch_timer: Drop CNT*_TVAL read accessors
Marc Zyngier [Sun, 17 Oct 2021 12:42:10 +0000 (13:42 +0100)]
clocksource/drivers/arm_arch_timer: Drop CNT*_TVAL read accessors

The arch timer driver never reads the various TVAL registers, only
writes to them. It is thus pointless to provide accessors
for them and to implement errata workarounds.

Drop these read-side accessors, and add a couple of BUG() statements
for the time being. These statements will be removed further down
the line.

Reviewed-by: Oliver Upton <oupton@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211017124225.3018098-3-maz@kernel.org
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
2 years agoclocksource/arm_arch_timer: Add build-time guards for unhandled register accesses
Marc Zyngier [Sun, 17 Oct 2021 12:42:09 +0000 (13:42 +0100)]
clocksource/arm_arch_timer: Add build-time guards for unhandled register accesses

As we are about to change the registers that are used by the driver,
start by adding build-time checks to ensure that we always handle
all registers and access modes.

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211017124225.3018098-2-maz@kernel.org
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
2 years agoarm64: ftrace: use function_nocfi for _mcount as well
Sumit Garg [Mon, 11 Oct 2021 12:50:59 +0000 (18:20 +0530)]
arm64: ftrace: use function_nocfi for _mcount as well

Commit 800618f955a9 ("arm64: ftrace: use function_nocfi for ftrace_call")
only fixed the address of ftrace_call, but the address of _mcount needs to be
fixed as well. Use function_nocfi() to get the actual address of _mcount
function as with CONFIG_CFI_CLANG, the compiler replaces function pointers
with jump table addresses which breaks dynamic ftrace as the address of
_mcount is replaced with the address of _mcount.cfi_jt.

With mainline, this won't be a problem since by default
CONFIG_DYNAMIC_FTRACE_WITH_REGS=y with Clang >= 10 as it supports
-fpatchable-function-entry and CFI requires Clang 12 but for consistency
we should add function_nocfi() for _mcount as well.

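The fix amounts to something like the following in <asm/ftrace.h>
(sketch):

  #define MCOUNT_ADDR     ((unsigned long)function_nocfi(_mcount))
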
Signed-off-by: Sumit Garg <sumit.garg@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Sami Tolvanen <samitolvanen@google.com>
Link: https://lore.kernel.org/r/20211011125059.3378646-1-sumit.garg@linaro.org
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: asm: setup.h: export common variables
Anders Roxell [Thu, 7 Oct 2021 19:56:01 +0000 (21:56 +0200)]
arm64: asm: setup.h: export common variables

When building the kernel with sparse enabled 'C=1' the following
warnings can be seen:

arch/arm64/kernel/setup.c:58:13: warning: symbol '__fdt_pointer' was not declared. Should it be static?
arch/arm64/kernel/setup.c:84:25: warning: symbol 'boot_args' was not declared. Should it be static?

Rework so the variables are exported, since these two variables are
created and used in setup.c and are also used in head.S.

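The declarations added to <asm/setup.h> look roughly like (a sketch;
exact attributes match the definitions in setup.c):

  extern phys_addr_t __fdt_pointer __initdata;
  extern u64 __cacheline_aligned boot_args[4];
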
Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
Link: https://lore.kernel.org/r/20211007195601.677474-1-anders.roxell@linaro.org
Signed-off-by: Will Deacon <will@kernel.org>
2 years agokasan: Extend KASAN mode kernel parameter
Vincenzo Frascino [Wed, 6 Oct 2021 15:47:51 +0000 (16:47 +0100)]
kasan: Extend KASAN mode kernel parameter

Architectures supported by KASAN_HW_TAGS can provide an asymmetric mode
of execution. On an MTE enabled arm64 hw for example this can be
identified with the asymmetric tagging mode of execution. In particular,
when such a mode is present, the CPU triggers a fault on a tag mismatch
during a load operation and asynchronously updates a register when a tag
mismatch is detected during a store operation.

Extend the KASAN HW execution mode kernel command line parameter to
support asymmetric mode.

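For example, booting with:

  kasan.mode=asymm

selects the asymmetric tag checking mode on hardware that supports it,
alongside the existing sync and async options.
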
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Konovalov <andreyknvl@gmail.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
Link: https://lore.kernel.org/r/20211006154751.4463-6-vincenzo.frascino@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: mte: Add asymmetric mode support
Vincenzo Frascino [Wed, 6 Oct 2021 15:47:50 +0000 (16:47 +0100)]
arm64: mte: Add asymmetric mode support

MTE provides an asymmetric mode for detecting tag exceptions. In
particular, when such a mode is present, the CPU triggers a fault
on a tag mismatch during a load operation and asynchronously updates
a register when a tag mismatch is detected during a store operation.

Add support for MTE asymmetric mode.

Note: If the CPU does not support MTE asymmetric mode the kernel falls
back on synchronous mode which is the default for kasan=on.

Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Andrey Konovalov <andreyknvl@gmail.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Andrey Konovalov <andreyknvl@gmail.com>
Link: https://lore.kernel.org/r/20211006154751.4463-5-vincenzo.frascino@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: mte: CPU feature detection for Asymm MTE
Vincenzo Frascino [Wed, 6 Oct 2021 15:47:49 +0000 (16:47 +0100)]
arm64: mte: CPU feature detection for Asymm MTE

Add the cpufeature entries to detect the presence of Asymmetric MTE.

Note: The tag checking mode is initialized via cpu_enable_mte() ->
kasan_init_hw_tags(), hence to enable it we require asymmetric mode
to be present at least on the boot CPU. If the boot CPU does not have it, it is
fine for late CPUs to have it as long as the feature is not enabled
(ARM64_CPUCAP_BOOT_CPU_FEATURE).

Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <Suzuki.Poulose@arm.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/20211006154751.4463-4-vincenzo.frascino@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: mte: Bitfield definitions for Asymm MTE
Vincenzo Frascino [Wed, 6 Oct 2021 15:47:48 +0000 (16:47 +0100)]
arm64: mte: Bitfield definitions for Asymm MTE

Add Asymmetric Memory Tagging Extension bitfield definitions.

Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20211006154751.4463-3-vincenzo.frascino@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agokasan: Remove duplicate of kasan_flag_async
Vincenzo Frascino [Wed, 6 Oct 2021 15:47:47 +0000 (16:47 +0100)]
kasan: Remove duplicate of kasan_flag_async

After merging async mode for KASAN_HW_TAGS a duplicate of the
kasan_flag_async flag was left erroneously inside the code.

Remove the duplicate.

Note: This change does not bring functional changes to the code
base.

Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Marco Elver <elver@google.com>
Cc: Evgenii Stepanov <eugenis@google.com>
Cc: Andrey Konovalov <andreyknvl@gmail.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
Link: https://lore.kernel.org/r/20211006154751.4463-2-vincenzo.frascino@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoselftests: arm64: Add coverage of ptrace flags for SVE VL inheritance
Mark Brown [Tue, 5 Oct 2021 12:35:37 +0000 (13:35 +0100)]
selftests: arm64: Add coverage of ptrace flags for SVE VL inheritance

Add a test that covers enabling and disabling of SVE vector length
inheritance via the ptrace interface.

Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20211005123537.976795-1-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2 years agodrivers/perf: Improve build test coverage
John Garry [Fri, 1 Oct 2021 10:48:46 +0000 (18:48 +0800)]
drivers/perf: Improve build test coverage

Improve build test coverage by allowing some drivers to build under
COMPILE_TEST where possible.

Some notes:
- Mostly, a dependency on CONFIG_ACPI is not really required just for
  building (left untouched here), but it is required for TX2, which uses
  ACPI functions that have no stubs
- XGENE required 64b dependency as it relies on some unsigned long perf
  struct fields being 64b
- I don't see why TX2 requires NUMA to build, but left untouched
- Added an explicit dependency on GENERIC_MSI_IRQ_DOMAIN for
  ARM_SMMU_V3_PMU, which is required for platform MSI functions

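As a rough illustration of the kind of Kconfig change involved (the
exact dependencies are per the patch, not guaranteed to match this
sketch):

  config ARM_SMMU_V3_PMU
          tristate "ARM SMMUv3 Performance Monitors Extension"
          depends on (ARM64 && ACPI) || (COMPILE_TEST && 64BIT)
          depends on GENERIC_MSI_IRQ_DOMAIN
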
Signed-off-by: John Garry <john.garry@huawei.com>
Link: https://lore.kernel.org/r/1633085326-156653-3-git-send-email-john.garry@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agodrivers/perf: thunderx2_pmu: Change data in size tx2_uncore_event_update()
John Garry [Fri, 1 Oct 2021 10:48:45 +0000 (18:48 +0800)]
drivers/perf: thunderx2_pmu: Change data in size tx2_uncore_event_update()

An LSL of 32 requires a > 32b value to hold the result. However, in
tx2_uncore_event_update(), 1UL << 32 currently only works as unsigned
long is 64b on a 64b system.

If we want to compile test for a 32b system, we need unsigned long long,
whose min size is 64b.

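A generic illustration of the issue (not the driver code itself):

  /* Undefined on a 32b build: shift count equals the width of unsigned long */
  u64 bad  = (1UL  << 32) - 1;

  /* Well-defined everywhere: unsigned long long is at least 64b */
  u64 good = (1ULL << 32) - 1;
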
Signed-off-by: John Garry <john.garry@huawei.com>
Link: https://lore.kernel.org/r/1633085326-156653-2-git-send-email-john.garry@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agodrivers/perf: hisi: Fix PA PMU counter offset
Shaokun Zhang [Tue, 28 Sep 2021 12:30:22 +0000 (20:30 +0800)]
drivers/perf: hisi: Fix PA PMU counter offset

The PA PMU counter offset was correct in [1] and the driver had
already been verified. In a later version the register offsets were
switched to lower-case characters for consistency with the existing
drivers, and since that was not meant to be a functional change no
further testing was done. However, a typo crept in when the PA PMU
counter offset was modified, so fix this mistake.

[1] https://www.spinics.net/lists/arm-kernel/msg865263.html

Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: John Garry <john.garry@huawei.com>
Cc: Qi Liu <liuqi115@huawei.com>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Link: https://lore.kernel.org/r/20210928123022.23467-1-zhangshaokun@hisilicon.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64/mm: drop HAVE_ARCH_PFN_VALID
Anshuman Khandual [Thu, 30 Sep 2021 01:30:39 +0000 (04:30 +0300)]
arm64/mm: drop HAVE_ARCH_PFN_VALID

CONFIG_SPARSEMEM_VMEMMAP is now the only available memory model on arm64
platforms and free_unused_memmap() would just return without creating any
holes in the memmap mapping.  There is no need for any special handling in
pfn_valid() and HAVE_ARCH_PFN_VALID can just be dropped.  This also moves
the pfn upper bits sanity check into generic pfn_valid().

[rppt: rebased on v5.15-rc3]

Link: https://lkml.kernel.org/r/1621947349-25421-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Acked-by: David Hildenbrand <david@redhat.com>
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: David Hildenbrand <david@redhat.com>
Cc: Mike Rapoport <rppt@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/r/20210930013039.11260-3-rppt@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2 years agodma-mapping: remove bogus test for pfn_valid from dma_map_resource
Mike Rapoport [Thu, 30 Sep 2021 01:30:38 +0000 (04:30 +0300)]
dma-mapping: remove bogus test for pfn_valid from dma_map_resource

dma_map_resource() uses pfn_valid() to ensure the range is not RAM.
However, pfn_valid() only checks for availability of the memory map for a
PFN but it does not ensure that the PFN is actually backed by RAM.

As dma_map_resource() is the only method in DMA mapping APIs that has this
check, simply drop the pfn_valid() test from dma_map_resource().

Link: https://lore.kernel.org/all/20210824173741.GC623@arm.com/
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: David Hildenbrand <david@redhat.com>
Link: https://lore.kernel.org/r/20210930013039.11260-2-rppt@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: trans_pgd: remove trans_pgd_map_page()
Pasha Tatashin [Thu, 30 Sep 2021 14:31:13 +0000 (14:31 +0000)]
arm64: trans_pgd: remove trans_pgd_map_page()

The intent of trans_pgd_map_page() was to map a contiguous range of VA
memory to the memory that is getting relocated during kexec. However,
since we are now using the linear map instead of a contiguous range,
this function is not needed.

Suggested-by: Pingfan Liu <kernelfans@gmail.com>
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210930143113.1502553-16-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: kexec: remove cpu-reset.h
Pasha Tatashin [Thu, 30 Sep 2021 14:31:12 +0000 (14:31 +0000)]
arm64: kexec: remove cpu-reset.h

This header contains only the cpu_soft_restart() wrapper, which is never
used directly anymore. So, remove this header, and rename the underlying
__cpu_soft_restart() assembly helper to cpu_soft_restart().

Suggested-by: James Morse <james.morse@arm.com>
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210930143113.1502553-15-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: kexec: remove the pre-kexec PoC maintenance
Pasha Tatashin [Thu, 30 Sep 2021 14:31:11 +0000 (14:31 +0000)]
arm64: kexec: remove the pre-kexec PoC maintenance

Now that kexec does its relocations with the MMU enabled, we no longer
need to clean the relocation data to the PoC.

Suggested-by: James Morse <james.morse@arm.com>
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210930143113.1502553-14-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: kexec: keep MMU enabled during kexec relocation
Pasha Tatashin [Thu, 30 Sep 2021 14:31:10 +0000 (14:31 +0000)]
arm64: kexec: keep MMU enabled during kexec relocation

Now that we have linear map page tables configured, keep the MMU enabled
to allow faster relocation of segments to the final destination.

Cavium ThunderX2:
Kernel image size: 38M, initramfs size: 46M, total relocation size: 84M
MMU-disabled:
relocation 7.489539915s
MMU-enabled:
relocation 0.03946095s

Broadcom Stingray:
For a moderately sized kernel + initramfs (25M), the relocation was
taking 0.382s; with the MMU enabled it now takes only 0.019s, a ~20x
improvement.

The time is proportional to the size of the relocation, so with a larger
initramfs (say, 100M) it could take over a second.

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Tested-by: Pingfan Liu <piliu@redhat.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210930143113.1502553-13-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: kexec: install a copy of the linear-map
Pasha Tatashin [Thu, 30 Sep 2021 14:31:09 +0000 (14:31 +0000)]
arm64: kexec: install a copy of the linear-map

To perform the kexec relocation with the MMU enabled, we need a copy
of the linear map.

Create one, and install it from the relocation code. This has to be done
from the assembly code as it will be idmapped with TTBR0. The kernel runs
in TTBR1, so it can't use the break-before-make sequence on the mapping it
is executing from.

This makes no difference yet, as the relocation code still runs with the
MMU disabled.

Suggested-by: James Morse <james.morse@arm.com>
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210930143113.1502553-12-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: kexec: use ld script for relocation function
Pasha Tatashin [Thu, 30 Sep 2021 14:31:08 +0000 (14:31 +0000)]
arm64: kexec: use ld script for relocation function

Currently, the relocation code declares start and end variables which are
used to compute its size.

A better way to do this is to use an ld script and put the relocation
function in its own section.

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210930143113.1502553-11-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: kexec: relocate in EL1 mode
Pasha Tatashin [Thu, 30 Sep 2021 14:31:07 +0000 (14:31 +0000)]
arm64: kexec: relocate in EL1 mode

Since we are going to keep the MMU enabled during relocation, we need to
stay in EL1 mode throughout the relocation.

Keep EL1 enabled, and switch to EL2 only just before entering the new world.

Suggested-by: James Morse <james.morse@arm.com>
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210930143113.1502553-10-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: kexec: configure EL2 vectors for kexec
Pasha Tatashin [Thu, 30 Sep 2021 14:31:06 +0000 (14:31 +0000)]
arm64: kexec: configure EL2 vectors for kexec

If we have an EL2 mode without VHE, the EL2 vectors are needed in order
to switch to EL2 and jump to the new world with hypervisor privileges.

In preparation for MMU-enabled relocation, configure our EL2 vector table
now.

Kexec uses #HVC_SOFT_RESTART to branch to the new world, so extend the
el1_sync vector provided by trans_pgd_copy_el2_vectors() to support this
case.

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210930143113.1502553-9-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: kexec: pass kimage as the only argument to relocation function
Pasha Tatashin [Thu, 30 Sep 2021 14:31:05 +0000 (14:31 +0000)]
arm64: kexec: pass kimage as the only argument to relocation function

Currently, kexec relocation function (arm64_relocate_new_kernel) accepts
the following arguments:

head: start of array that contains relocation information.
entry: entry point for new kernel or purgatory.
dtb_mem: first and only argument to entry.

The number of arguments cannot easily be expanded, because this function
is also called from HVC_SOFT_RESTART, which preserves only three
arguments. Also, arm64_relocate_new_kernel is written in assembly and is
called without a stack, so there is no place from which to move extra
arguments into free registers.

Soon we will need to pass more arguments: once we enable the MMU we will
need to pass information about the page tables.

Pass kimage to arm64_relocate_new_kernel, and teach it to get the
required fields from kimage.
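
Conceptually (a C-level sketch; the routine itself is assembly and the
exact field names may differ from the patch), the relocation code now does:

  /*
   * arm64_relocate_new_kernel() receives only the kimage pointer and
   * pulls out what it used to be handed as separate registers.
   */
  head    = kimage->head;          /* relocation list                */
  entry   = kimage->start;         /* new kernel / purgatory entry   */
  dtb_mem = kimage->arch.dtb_mem;  /* argument for the entry point   */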

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210930143113.1502553-8-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: kexec: Use dcache ops macros instead of open-coding
Pasha Tatashin [Thu, 30 Sep 2021 14:31:04 +0000 (14:31 +0000)]
arm64: kexec: Use dcache ops macros instead of open-coding

kexec does dcache maintenance when it re-writes all memory. Our
dcache_by_line_op macro depends on reading the sanitised DminLine from
memory. Kexec may have overwritten this, so it open-codes the sequence.

dcache_by_line_op is a whole set of macros; it uses dcache_line_size,
which uses read_ctr to get the sanitised DminLine. Reading the DminLine is
the first thing dcache_by_line_op does.

Rename dcache_by_line_op to dcache_by_myline_op and take DminLine as an
argument. Kexec can now use the slightly smaller macro.

This makes up-coming changes to the dcache maintenance easier on
the eye.

Code generated by the existing callers is unchanged.

Suggested-by: James Morse <james.morse@arm.com>
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210930143113.1502553-7-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: kexec: skip relocation code for inplace kexec
Pasha Tatashin [Thu, 30 Sep 2021 14:31:03 +0000 (14:31 +0000)]
arm64: kexec: skip relocation code for inplace kexec

In the case of kdump, or when the segments are already in place, the
relocation is not needed; therefore the setup of the relocation function
and the call to it can be skipped.

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Suggested-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210930143113.1502553-6-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: kexec: flush image and lists during kexec load time
Pasha Tatashin [Thu, 30 Sep 2021 14:31:02 +0000 (14:31 +0000)]
arm64: kexec: flush image and lists during kexec load time

Currently, during kexec load we copy the relocation function and flush
it. However, we can also flush the kexec relocation buffers and, if the
new kernel image is already in place (i.e. a crash kernel), we can also
flush the new kernel image itself.

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210930143113.1502553-5-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: hibernate: abstract ttbr0 setup function
Pasha Tatashin [Thu, 30 Sep 2021 14:31:01 +0000 (14:31 +0000)]
arm64: hibernate: abstract ttbr0 setup function

Currently, only hibernate sets a custom ttbr0 with a safe idmapped
function. Kexec is also going to use this functionality once its
relocation code is idmapped.

Move the setup sequence to a dedicated helper, cpu_install_ttbr0(), for
installing a custom ttbr0.
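
A minimal sketch of such a helper, assuming the usual reserved-TTBR0
break-before-make dance (details may differ from the actual patch):

  static inline void cpu_install_ttbr0(phys_addr_t ttbr0, unsigned long t0sz)
  {
          /* Point TTBR0 at the reserved tables before switching. */
          cpu_set_reserved_ttbr0();
          local_flush_tlb_all();
          __cpu_set_tcr_t0sz(t0sz);

          /* Install the caller's page tables. */
          write_sysreg(ttbr0, ttbr0_el1);
          isb();
  }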

Suggested-by: James Morse <james.morse@arm.com>
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210930143113.1502553-4-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: trans_pgd: hibernate: Add trans_pgd_copy_el2_vectors
Pasha Tatashin [Thu, 30 Sep 2021 14:31:00 +0000 (14:31 +0000)]
arm64: trans_pgd: hibernate: Add trans_pgd_copy_el2_vectors

Users of trans_pgd may also need a copy of the vector table, because it
may also be overwritten if the linear map is overwritten.

Move the setup of the EL2 vectors from hibernate to trans_pgd, so it can
later be shared with kexec as well.

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210930143113.1502553-3-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: kernel: add helper for booted at EL2 and not VHE
Pasha Tatashin [Thu, 30 Sep 2021 14:30:59 +0000 (14:30 +0000)]
arm64: kernel: add helper for booted at EL2 and not VHE

Replace places that contain logic like this:

  is_hyp_mode_available() && !is_kernel_in_hyp_mode()

with a dedicated boolean function, is_hyp_nvhe(). This will be needed
later in kexec in order to switch back to EL2 sooner.
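
The helper is essentially a one-line wrapper around the existing checks:

  static inline bool is_hyp_nvhe(void)
  {
          /* Booted at EL2, but not running the kernel there (i.e. nVHE). */
          return is_hyp_mode_available() && !is_kernel_in_hyp_mode();
  }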

Suggested-by: James Morse <james.morse@arm.com>
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210930143113.1502553-2-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: kasan: mte: move GCR_EL1 switch to task switch when KASAN disabled
Peter Collingbourne [Fri, 24 Sep 2021 01:06:55 +0000 (18:06 -0700)]
arm64: kasan: mte: move GCR_EL1 switch to task switch when KASAN disabled

It is not necessary to write to GCR_EL1 on every kernel entry and
exit when HW tag-based KASAN is disabled because the kernel will not
execute any IRG instructions in that mode. Since accessing GCR_EL1
can be expensive on some microarchitectures, avoid doing so by moving
the access to task switch when HW tag-based KASAN is disabled.
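
Roughly, the per-task GCR_EL1 value is programmed from the context-switch
path rather than on kernel entry/exit when HW tag-based KASAN is off. A
sketch of the idea follows; the helper and field names here are
illustrative, not the exact patch:

  static void mte_update_gcr_excl(struct task_struct *task)
  {
          /*
           * The entry/exit path only restores GCR_EL1 when HW tag-based
           * KASAN is enabled, so write it at task switch otherwise.
           * (mte_ctrl / mask name illustrative.)
           */
          if (kasan_hw_tags_enabled())
                  return;

          write_sysreg_s(task->thread.mte_ctrl & SYS_GCR_EL1_EXCL_MASK,
                         SYS_GCR_EL1);
  }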

Signed-off-by: Peter Collingbourne <pcc@google.com>
Acked-by: Andrey Konovalov <andreyknvl@gmail.com>
Link: https://linux-review.googlesource.com/id/I78e90d60612a94c24344526f476ac4ff216e10d2
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210924010655.2886918-1-pcc@google.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: mm: update max_pfn after memory hotplug
Sudarshan Rajagopalan [Tue, 28 Sep 2021 18:51:49 +0000 (11:51 -0700)]
arm64: mm: update max_pfn after memory hotplug

After new memory blocks have been hotplugged, max_pfn and max_low_pfn
need updating to reflect the new PFNs being hot-added to the system.
Without this patch, debug-related functions that use max_pfn, such as
get_max_dump_pfn() or read_page_owner(), will not work with any page in
memory that is hot-added after boot.
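
The fix boils down to bumping the globals once the hot-added range has
been mapped in arch_add_memory(), roughly:

  /*
   * Make max_pfn/max_low_pfn cover the hot-added range so users such
   * as get_max_dump_pfn() can see the new memory.
   */
  max_pfn = PFN_UP(start + size);
  max_low_pfn = max_pfn;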

Fixes: 4ab215061554 ("arm64: Add memory hotplug support")
Signed-off-by: Sudarshan Rajagopalan <quic_sudaraja@quicinc.com>
Signed-off-by: Chris Goldsworthy <quic_cgoldswo@quicinc.com>
Acked-by: David Hildenbrand <david@redhat.com>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: Georgi Djakov <quic_c_gdjako@quicinc.com>
Tested-by: Georgi Djakov <quic_c_gdjako@quicinc.com>
Link: https://lore.kernel.org/r/a51a27ee7be66024b5ce626310d673f24107bcb8.1632853776.git.quic_cgoldswo@quicinc.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64/mm: Add pud_sect_supported()
Anshuman Khandual [Mon, 20 Sep 2021 09:29:31 +0000 (14:59 +0530)]
arm64/mm: Add pud_sect_supported()

Section mapping at the PUD level is supported only with 4K pages, and
currently it gets verified with explicit #ifdef or IS_ENABLED()
constructs. This adds a new helper, pud_sect_supported(), for this
purpose, which particularly cleans up the HugeTLB code path. It also
updates relevant switch statements with checks for __PAGETABLE_PMD_FOLDED
in order to avoid build failures caused by two identical switch case
values in those code blocks.
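
The helper itself reduces to a page-size check, something like:

  static inline bool pud_sect_supported(void)
  {
          /* PUD-level section (block) mappings only exist with 4K pages. */
          return PAGE_SIZE == SZ_4K;
  }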

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/1632130171-472-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64/traps: Avoid unnecessary kernel/user pointer conversion
Amit Daniel Kachhap [Fri, 17 Sep 2021 05:58:11 +0000 (11:28 +0530)]
arm64/traps: Avoid unnecessary kernel/user pointer conversion

Annotating a pointer from kernel to __user and then back again requires
an extra __force annotation to silence the sparse warning. In
call_undef_hook() this unnecessary complexity can be avoided by changing
the intermediate user pointer to an unsigned long.

This way there is no interchangeable use of user and kernel pointers and
the code is consistent.

Note: This patch adds no functional changes to code.
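
The shape of the code in call_undef_hook() after the change is roughly
(a sketch, with error handling trimmed):

  /*
   * Keep the faulting address as a plain unsigned long and only form a
   * __user pointer at the get_user() call itself, so there is no
   * kernel/user pointer round trip for sparse to warn about.
   */
  unsigned long pc = instruction_pointer(regs);
  __le32 instr_le;
  u32 instr;

  if (get_user(instr_le, (__le32 __user *)pc))
          return;
  instr = le32_to_cpu(instr_le);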

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210917055811.22341-1-amit.kachhap@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoselftests: arm64: Verify that all possible vector lengths are handled
Mark Brown [Wed, 29 Sep 2021 15:19:25 +0000 (16:19 +0100)]
selftests: arm64: Verify that all possible vector lengths are handled

As part of the enumeration interface for setting vector lengths it is
valid to set vector lengths not supported by the system; these will be
rounded to a supported vector length and returned from the prctl(). Add a
test which exercises this for every valid vector length and makes sure
that the return value is as expected and that this is reflected in the
actual system state.
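
The core of such a check can be sketched as follows, where vl is the
requested length (error handling and reporting trimmed; constants from
<sys/prctl.h>):

  #include <sys/prctl.h>

  /*
   * Request a vector length that may not be supported and check that
   * the kernel rounds it and reports the rounded value.
   */
  int ret = prctl(PR_SVE_SET_VL, vl);
  if (ret < 0)
          return;                 /* prctl() itself failed */

  unsigned int new_vl = ret & PR_SVE_VL_LEN_MASK;
  /*
   * new_vl is what was actually configured: compare it against the
   * expected rounding and against a follow-up PR_SVE_GET_VL.
   */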

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Tomohiro Misono <misono.tomohiro@jp.fujitsu.com>
Link: https://lore.kernel.org/r/20210929151925.9601-5-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoselftests: arm64: Fix and enable test for setting current VL in vec-syscfg
Mark Brown [Wed, 29 Sep 2021 15:19:24 +0000 (16:19 +0100)]
selftests: arm64: Fix and enable test for setting current VL in vec-syscfg

We had some test code for verifying that we can write the current VL via
the prctl() interface, but the condition for the test was inverted, which
wasn't noticed as the test was never actually hooked up to the array of
tests we execute. Fix this.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210929151925.9601-4-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoselftests: arm64: Remove bogus error check on writing to files
Mark Brown [Wed, 29 Sep 2021 15:19:23 +0000 (16:19 +0100)]
selftests: arm64: Remove bogus error check on writing to files

Due to some refactoring with the error handling we ended up mangling things
so we never actually set ret and therefore shouldn't be checking it.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210929151925.9601-3-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoselftests: arm64: Fix printf() format mismatch in vec-syscfg
Mark Brown [Wed, 29 Sep 2021 15:19:22 +0000 (16:19 +0100)]
selftests: arm64: Fix printf() format mismatch in vec-syscfg

The format for this error message calls for the plain-text version of the
error, but we weren't supplying it.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210929151925.9601-2-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoselftests: arm64: Move FPSIMD in SVE ptrace test into a function
Mark Brown [Mon, 13 Sep 2021 12:55:05 +0000 (13:55 +0100)]
selftests: arm64: Move FPSIMD in SVE ptrace test into a function

Now that all the other tests are in functions rather than inline in the
main parent process function, also move the test for accessing the FPSIMD
registers via the SVE regset out into its own function.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210913125505.52619-9-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoselftests: arm64: More comprehensively test the SVE ptrace interface
Mark Brown [Mon, 13 Sep 2021 12:55:04 +0000 (13:55 +0100)]
selftests: arm64: More comprehensively test the SVE ptrace interface

Currently the selftest for the SVE register set is not quite as thorough
as is desirable - it only validates that the value of a single Z register
is not modified by a partial write to a lower numbered Z register after
having previously been set through the FPSIMD regset.

Make this more thorough:
 - Test the ability to set vector lengths and enumerate those supported in
   the system.
 - Validate data in all Z and P registers, plus FPSR and FPCR.
 - Test reads via the FPSIMD regset after set via the SVE regset.

There are still some oversights, the main one being that we don't
currently test FFR: doing so would require generating a pattern in FFR,
and this rewrite is primarily motivated by SME's streaming SVE, which
doesn't have FFR. Update the TODO to reflect the oversights that occurred
to me (and fix an adjacent typo in there).

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210913125505.52619-8-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoselftests: arm64: Verify interoperation of SVE and FPSIMD register sets
Mark Brown [Mon, 13 Sep 2021 12:55:03 +0000 (13:55 +0100)]
selftests: arm64: Verify interoperation of SVE and FPSIMD register sets

After setting the FPSIMD registers via the SVE register set read them back
via the FPSIMD register set, validating that the two register sets are
interoperating and that the values we thought we set made it into the
child process.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210913125505.52619-7-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoselftests: arm64: Clarify output when verifying SVE register set
Mark Brown [Mon, 13 Sep 2021 12:55:02 +0000 (13:55 +0100)]
selftests: arm64: Clarify output when verifying SVE register set

When verifying setting a Z register via ptrace we check each byte by hand,
iterating over the buffer using a pointer called p and treating each
register value written as a test. This creates output referring to "p[X]"
which is confusing since SVE also has predicate registers Pn. Tweak the
output to avoid confusion here.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210913125505.52619-6-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoselftests: arm64: Document what the SVE ptrace test is doing
Mark Brown [Mon, 13 Sep 2021 12:55:01 +0000 (13:55 +0100)]
selftests: arm64: Document what the SVE ptrace test is doing

Before we go modifying it further let's add some comments and output
clarifications explaining what this test is actually doing.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210913125505.52619-5-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoselftests: arm64: Remove extraneous register setting code
Mark Brown [Mon, 13 Sep 2021 12:55:00 +0000 (13:55 +0100)]
selftests: arm64: Remove extraneous register setting code

For some reason the SVE ptrace test code starts off by setting values in
some of the SVE vector registers in the parent process which it then never
interacts with when verifying the ptrace interfaces. This is not especially
relevant to what's being tested and somewhat confusing when reading the
code so let's remove it.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210913125505.52619-4-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoselftests: arm64: Don't log child creation as a test in SVE ptrace test
Mark Brown [Mon, 13 Sep 2021 12:54:59 +0000 (13:54 +0100)]
selftests: arm64: Don't log child creation as a test in SVE ptrace test

Currently we log the creation of the child process as a test but it's not
really relevant to what we're trying to test and can make the output a
little confusing so don't do that.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210913125505.52619-3-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoselftests: arm64: Use a define for the number of SVE ptrace tests to be run
Mark Brown [Mon, 13 Sep 2021 12:54:58 +0000 (13:54 +0100)]
selftests: arm64: Use a define for the number of SVE ptrace tests to be run

Partly in preparation for future refactoring, move from hard-coding the
number of tests in main() to using a #define at the top of the source
instead.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210913125505.52619-2-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoarm64: mm: Drop pointless call to set_max_mapnr()
Will Deacon [Wed, 29 Sep 2021 12:54:04 +0000 (13:54 +0100)]
arm64: mm: Drop pointless call to set_max_mapnr()

set_max_mapnr() is an empty stub function if CONFIG_NUMA=y, otherwise it
assigns to the 'max_mapnr' variable which is used to provide a generic
pfn_valid() implementation if CONFIG_MMU=n.

Since we don't support nommu on arm64, drop the pointless call to
set_max_mapnr() from mem_init().

Link: https://lore.kernel.org/r/130a50d7-92fd-31fa-261e-f73dadcb4fcf@redhat.com
Signed-off-by: Will Deacon <will@kernel.org>
2 years agoLinux 5.15-rc3
Linus Torvalds [Sun, 26 Sep 2021 21:08:19 +0000 (14:08 -0700)]
Linux 5.15-rc3

2 years agoMerge tag '5.15-rc2-ksmbd-fixes' of git://git.samba.org/ksmbd
Linus Torvalds [Sun, 26 Sep 2021 19:46:45 +0000 (12:46 -0700)]
Merge tag '5.15-rc2-ksmbd-fixes' of git://git.samba.org/ksmbd

Pull ksmbd fixes from Steve French:
 "Five fixes for the ksmbd kernel server, including three security
  fixes:

   - remove follow symlinks support

   - use LOOKUP_BENEATH to prevent out of share access

   - SMB3 compounding security fix

   - fix for returning the default streams correctly, fixing a bug when
     writing ppt or doc files from some clients

   - logging more clearly that ksmbd is experimental (at module load
     time)"

* tag '5.15-rc2-ksmbd-fixes' of git://git.samba.org/ksmbd:
  ksmbd: use LOOKUP_BENEATH to prevent the out of share access
  ksmbd: remove follow symlinks support
  ksmbd: check protocol id in ksmbd_verify_smb_message()
  ksmbd: add default data stream name in FILE_STREAM_INFORMATION
  ksmbd: log that server is experimental at module load

2 years agoMerge tag 'edac_urgent_for_v5.15_rc3' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Sun, 26 Sep 2021 19:18:10 +0000 (12:18 -0700)]
Merge tag 'edac_urgent_for_v5.15_rc3' of git://git./linux/kernel/git/ras/ras

Pull EDAC fixes from Borislav Petkov:
 "Fix two EDAC drivers using the wrong value type for the DIMM mode"

* tag 'edac_urgent_for_v5.15_rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/ras/ras:
  EDAC/dmc520: Assign the proper type to dimm->edac_mode
  EDAC/synopsys: Fix wrong value type assignment for edac_mode

2 years agoMerge tag 'thermal-v5.15-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/therma...
Linus Torvalds [Sun, 26 Sep 2021 19:11:58 +0000 (12:11 -0700)]
Merge tag 'thermal-v5.15-rc3' of git://git./linux/kernel/git/thermal/linux

Pull thermal fixes from Daniel Lezcano:

 - Fix thermal shutdown after a suspend/resume due to a wrong TCC value
   restored on Intel platform (Antoine Tenart)

 - Fix potential buffer overflow when building the list of policies. The
   buffer size is not updated after writing to it (Dan Carpenter)

 - Fix wrong check against IS_ERR instead of NULL (Ansuel Smith)

* tag 'thermal-v5.15-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/thermal/linux:
  thermal/drivers/tsens: Fix wrong check for tzd in irq handlers
  thermal/core: Potential buffer overflow in thermal_build_list_of_policies()
  thermal/drivers/int340x: Do not set a wrong tcc offset on resume

2 years agoMerge tag 'x86-urgent-2021-09-26' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Sun, 26 Sep 2021 17:09:20 +0000 (10:09 -0700)]
Merge tag 'x86-urgent-2021-09-26' of git://git./linux/kernel/git/tip/tip

Pull x86 fixes from Thomas Gleixner:
 "A set of fixes for X86:

   - Prevent sending the wrong signal when protection keys are enabled
     and the kernel handles a fault in the vsyscall emulation.

   - Invoke early_reserve_memory() before invoking e820_memory_setup()
     which is required to make the Xen dom0 e820 hooks work correctly.

   - Use the correct data type for the SETZ operand in the EMQCMDS
     instruction wrapper.

   - Prevent undefined behaviour due to potential unaligned accesses in
     the instruction decoder library"

* tag 'x86-urgent-2021-09-26' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/insn, tools/x86: Fix undefined behavior due to potential unaligned accesses
  x86/asm: Fix SETZ size enqcmds() build failure
  x86/setup: Call early_reserve_memory() earlier
  x86/fault: Fix wrong signal when vsyscall fails with pkey

2 years agoMerge tag 'timers-urgent-2021-09-26' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Sun, 26 Sep 2021 17:00:16 +0000 (10:00 -0700)]
Merge tag 'timers-urgent-2021-09-26' of git://git./linux/kernel/git/tip/tip

Pull timer fix from Thomas Gleixner:
 "A single fix for the recently introduced regression in posix CPU
  timers which failed to stop the timer when requested. That caused
  unexpected signals to be sent to the process/thread causing
  malfunction"

* tag 'timers-urgent-2021-09-26' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  posix-cpu-timers: Prevent spuriously armed 0-value itimer

2 years agoMerge tag 'irq-urgent-2021-09-26' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Sun, 26 Sep 2021 16:55:22 +0000 (09:55 -0700)]
Merge tag 'irq-urgent-2021-09-26' of git://git./linux/kernel/git/tip/tip

Pull irq fixes from Thomas Gleixner:
 "A set of fixes for interrupt chip drivers:

   - Work around a bad GIC integration on a Renesas platform which can't
     handle byte-sized MMIO access

   - Plug a potential memory leak in the GICv4 driver

   - Fix a regression in the Armada 370-XP IPI code which was caused by
     issuing EOI instead of ACK.

   - A couple of small fixes here and there"

* tag 'irq-urgent-2021-09-26' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  irqchip/gic: Work around broken Renesas integration
  irqchip/renesas-rza1: Use semicolons instead of commas
  irqchip/gic-v3-its: Fix potential VPE leak on error
  irqchip/goldfish-pic: Select GENERIC_IRQ_CHIP to fix build
  irqchip/mbigen: Repair non-kernel-doc notation
  irqdomain: Change the type of 'size' in __irq_domain_add() to be consistent
  irqchip/armada-370-xp: Fix ack/eoi breakage
  Documentation: Fix irq-domain.rst build warning

2 years agoMerge branch 'akpm' (patches from Andrew)
Linus Torvalds [Sat, 25 Sep 2021 23:20:34 +0000 (16:20 -0700)]
Merge branch 'akpm' (patches from Andrew)

Merge misc fixes from Andrew Morton:
 "16 patches.

  Subsystems affected by this patch series: xtensa, sh, ocfs2, scripts,
  lib, and mm (memory-failure, kasan, damon, shmem, tools, pagecache,
  debug, and pagemap)"

* emailed patches from Andrew Morton <akpm@linux-foundation.org>:
  mm: fix uninitialized use in overcommit_policy_handler
  mm/memory_failure: fix the missing pte_unmap() call
  kasan: always respect CONFIG_KASAN_STACK
  sh: pgtable-3level: fix cast to pointer from integer of different size
  mm/debug: sync up latest migrate_reason to migrate_reason_names
  mm/debug: sync up MR_CONTIG_RANGE and MR_LONGTERM_PIN
  mm: fs: invalidate bh_lrus for only cold path
  lib/zlib_inflate/inffast: check config in C to avoid unused function warning
  tools/vm/page-types: remove dependency on opt_file for idle page tracking
  scripts/sorttable: riscv: fix undeclared identifier 'EM_RISCV' error
  ocfs2: drop acl cache for directories too
  mm/shmem.c: fix judgment error in shmem_is_huge()
  xtensa: increase size of gcc stack frame check
  mm/damon: don't use strnlen() with known-bogus source length
  kasan: fix Kconfig check of CC_HAS_WORKING_NOSANITIZE_ADDRESS
  mm, hwpoison: add is_free_buddy_page() in HWPoisonHandlable()

2 years agoMerge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi
Linus Torvalds [Sat, 25 Sep 2021 23:05:56 +0000 (16:05 -0700)]
Merge tag 'scsi-fixes' of git://git./linux/kernel/git/jejb/scsi

Pull SCSI fixes from James Bottomley:
 "Thirty-three fixes, I'm afraid.

  Essentially the build-up from the last couple of weeks while I've been
  dealing with Linux Plumbers conference infrastructure issues. It's
  mostly the usual assortment of spelling fixes and minor corrections.

  The only core relevant changes are to the sd driver to reduce the spin
  up message spew and fix a small memory leak on the freeing path"

* tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (33 commits)
  scsi: ses: Retry failed Send/Receive Diagnostic commands
  scsi: target: Fix spelling mistake "CONFLIFT" -> "CONFLICT"
  scsi: lpfc: Fix gcc -Wstringop-overread warning, again
  scsi: lpfc: Use correct scnprintf() limit
  scsi: lpfc: Fix sprintf() overflow in lpfc_display_fpin_wwpn()
  scsi: core: Remove 'current_tag'
  scsi: acornscsi: Remove tagged queuing vestiges
  scsi: fas216: Kill scmd->tag
  scsi: qla2xxx: Restore initiator in dual mode
  scsi: ufs: core: Unbreak the reset handler
  scsi: sd_zbc: Support disks with more than 2**32 logical blocks
  scsi: ufs: core: Revert "scsi: ufs: Synchronize SCSI and UFS error handling"
  scsi: bsg: Fix device unregistration
  scsi: sd: Make sd_spinup_disk() less noisy
  scsi: ufs: ufs-pci: Fix Intel LKF link stability
  scsi: mpt3sas: Clean up some inconsistent indenting
  scsi: megaraid: Clean up some inconsistent indenting
  scsi: sr: Fix spelling mistake "does'nt" -> "doesn't"
  scsi: Remove SCSI CDROM MAINTAINERS entry
  scsi: megaraid: Fix Coccinelle warning
  ...

2 years agoMerge tag 'io_uring-5.15-2021-09-25' of git://git.kernel.dk/linux-block
Linus Torvalds [Sat, 25 Sep 2021 22:51:08 +0000 (15:51 -0700)]
Merge tag 'io_uring-5.15-2021-09-25' of git://git.kernel.dk/linux-block

Pull io_uring fixes from Jens Axboe:
 "This one looks a bit bigger than it is, but that's mainly because 2/3
  of it is enabling IORING_OP_CLOSE to close direct file descriptors.

  We've had a few folks using them and finding it confusing that the way
  to close them is through using -1 for file update, this just brings
  API symmetry for direct descriptors. Hence I think we should just do
  this now and have a better API for 5.15 release. There's some room for
  de-duplicating the close code, but we're leaving that for the next
  merge window.

  Outside of that, just small fixes:

   - Poll race fixes (Hao)

   - io-wq core dump exit fix (me)

   - Reschedule around potentially intensive tctx and buffer iterators
     on teardown (me)

   - Fix for always ending up punting files update to io-wq (me)

   - Put the provided buffer meta data under memcg accounting (me)

   - Tweak for io_write(), removing dead code that was added with the
     iterator changes in this release (Pavel)"

* tag 'io_uring-5.15-2021-09-25' of git://git.kernel.dk/linux-block:
  io_uring: make OP_CLOSE consistent with direct open
  io_uring: kill extra checks in io_write()
  io_uring: don't punt files update to io-wq unconditionally
  io_uring: put provided buffer meta data under memcg accounting
  io_uring: allow conditional reschedule for intensive iterators
  io_uring: fix potential req refcount underflow
  io_uring: fix missing set of EPOLLONESHOT for CQ ring overflow
  io_uring: fix race between poll completion and cancel_hash insertion
  io-wq: ensure we exit if thread group is exiting

2 years agoMerge tag 'block-5.15-2021-09-25' of git://git.kernel.dk/linux-block
Linus Torvalds [Sat, 25 Sep 2021 22:44:05 +0000 (15:44 -0700)]
Merge tag 'block-5.15-2021-09-25' of git://git.kernel.dk/linux-block

Pull block fixes from Jens Axboe:

 - NVMe pull request via Christoph:
      - keep ctrl->namespaces ordered (Christoph Hellwig)
      - fix incorrect h2cdata pdu offset accounting in nvme-tcp (Sagi
        Grimberg)
      - handled updated hw_queues in nvme-fc more carefully (Daniel
        Wagner, James Smart)

 - md lock order fix (Christoph)

 - fallocate locking fix (Ming)

 - blktrace UAF fix (Zhihao)

 - rq-qos bio tracking fix (Ming)

* tag 'block-5.15-2021-09-25' of git://git.kernel.dk/linux-block:
  block: hold ->invalidate_lock in blkdev_fallocate
  blktrace: Fix uaf in blk_trace access after removing by sysfs
  block: don't call rq_qos_ops->done_bio if the bio isn't tracked
  md: fix a lock order reversal in md_alloc
  nvme: keep ctrl->namespaces ordered
  nvme-tcp: fix incorrect h2cdata pdu offset accounting
  nvme-fc: remove freeze/unfreeze around update_nr_hw_queues
  nvme-fc: avoid race between time out and tear down
  nvme-fc: update hardware queues before using them

2 years agoMerge tag 'for-linus-5.15b-rc3-tag' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Sat, 25 Sep 2021 22:37:31 +0000 (15:37 -0700)]
Merge tag 'for-linus-5.15b-rc3-tag' of git://git./linux/kernel/git/xen/tip

Pull xen fixes from Juergen Gross:
 "Some minor cleanups and fixes of some theoretical bugs, as well as a
  fix of a bug introduced in 5.15-rc1"

* tag 'for-linus-5.15b-rc3-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip:
  xen/x86: fix PV trap handling on secondary processors
  xen/balloon: fix balloon kthread freezing
  swiotlb-xen: this is PV-only on x86
  xen/pci-swiotlb: reduce visibility of symbols
  PCI: only build xen-pcifront in PV-enabled environments
  swiotlb-xen: ensure to issue well-formed XENMEM_exchange requests
  Xen/gntdev: don't ignore kernel unmapping error
  xen/x86: drop redundant zeroing from cpu_initialize_context()