platform/kernel/linux-starfive.git
He Zhe [Tue, 23 Feb 2021 08:25:34 +0000 (16:25 +0800)]
arm64: uprobe: Return EOPNOTSUPP for AARCH32 instruction probing

As stated in linux/errno.h, ENOTSUPP should never be seen by user programs.
When we set up a uprobe with 32-bit perf on an arm64 kernel, we see the
following vague error with no useful hint:

The sys_perf_event_open() syscall returned with 524 (INTERNAL ERROR:
strerror_r(524, [buf], 128)=22)

Use EOPNOTSUPP instead to indicate such cases.
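
A minimal sketch of the change, assuming the AARCH32 check lives in
arch_uprobe_analyze_insn() (arch/arm64/kernel/probes/uprobes.c) and uses
the MMCF_AARCH32 flag:

    /* AARCH32 instruction probing is not yet supported; return an
     * errno that userspace is allowed to see. */
    if (mm->context.flags & MMCF_AARCH32)
            return -EOPNOTSUPP;     /* was -ENOTSUPP, i.e. the opaque 524 */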

Signed-off-by: He Zhe <zhe.he@windriver.com>
Link: https://lore.kernel.org/r/20210223082535.48730-1-zhe.he@windriver.com
Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
Pavel Tatashin [Fri, 19 Feb 2021 19:51:42 +0000 (14:51 -0500)]
kexec: move machine_kexec_post_load() to public interface

The kernel test robot reports the following compiler warning:

  | arch/arm64/kernel/machine_kexec.c:62:5: warning: no previous prototype for
  | function 'machine_kexec_post_load' [-Wmissing-prototypes]
  |    int machine_kexec_post_load(struct kimage *kimage)

Fix it by moving the declaration of machine_kexec_post_load() from
kexec_internal.h to the public header instead.
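
A sketch of the end state, with the declaration assumed to land in
include/linux/kexec.h where arch/arm64/kernel/machine_kexec.c can see it:

    /* include/linux/kexec.h (sketch) */
    int machine_kexec_post_load(struct kimage *image);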

Reported-by: kernel test robot <lkp@intel.com>
Link: https://lore.kernel.org/linux-arm-kernel/202102030727.gqTokACH-lkp@intel.com
Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Link: https://lore.kernel.org/r/20210219195142.13571-1-pasha.tatashin@soleen.com
Fixes: 4c3c31230c91 ("arm64: kexec: move relocation function setup")
Signed-off-by: Will Deacon <will@kernel.org>
Shaoying Xu [Tue, 16 Feb 2021 18:32:34 +0000 (18:32 +0000)]
arm64 module: set plt* section addresses to 0x0

The plt* and .text.ftrace_trampoline sections specified for arm64 have
non-zero addresses. Non-zero section addresses in a relocatable ELF confuse
GDB when it tries to compute the section offsets, and it ends up printing
wrong symbol addresses. Therefore, set them to zero, which mirrors
the change in commit 5d8591bc0fba ("module: set ksymtab/kcrctab* section
addresses to 0x0").
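
For reference, a sketch of how the fixed module linker script fragment
(arch/arm64/include/asm/module.lds.h) might look, with an explicit 0
pinning each section address:

    SECTIONS {
            .plt 0 (NOLOAD) : { BYTE(0) }
            .init.plt 0 (NOLOAD) : { BYTE(0) }
            .text.ftrace_trampoline 0 (NOLOAD) : { BYTE(0) }
    }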

Reported-by: Frank van der Linden <fllinden@amazon.com>
Signed-off-by: Shaoying Xu <shaoyi@amazon.com>
Cc: <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20210216183234.GA23876@amazon.com
Signed-off-by: Will Deacon <will@kernel.org>
qiuguorui1 [Thu, 18 Feb 2021 12:59:00 +0000 (20:59 +0800)]
arm64: kexec_file: fix memory leakage in create_dtb() when fdt_open_into() fails

In create_dtb(), if fdt_open_into() fails, we need to vfree the buffer
before returning.
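
A sketch of the fixed error path, assuming create_dtb() reports the
failure as -EINVAL:

    ret = fdt_open_into(initial_boot_params, buf, buf_size);
    if (ret) {
            vfree(buf);             /* previously leaked on this path */
            return -EINVAL;
    }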

Fixes: 52b2a8af7436 ("arm64: kexec_file: load initrd and device-tree")
Cc: stable@vger.kernel.org # v5.0
Signed-off-by: qiuguorui1 <qiuguorui1@huawei.com>
Link: https://lore.kernel.org/r/20210218125900.6810-1-qiuguorui1@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
Will Deacon [Thu, 18 Feb 2021 14:03:46 +0000 (14:03 +0000)]
arm64: spectre: Prevent lockdep splat on v4 mitigation enable path

The Spectre-v4 workaround is re-configured when resuming from suspend,
as the firmware may have re-enabled the mitigation despite the user
previously asking for it to be disabled.

Enabling or disabling the workaround can result in an undefined
instruction exception on CPUs which implement PSTATE.SSBS but only allow
it to be configured by adjusting the SPSR on exception return. We handle
this by installing an 'undef hook' which effectively emulates the access.

Installing this hook requires us to take a couple of spinlocks, both to
avoid corrupting the internal list of hooks and to ensure that we
don't run into an unhandled exception. Unfortunately, when resuming from
suspend, we haven't yet called rcu_idle_exit() and so lockdep gets angry
about "suspicious RCU usage". In doing so, it tries to print a warning,
which leads it to get even more suspicious, this time about itself:

 |  rcu_scheduler_active = 2, debug_locks = 1
 |  RCU used illegally from extended quiescent state!
 |  1 lock held by swapper/0:
 |   #0: (logbuf_lock){-.-.}-{2:2}, at: vprintk_emit+0x88/0x198
 |
 |  Call trace:
 |   dump_backtrace+0x0/0x1d8
 |   show_stack+0x18/0x24
 |   dump_stack+0xe0/0x17c
 |   lockdep_rcu_suspicious+0x11c/0x134
 |   trace_lock_release+0xa0/0x160
 |   lock_release+0x3c/0x290
 |   _raw_spin_unlock+0x44/0x80
 |   vprintk_emit+0xbc/0x198
 |   vprintk_default+0x44/0x6c
 |   vprintk_func+0x1f4/0x1fc
 |   printk+0x54/0x7c
 |   lockdep_rcu_suspicious+0x30/0x134
 |   trace_lock_acquire+0xa0/0x188
 |   lock_acquire+0x50/0x2fc
 |   _raw_spin_lock+0x68/0x80
 |   spectre_v4_enable_mitigation+0xa8/0x30c
 |   __cpu_suspend_exit+0xd4/0x1a8
 |   cpu_suspend+0xa0/0x104
 |   psci_cpu_suspend_enter+0x3c/0x5c
 |   psci_enter_idle_state+0x44/0x74
 |   cpuidle_enter_state+0x148/0x2f8
 |   cpuidle_enter+0x38/0x50
 |   do_idle+0x1f0/0x2b4

Prevent these splats by running __cpu_suspend_exit() with RCU watching.
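
A sketch of the fix, using the stock RCU_NONIDLE() helper so that RCU is
watching for the duration of the call:

    /* arch/arm64/kernel/suspend.c (sketch) */
    RCU_NONIDLE(__cpu_suspend_exit());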

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Saravana Kannan <saravanak@google.com>
Suggested-by: "Paul E . McKenney" <paulmck@kernel.org>
Reported-by: Sami Tolvanen <samitolvanen@google.com>
Fixes: c28762070ca6 ("arm64: Rewrite Spectre-v4 mitigation code")
Cc: <stable@vger.kernel.org>
Acked-by: Paul E. McKenney <paulmck@kernel.org>
Acked-by: Marc Zyngier <maz@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210218140346.5224-1-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
Will Deacon [Fri, 12 Feb 2021 15:17:42 +0000 (15:17 +0000)]
Merge branch 'for-next/vdso' into for-next/core

vDSO build improvements.

* for-next/vdso:
  arm64: Support running gen_vdso_offsets.sh with BSD userland.
  arm64: do not descend to vdso directories twice

Will Deacon [Fri, 12 Feb 2021 15:15:53 +0000 (15:15 +0000)]
Merge branch 'for-next/topology' into for-next/core

Cleanup to the AMU support code and initialisation rework to support
cpufreq drivers built as modules.

* for-next/topology:
  arm64: topology: Make AMUs work with modular cpufreq drivers
  arm64: topology: Reorder init_amu_fie() a bit
  arm64: topology: Avoid the have_policy check

Will Deacon [Fri, 12 Feb 2021 15:14:22 +0000 (15:14 +0000)]
Merge branch 'for-next/stacktrace' into for-next/core

Remove synthetic frame record from exception stack when entering from
userspace.

* for-next/stacktrace:
  arm64: remove EL0 exception frame record

Will Deacon [Fri, 12 Feb 2021 15:13:57 +0000 (15:13 +0000)]
Merge branch 'for-next/selftests' into for-next/core

Trivial cleanup to one of the MTE selftests.

* for-next/selftests:
  arm64: mte: style: Simplify bool comparison

Will Deacon [Fri, 12 Feb 2021 15:13:14 +0000 (15:13 +0000)]
Merge branch 'for-next/rng' into for-next/core

Add support for the TRNG firmware call introduced by Arm spec DEN0098.

* for-next/rng:
  arm64: Add support for SMCCC TRNG entropy source
  firmware: smccc: Introduce SMCCC TRNG framework
  firmware: smccc: Add SMCCC TRNG function call IDs

Will Deacon [Fri, 12 Feb 2021 15:11:11 +0000 (15:11 +0000)]
Merge branch 'for-next/random' into for-next/core

Avoid calling arch_get_random_seed_long() from add_interrupt_randomness()
as this can result in a firmware call on some arm64 systems.

* for-next/random:
  random: avoid arch_get_random_seed_long() when collecting IRQ randomness

Will Deacon [Fri, 12 Feb 2021 15:09:34 +0000 (15:09 +0000)]
Merge branch 'for-next/perf' into for-next/core

Perf and PMU updates including support for Cortex-A78 and the v8.3 SPE
extensions.

* for-next/perf:
  drivers/perf: Replace spin_lock_irqsave to spin_lock
  dt-bindings: arm: add Cortex-A78 binding
  arm64: perf: add support for Cortex-A78
  arm64: perf: Constify static attribute_group structs
  drivers/perf: Prevent forced unbinding of ARM_DMC620_PMU drivers
  perf/arm-cmn: Move IRQs when migrating context
  perf/arm-cmn: Fix PMU instance naming
  perf: Constify static struct attribute_group
  perf: hisi: Constify static struct attribute_group
  perf/imx_ddr: Constify static struct attribute_group
  perf: qcom: Constify static struct attribute_group
  drivers/perf: Add support for ARMv8.3-SPE

Will Deacon [Fri, 12 Feb 2021 15:07:34 +0000 (15:07 +0000)]
Merge branch 'for-next/misc' into for-next/core

Miscellaneous arm64 changes for 5.12.

* for-next/misc:
  arm64: Make CPU_BIG_ENDIAN depend on ld.bfd or ld.lld 13.0.0+
  arm64: vmlinux.ld.S: add assertion for tramp_pg_dir offset
  arm64: vmlinux.ld.S: add assertion for reserved_pg_dir offset
  arm64/ptdump:display the Linear Mapping start marker
  arm64: ptrace: Fix missing return in hw breakpoint code
  KVM: arm64: Move __hyp_set_vectors out of .hyp.text
  arm64: Include linux/io.h in mm/mmap.c
  arm64: cacheflush: Remove stale comment
  arm64: mm: Remove unused header file
  arm64/sparsemem: reduce SECTION_SIZE_BITS
  arm64/mm: Add warning for outside range requests in vmemmap_populate()
  arm64: Drop workaround for broken 'S' constraint with GCC 4.9

Will Deacon [Fri, 12 Feb 2021 15:03:53 +0000 (15:03 +0000)]
Merge branch 'for-next/kexec' into for-next/core

Significant steps along the road to leaving the MMU enabled during kexec
relocation.

* for-next/kexec:
  arm64: hibernate: add __force attribute to gfp_t casting
  arm64: kexec: arm64_relocate_new_kernel don't use x0 as temp
  arm64: kexec: arm64_relocate_new_kernel clean-ups and optimizations
  arm64: kexec: call kexec_image_info only once
  arm64: kexec: move relocation function setup
  arm64: trans_pgd: hibernate: idmap the single page that holds the copy page routines
  arm64: mm: Always update TCR_EL1 from __cpu_set_tcr_t0sz()
  arm64: trans_pgd: pass NULL instead of init_mm to *_populate functions
  arm64: trans_pgd: pass allocator trans_pgd_create_copy
  arm64: trans_pgd: make trans_pgd_map_page generic
  arm64: hibernate: move page handling function to new trans_pgd.c
  arm64: hibernate: variable pudp is used instead of pd4dp
  arm64: kexec: make dtb_mem always enabled

Will Deacon [Fri, 12 Feb 2021 14:59:10 +0000 (14:59 +0000)]
Merge branch 'for-next/faultaround' into for-next/core

Initialise prefaulted PTEs as 'old' for arm64 when hardware access-flag
updates are supported, which drastically improves vmscan performance.

* for-next/faultaround:
  mm: filemap: Fix microblaze build failure with 'mmu_defconfig'
  mm/nommu: Fix return type of filemap_map_pages()
  mm: Mark anonymous struct field of 'struct vm_fault' as 'const'
  mm: Use static initialisers for immutable fields of 'struct vm_fault'
  mm: Avoid modifying vmf.address in __collapse_huge_page_swapin()
  mm: Pass 'address' to map to do_set_pte() and drop FAULT_FLAG_PREFAULT
  mm: Move immutable fields of 'struct vm_fault' into anonymous struct
  arm64: mm: Implement arch_wants_old_prefaulted_pte()
  mm: Allow architectures to request 'old' entries when prefaulting
  mm: Cleanup faultaround and finish_fault() codepaths

Will Deacon [Fri, 12 Feb 2021 14:57:13 +0000 (14:57 +0000)]
Merge branch 'for-next/errata' into for-next/core

Rework of the workaround for Cortex-A76 erratum 1463225 to fit in better
with the ongoing exception entry cleanups, and changes to the detection
code for Cortex-A55 erratum 1024718, since it applies to all revisions of
the silicon.

* for-next/errata:
  arm64: entry: consolidate Cortex-A76 erratum 1463225 workaround
  arm64: Extend workaround for erratum 1024718 to all versions of Cortex-A55

Will Deacon [Fri, 12 Feb 2021 14:54:55 +0000 (14:54 +0000)]
Merge branch 'for-next/crypto' into for-next/core

Introduce a new macro to allow yielding the vector unit if preemption
is required. The initial users of this are being merged via the crypto
tree for 5.12.

* for-next/crypto:
  arm64: assembler: add cond_yield macro

Will Deacon [Fri, 12 Feb 2021 14:53:19 +0000 (14:53 +0000)]
Merge branch 'for-next/cpufeature' into for-next/core

Support for overriding CPU ID register fields on the command-line, which
allows us to disable certain features which the kernel would otherwise
use unconditionally when detected.

* for-next/cpufeature: (22 commits)
  arm64: cpufeatures: Allow disabling of Pointer Auth from the command-line
  arm64: Defer enabling pointer authentication on boot core
  arm64: cpufeatures: Allow disabling of BTI from the command-line
  arm64: Move "nokaslr" over to the early cpufeature infrastructure
  KVM: arm64: Document HVC_VHE_RESTART stub hypercall
  arm64: Make kvm-arm.mode={nvhe, protected} an alias of id_aa64mmfr1.vh=0
  arm64: Add an aliasing facility for the idreg override
  arm64: Honor VHE being disabled from the command-line
  arm64: Allow ID_AA64MMFR1_EL1.VH to be overridden from the command line
  arm64: cpufeature: Add an early command-line cpufeature override facility
  arm64: Extract early FDT mapping from kaslr_early_init()
  arm64: cpufeature: Use IDreg override in __read_sysreg_by_encoding()
  arm64: cpufeature: Add global feature override facility
  arm64: Move SCTLR_EL1 initialisation to EL-agnostic code
  arm64: Simplify init_el2_state to be non-VHE only
  arm64: Move VHE-specific SPE setup to mutate_to_vhe()
  arm64: Drop early setting of MDSCR_EL2.TPMS
  arm64: Initialise as nVHE before switching to VHE
  arm64: Provide an 'upgrade to VHE' stub hypercall
  arm64: Turn the MMU-on sequence into a macro
  ...

Will Deacon [Fri, 12 Feb 2021 14:46:16 +0000 (14:46 +0000)]
Merge branch 'for-next/cosmetic' into for-next/core

Cosmetic changes to tidy up stale comments and fix inconsistent
whitespace. No functional changes here!

* for-next/cosmetic:
  mm/arm64: Correct obsolete comment in do_page_fault()
  arm64: improve whitespace

Qi Liu [Tue, 9 Feb 2021 09:42:22 +0000 (17:42 +0800)]
drivers/perf: Replace spin_lock_irqsave to spin_lock

There is no need to use spin_lock_irqsave() in the context of a hard IRQ,
so replace those calls with spin_lock().

Signed-off-by: Qi Liu <liuqi115@huawei.com>
Link: https://lore.kernel.org/r/1612863742-1551-1-git-send-email-liuqi115@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
Will Deacon [Wed, 10 Feb 2021 11:15:11 +0000 (11:15 +0000)]
mm: filemap: Fix microblaze build failure with 'mmu_defconfig'

Commit f9ce0be71d1f ("mm: Cleanup faultaround and finish_fault()
codepaths") added a call to 'update_mmu_cache()' in mm/filemap.c, which
breaks the build for microblaze:

  | mm/filemap.c: In function 'filemap_map_pages':
  | mm/filemap.c:3153:3: error: implicit declaration of function 'update_mmu_cache'; did you mean 'update_mmu_tlb'?

Include asm/tlbflush.h in mm/filemap.c to make sure that the function
(or indeed, macro) is available.
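
The fix itself is a one-line include (sketch):

    /* mm/filemap.c */
    #include <asm/tlbflush.h>       /* provides update_mmu_cache() */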

Reported-by: Guenter Roeck <linux@roeck-us.net>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Link: https://lore.kernel.org/r/20210209202449.GA104837@roeck-us.net
Signed-off-by: Will Deacon <will@kernel.org>
Nathan Chancellor [Tue, 9 Feb 2021 00:57:20 +0000 (17:57 -0700)]
arm64: Make CPU_BIG_ENDIAN depend on ld.bfd or ld.lld 13.0.0+

Similar to commit 28187dc8ebd9 ("ARM: 9025/1: Kconfig: CPU_BIG_ENDIAN
depends on !LD_IS_LLD"), ld.lld prior to 13.0.0 does not properly
support aarch64 big endian, leading to the following build error when
CONFIG_CPU_BIG_ENDIAN is selected:

ld.lld: error: unknown emulation: aarch64linuxb

This has been resolved in LLVM 13. To avoid errors like this, only allow
CONFIG_CPU_BIG_ENDIAN to be selected if using ld.bfd or ld.lld 13.0.0
and newer.

While we are here: the indentation of this symbol has used spaces since its
introduction in commit a872013d6d03 ("arm64: kconfig: allow
CPU_BIG_ENDIAN to be selected"). Change it to tabs to be consistent with
the kernel coding style.
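
The resulting dependency might look like this (sketch; LD_IS_LLD and
LLD_VERSION are the kbuild-provided symbols):

    config CPU_BIG_ENDIAN
            bool "Build big-endian kernel"
            depends on !LD_IS_LLD || LLD_VERSION >= 130000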

Link: https://github.com/ClangBuiltLinux/linux/issues/380
Link: https://github.com/ClangBuiltLinux/linux/issues/1288
Link: https://github.com/llvm/llvm-project/commit/7605a9a009b5fa3bdac07e3131c8d82f6d08feb7
Link: https://github.com/llvm/llvm-project/commit/eea34aae2e74e9b6fbdd5b95f479bc7f397bf387
Reported-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Link: https://lore.kernel.org/r/20210209005719.803608-1-nathan@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
Marc Zyngier [Mon, 8 Feb 2021 09:57:31 +0000 (09:57 +0000)]
arm64: cpufeatures: Allow disabling of Pointer Auth from the command-line

In order to be able to disable Pointer Authentication at runtime,
whether it is for testing purposes, or to work around HW issues,
let's add support for overriding the ID_AA64ISAR1_EL1.{GPI,GPA,API,APA}
fields.

This is further mapped onto the arm64.nopauth command-line alias.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Tested-by: Srinivas Ramana <sramana@codeaurora.org>
Link: https://lore.kernel.org/r/20210208095732.3267263-23-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
Srinivas Ramana [Mon, 8 Feb 2021 09:57:30 +0000 (09:57 +0000)]
arm64: Defer enabling pointer authentication on boot core

Defer enabling pointer authentication on the boot core until
it is required to be enabled by the cpufeature framework.
This will help in controlling the feature dynamically
with a boot parameter.

Signed-off-by: Ajay Patil <pajay@qti.qualcomm.com>
Signed-off-by: Prasad Sodagudi <psodagud@codeaurora.org>
Signed-off-by: Srinivas Ramana <sramana@codeaurora.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/1610152163-16554-2-git-send-email-sramana@codeaurora.org
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-22-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
Marc Zyngier [Mon, 8 Feb 2021 09:57:29 +0000 (09:57 +0000)]
arm64: cpufeatures: Allow disabling of BTI from the command-line

In order to be able to disable BTI at runtime, whether it is
for testing purposes, or to work around HW issues, let's add
support for overriding the ID_AA64PFR1_EL1.BTI field.

This is further mapped onto the arm64.nobti command-line alias.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Tested-by: Srinivas Ramana <sramana@codeaurora.org>
Link: https://lore.kernel.org/r/20210208095732.3267263-21-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
Marc Zyngier [Mon, 8 Feb 2021 09:57:28 +0000 (09:57 +0000)]
arm64: Move "nokaslr" over to the early cpufeature infrastructure

Given that the early cpufeature infrastructure has borrowed quite
a lot of code from the kaslr implementation, let's reimplement
the matching of the "nokaslr" option with it.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-20-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
Marc Zyngier [Mon, 8 Feb 2021 09:57:27 +0000 (09:57 +0000)]
KVM: arm64: Document HVC_VHE_RESTART stub hypercall

For completeness, let's document the HVC_VHE_RESTART stub.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: David Brazdil <dbrazdil@google.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-19-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
Marc Zyngier [Mon, 8 Feb 2021 09:57:26 +0000 (09:57 +0000)]
arm64: Make kvm-arm.mode={nvhe, protected} an alias of id_aa64mmfr1.vh=0

Admittedly, passing id_aa64mmfr1.vh=0 on the command-line isn't
that easy to understand, and it is likely that users would much
prefer to write "kvm-arm.mode=nvhe", or "...=protected".

So here you go. This has the added advantage that we can now
always honor the "kvm-arm.mode=protected" option, even when
booting on a VHE system.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-18-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
Marc Zyngier [Mon, 8 Feb 2021 09:57:25 +0000 (09:57 +0000)]
arm64: Add an aliasing facility for the idreg override

In order to map the override of idregs to options that a user
can easily understand, let's introduce yet another option
array, which maps an option to the corresponding idreg options.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-17-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
Marc Zyngier [Mon, 8 Feb 2021 09:57:24 +0000 (09:57 +0000)]
arm64: Honor VHE being disabled from the command-line

Finally we can check whether VHE is disabled on the command line,
and not enable it if that's the user's wish.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: David Brazdil <dbrazdil@google.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-16-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
Marc Zyngier [Mon, 8 Feb 2021 09:57:23 +0000 (09:57 +0000)]
arm64: Allow ID_AA64MMFR1_EL1.VH to be overridden from the command line

As we want to be able to disable VHE at runtime, let's match
"id_aa64mmfr1.vh=" from the command line as an override.
This doesn't have much effect yet as our boot code doesn't look
at the cpufeature, but only at the HW registers.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: David Brazdil <dbrazdil@google.com>
Acked-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-15-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
Marc Zyngier [Mon, 8 Feb 2021 09:57:22 +0000 (09:57 +0000)]
arm64: cpufeature: Add an early command-line cpufeature override facility

In order to be able to override CPU features at boot time,
let's add a command line parser that matches options of the
form "cpureg.feature=value", and store the corresponding
value into the override val/mask pair.

No features are currently defined, so no expected change in
functionality.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: David Brazdil <dbrazdil@google.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-14-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
Marc Zyngier [Mon, 8 Feb 2021 09:57:21 +0000 (09:57 +0000)]
arm64: Extract early FDT mapping from kaslr_early_init()

As we want to parse more options very early in the kernel lifetime,
let's always map the FDT early. This is achieved by moving that
code out of kaslr_early_init().

No functional change expected.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-13-maz@kernel.org
[will: Ensure KASAN is enabled before running C code]
Signed-off-by: Will Deacon <will@kernel.org>
Marc Zyngier [Mon, 8 Feb 2021 09:57:20 +0000 (09:57 +0000)]
arm64: cpufeature: Use IDreg override in __read_sysreg_by_encoding()

__read_sysreg_by_encoding() is used by a bunch of cpufeature helpers,
which should take the feature override into account. Let's do that.

For good measure (and because we are likely to need it further
down the line), make this helper available to the rest of the
non-modular kernel.

Code that needs to know the *real* features of a CPU can still
use read_sysreg_s(), and find the bare, ugly truth.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-12-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
Marc Zyngier [Mon, 8 Feb 2021 09:57:19 +0000 (09:57 +0000)]
arm64: cpufeature: Add global feature override facility

Add a facility to globally override a feature, no matter what
the HW says. Yes, this sounds dangerous, but we do respect the
"safe" value for a given feature. This doesn't mean the user
doesn't need to know what they are doing.

Nothing uses this yet, so we are pretty safe. For now.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-11-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
Marc Zyngier [Mon, 8 Feb 2021 09:57:18 +0000 (09:57 +0000)]
arm64: Move SCTLR_EL1 initialisation to EL-agnostic code

We can now move the initial SCTLR_EL1 setup to be used for both
EL1 and EL2 setup.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-10-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
Marc Zyngier [Mon, 8 Feb 2021 09:57:17 +0000 (09:57 +0000)]
arm64: Simplify init_el2_state to be non-VHE only

As init_el2_state is now nVHE only, let's simplify it and drop
the VHE setup.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: David Brazdil <dbrazdil@google.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-9-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
Marc Zyngier [Mon, 8 Feb 2021 09:57:16 +0000 (09:57 +0000)]
arm64: Move VHE-specific SPE setup to mutate_to_vhe()

There isn't much that a VHE kernel needs on top of whatever has
been done for nVHE, so let's move the little we need to the
VHE stub (the SPE setup), and drop the init_el2_state macro.

No expected functional change.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: David Brazdil <dbrazdil@google.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-8-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
Marc Zyngier [Mon, 8 Feb 2021 09:57:15 +0000 (09:57 +0000)]
arm64: Drop early setting of MDSCR_EL2.TPMS

When running VHE, we set MDSCR_EL2.TPMS very early on to force
the trapping of EL1 SPE accesses to EL2.

However:
- we are running with HCR_EL2.{E2H,TGE}={1,1}, meaning that there
  is no EL1 to trap from

- before entering a guest, we call kvm_arm_setup_debug(), which
  sets MDCR_EL2_TPMS in the per-vcpu shadow mdscr_el2, which gets
  applied on entry by __activate_traps_common().

The early setting of MDSCR_EL2.TPMS is therefore useless and can
be dropped.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210208095732.3267263-7-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
Marc Zyngier [Mon, 8 Feb 2021 09:57:14 +0000 (09:57 +0000)]
arm64: Initialise as nVHE before switching to VHE

As we are aiming to be able to control whether we enable VHE or
not, let's always drop down to EL1 first, and only then upgrade
to VHE if at all possible.

This means that if the kernel is booted at EL2, we always start
with an nVHE init, drop to EL1 to initialise the kernel, and
only then upgrade the kernel EL to EL2 if possible (the process
is obviously shortened for secondary CPUs).

The resume path is handled similarly to a secondary CPU boot.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: David Brazdil <dbrazdil@google.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-6-maz@kernel.org
[will: Avoid calling switch_to_vhe twice on kaslr path]
Signed-off-by: Will Deacon <will@kernel.org>
Mark Rutland [Tue, 2 Feb 2021 12:03:41 +0000 (12:03 +0000)]
arm64: entry: consolidate Cortex-A76 erratum 1463225 workaround

The workaround for Cortex-A76 erratum 1463225 is split across the
syscall and debug handlers in separate files. This structure currently
forces us to do some redundant work for debug exceptions from EL0, is a
little difficult to follow, and gets in the way of some future rework of
the exception entry code as it requires exceptions to be unmasked late
in the syscall handling path.

To simplify things, and as a preparatory step for future rework of
exception entry, this patch moves all the workaround logic into
entry-common.c. As the debug handler only needs to run for EL1 debug
exceptions, we no longer call it for EL0 debug exceptions, and no longer
need to check user_mode(regs) as this is always false. For clarity
cortex_a76_erratum_1463225_debug_handler() is changed to return bool.

In the SVC path, the workaround is applied earlier, but this should have
no functional impact as exceptions are still masked. In the debug path
we run the fixup before explicitly disabling preemption, but we will not
attempt to preempt before returning from the exception.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210202120341.28858-1-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Marc Zyngier [Mon, 8 Feb 2021 09:57:13 +0000 (09:57 +0000)]
arm64: Provide an 'upgrade to VHE' stub hypercall

As we are about to change the way a VHE system boots, let's
provide the core helper, in the form of a stub hypercall that
enables VHE and replicates the full EL1 context at EL2, thanks
to EL1 and VHE-EL2 being extremely similar.

On exception return, the kernel carries on at EL2. Fancy!

Nothing calls this new hypercall yet, so no functional change.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: David Brazdil <dbrazdil@google.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-5-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
Marc Zyngier [Mon, 8 Feb 2021 09:57:12 +0000 (09:57 +0000)]
arm64: Turn the MMU-on sequence into a macro

Turning the MMU on is a popular sport in the arm64 kernel, and
we do it more than once, or even twice. As we are about to add
even more, let's turn it into a macro.

No expected functional change.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-4-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
Marc Zyngier [Mon, 8 Feb 2021 09:57:11 +0000 (09:57 +0000)]
arm64: Fix outdated TCR setup comment

The arm64 kernel has long been able to use more than 39-bit VAs.
Since day one, actually. Let's rewrite the offending comment.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-3-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
Marc Zyngier [Mon, 8 Feb 2021 09:57:10 +0000 (09:57 +0000)]
arm64: Fix labels in el2_setup macros

If someone happens to write the following code:

b 1f
init_el2_state vhe
1:
[...]

they will be in for a long debugging session, as the label "1f"
will be resolved *inside* the init_el2_state macro instead of
after it. Not really what one expects.

Instead, rewrite the EL2 setup macros to use unambiguous labels,
thanks to the usual macro counter trick.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: David Brazdil <dbrazdil@google.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-2-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
Suzuki K Poulose [Wed, 3 Feb 2021 23:00:57 +0000 (23:00 +0000)]
arm64: Extend workaround for erratum 1024718 to all versions of Cortex-A55

Erratum 1024718 affects Cortex-A55 r0p0 to r2p0. However, we
only apply the workaround for r0p0 - r1p0. Unfortunately, this
won't be fixed in future revisions of the CPU. Thus, extend the
workaround to all versions of Cortex-A55, to cover r2p0 and any
future revisions.

Cc: stable@vger.kernel.org
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/20210203230057.3961239-1-suzuki.poulose@arm.com
[will: Update Kconfig help text]
Signed-off-by: Will Deacon <will@kernel.org>
Miaohe Lin [Fri, 5 Feb 2021 09:09:19 +0000 (04:09 -0500)]
mm/arm64: Correct obsolete comment in do_page_fault()

Commit d8ed45c5dcd4 ("mmap locking API: use coccinelle to convert mmap_sem
rwsem call sites") converted down_read_trylock() to mmap_read_trylock(),
but forgot to update the relevant comment.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Link: https://lore.kernel.org/r/20210205090919.63382-1-linmiaohe@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
Zhiyuan Dai [Thu, 4 Feb 2021 01:43:49 +0000 (09:43 +0800)]
arm64: improve whitespace

In a few places we don't have whitespace between macro parameters,
which makes them hard to read. This patch adds whitespace to clearly
separate the parameters.

In a few places we have unnecessary whitespace around unary operators,
which is confusing. This patch removes the unnecessary whitespace.

Signed-off-by: Zhiyuan Dai <daizhiyuan@phytium.com.cn>
Link: https://lore.kernel.org/r/1612403029-5011-1-git-send-email-daizhiyuan@phytium.com.cn
Signed-off-by: Will Deacon <will@kernel.org>
Ard Biesheuvel [Wed, 3 Feb 2021 11:36:18 +0000 (12:36 +0100)]
arm64: assembler: add cond_yield macro

Add a macro, cond_yield, that branches to a specified label when called if
the TIF_NEED_RESCHED flag is set and decreasing the preempt count would
make the task preemptible again, allowing a reschedule to occur. This
can be used by kernel mode SIMD code that keeps a lot of state in SIMD
registers, which would make chunking the input in order to perform the
cond_resched() check from C code disproportionately costly.
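
A hypothetical usage sketch (labels and the scratch register are
illustrative; the macro takes a yield target and a temp register):

    0:      // process one block of input; termination test elided
            cond_yield      3f, x8          // x8 is clobbered as a temp
            b       0b
    3:      // save state, yield the NEON unit, then resume at 0b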

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210203113626.220151-2-ardb@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
Joey Gouly [Tue, 2 Feb 2021 12:36:58 +0000 (12:36 +0000)]
arm64: vmlinux.ld.S: add assertion for tramp_pg_dir offset

Add TRAMP_SWAPPER_OFFSET and use that instead of hardcoding
the offset between swapper_pg_dir and tramp_pg_dir.

Then use TRAMP_SWAPPER_OFFSET to assert that the offset is
correct at link time.
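
A sketch of the resulting link-time check in
arch/arm64/kernel/vmlinux.ld.S (assertion message illustrative):

    ASSERT(swapper_pg_dir - tramp_pg_dir == TRAMP_SWAPPER_OFFSET,
           "TRAMP_SWAPPER_OFFSET is wrong!")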

Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210202123658.22308-3-joey.gouly@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Joey Gouly [Tue, 2 Feb 2021 12:36:57 +0000 (12:36 +0000)]
arm64: vmlinux.ld.S: add assertion for reserved_pg_dir offset

Add RESERVED_SWAPPER_OFFSET and use that instead of hardcoding
the offset between swapper_pg_dir and reserved_pg_dir.

Then use RESERVED_SWAPPER_OFFSET to assert that the offset is
correct at link time.

Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210202123658.22308-2-joey.gouly@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Seiya Wang [Wed, 3 Feb 2021 05:53:48 +0000 (13:53 +0800)]
dt-bindings: arm: add Cortex-A78 binding

Add compatible for Cortex-A78 PMU

Signed-off-by: Seiya Wang <seiya.wang@mediatek.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210203055348.4935-3-seiya.wang@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
Seiya Wang [Wed, 3 Feb 2021 05:53:47 +0000 (13:53 +0800)]
arm64: perf: add support for Cortex-A78

Add support for Cortex-A78 using generic PMUv3 for now.

Signed-off-by: Seiya Wang <seiya.wang@mediatek.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210203055348.4935-2-seiya.wang@mediatek.com
Signed-off-by: Will Deacon <will@kernel.org>
Hailong Liu [Tue, 2 Feb 2021 15:07:49 +0000 (23:07 +0800)]
arm64/ptdump:display the Linear Mapping start marker

The current /sys/kernel/debug/kernel_page_tables does not display the
*Linear Mapping start* marker on arm64, which I think should be paired
with the *Linear Mapping end* marker.

Since *Linear Mapping start* is the first marker, initialise 'level'
to -1 in order to display it.

Signed-off-by: Hailong Liu <liu.hailong6@zte.com.cn>
Link: https://lore.kernel.org/r/20210202150749.10104-1-liuhailongg6@163.com
Signed-off-by: Will Deacon <will@kernel.org>
Keno Fischer [Tue, 2 Feb 2021 00:21:09 +0000 (19:21 -0500)]
arm64: ptrace: Fix missing return in hw breakpoint code

When delivering a hw-breakpoint SIGTRAP to a compat task via ptrace, the
lack of a 'return' statement means we fallthrough to the native case,
which differs in its handling of 'si_errno'.

Although this looks to be harmless because the subsequent signal is
effectively ignored, it's confusing and unintentional, so add the
missing 'return'.
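
A sketch of the fix, assuming the compat branch sits in
ptrace_hbptriggered() (helper names abridged):

    #ifdef CONFIG_COMPAT
            if (is_compat_task()) {
                    /* ...derive si_errno from the breakpoint slot... */
                    arm64_force_sig_ptrace_errno_trap(si_errno, addr, desc);
                    return;         /* the missing return */
            }
    #endif
            arm64_force_sig_fault(SIGTRAP, TRAP_HWBKPT, addr, desc);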

Signed-off-by: Keno Fischer <keno@juliacomputing.com>
Link: https://lore.kernel.org/r/20210202002109.GA624440@juliacomputing.com
Signed-off-by: Will Deacon <will@kernel.org>
Rikard Falkeborn [Sun, 31 Jan 2021 14:36:15 +0000 (15:36 +0100)]
arm64: perf: Constify static attribute_group structs

The only usage of these is to put their addresses in an array of
pointers to const attribute_group structs. Make them const to allow the
compiler to put them in read-only memory.
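
The pattern, sketched with illustrative names:

    /* Referenced only through an array of const pointers... */
    static const struct attribute_group armv8_pmuv3_events_attr_group = {
            .name   = "events",
            .attrs  = armv8_pmuv3_event_attrs,
    };

    /* ...so the compiler may now place it in read-only memory. */
    static const struct attribute_group *armv8_pmuv3_attr_groups[] = {
            &armv8_pmuv3_events_attr_group,
            NULL,
    };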

Signed-off-by: Rikard Falkeborn <rikard.falkeborn@gmail.com>
Signed-off-by: Will Deacon <will@kernel.org>
Qi Liu [Tue, 2 Feb 2021 07:58:06 +0000 (15:58 +0800)]
drivers/perf: Prevent forced unbinding of ARM_DMC620_PMU drivers

Set "suppress_bind_attrs" to true, so that bind/unbind can be
disabled via sysfs and prevent unbinding ARM_DMC620_PMU drivers
during perf sampling.
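
A sketch of the change, assuming the usual platform_driver layout (the
probe/remove callback names are illustrative):

    static struct platform_driver dmc620_pmu_driver = {
            .driver = {
                    .name                   = "arm-dmc620-pmu",
                    .suppress_bind_attrs    = true, /* no sysfs (un)bind */
            },
            .probe  = dmc620_pmu_device_probe,
            .remove = dmc620_pmu_device_remove,
    };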

Signed-off-by: Qi Liu <liuqi115@huawei.com>
Link: https://lore.kernel.org/r/1612252686-50329-1-git-send-email-liuqi115@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
Pavel Tatashin [Mon, 1 Feb 2021 15:03:06 +0000 (10:03 -0500)]
arm64: hibernate: add __force attribute to gfp_t casting

Two new warnings are reported by sparse:

"sparse warnings: (new ones prefixed by >>)"
>> arch/arm64/kernel/hibernate.c:181:39: sparse: sparse: cast to
   restricted gfp_t
>> arch/arm64/kernel/hibernate.c:202:44: sparse: sparse: cast from
   restricted gfp_t

gfp_t has the __bitwise type attribute and requires __force to be added
to the casts in order to avoid these warnings.
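
Illustrative only (not the exact hibernate.c lines): __force marks a
deliberate cast across a __bitwise type so that sparse stays quiet:

    /* cast to the restricted type */
    gfp_t gfp_mask = (__force gfp_t)(unsigned long)opaque;
    /* cast away from the restricted type */
    void *cookie = (void *)(__force unsigned long)GFP_ATOMIC;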

Fixes: 50f53fb72181 ("arm64: trans_pgd: make trans_pgd_map_page generic")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Link: https://lore.kernel.org/r/20210201150306.54099-2-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
Robin Murphy [Thu, 28 Jan 2021 13:12:44 +0000 (13:12 +0000)]
perf/arm-cmn: Move IRQs when migrating context

If we migrate the PMU context to another CPU, we need to remember to
retarget the IRQs as well.
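
A sketch of the shape of the fix in the hotplug migration path (field
names follow the driver, but treat the details as indicative):

    perf_pmu_migrate_context(&cmn->pmu, cpu, target);
    for (i = 0; i < cmn->num_dtcs; i++)
            irq_set_affinity_hint(cmn->dtc[i].irq, cpumask_of(target));
    cmn->cpu = target;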

Fixes: 0ba64770a2f2 ("perf: Add Arm CMN-600 PMU driver")
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/e080640aea4ed8dfa870b8549dfb31221803eb6b.1611839564.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Robin Murphy [Thu, 28 Jan 2021 13:12:43 +0000 (13:12 +0000)]
perf/arm-cmn: Fix PMU instance naming

Although it's neat to avoid the suffix for the typical case of a
single PMU, it means systems with multiple CMN instances end up with
inconsistent naming. I think it also breaks perf tool's "uncore alias"
logic if the common instance prefix is also the full name of one.

Avoid any surprises by not trying to be clever and simply numbering
every instance, even when it might technically prove redundant.

Fixes: 0ba64770a2f2 ("perf: Add Arm CMN-600 PMU driver")
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/649a2281233f193d59240b13ed91b57337c77b32.1611839564.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Quentin Perret [Thu, 28 Jan 2021 17:38:50 +0000 (17:38 +0000)]
KVM: arm64: Move __hyp_set_vectors out of .hyp.text

The .hyp.text section is supposed to be reserved for the nVHE EL2 code.
However, there is currently one occurrence of EL1 executing code located
in .hyp.text when calling __hyp_{re}set_vectors(), which happen to sit
next to the EL2 stub vectors. While not a problem yet, such patterns
will cause issues when removing the host kernel from the TCB, so a
cleaner split would be preferable.

Fix this by delimiting the end of the .hyp.text section in hyp-stub.S.

Acked-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Link: https://lore.kernel.org/r/20210128173850.2478161-1-qperret@google.com
Signed-off-by: Will Deacon <will@kernel.org>
Geert Uytterhoeven [Thu, 28 Jan 2021 10:06:26 +0000 (11:06 +0100)]
mm/nommu: Fix return type of filemap_map_pages()

If CONFIG_MMU is not set (e.g. m68k/m5272c3_defconfig):

    mm/nommu.c:1671:6: error: conflicting types for 'filemap_map_pages'
     1671 | void filemap_map_pages(struct vm_fault *vmf,
          |      ^~~~~~~~~~~~~~~~~
    In file included from mm/nommu.c:20:
    ./include/linux/mm.h:2578:19: note: previous declaration of 'filemap_map_pages' was here
     2578 | extern vm_fault_t filemap_map_pages(struct vm_fault *vmf,
          |                   ^~~~~~~~~~~~~~~~~

The signature of filemap_map_pages() was changed, but the nommu
implementation wasn't updated.
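
A sketch of the updated nommu stub, matching the declaration in
<linux/mm.h>:

    vm_fault_t filemap_map_pages(struct vm_fault *vmf,
                    pgoff_t start_pgoff, pgoff_t end_pgoff)
    {
            BUG();
            return 0;
    }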

Reported-by: noreply@ellerman.id.au
Fixes: f9ce0be71d1f ("mm: Cleanup faultaround and finish_fault() codepaths")
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Link: https://lore.kernel.org/r/20210128100626.2257638-1-geert@linux-m68k.org
Signed-off-by: Will Deacon <will@kernel.org>
Pavel Tatashin [Mon, 25 Jan 2021 19:19:17 +0000 (14:19 -0500)]
arm64: kexec: arm64_relocate_new_kernel don't use x0 as temp

x0 will contain the only argument to arm64_relocate_new_kernel; don't
use it as a temp. Reassign registers to free up x0 so we won't need
to copy the argument, and can use it at the beginning and at the end of
the function.

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20210125191923.1060122-13-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
Pavel Tatashin [Mon, 25 Jan 2021 19:19:16 +0000 (14:19 -0500)]
arm64: kexec: arm64_relocate_new_kernel clean-ups and optimizations

In preparation for bigger changes to arm64_relocate_new_kernel that would
enable this function to do an MMU-backed memory copy, do a few clean-ups
and optimizations. These include:

1. Call raw_dcache_line_size() only when relocation is actually going to
   happen; e.g. a kdump-type kexec does not need it.

2. copy_page(dest, src, tmps...) increments dest and src by PAGE_SIZE, so
   there is no need to store dest prior to calling copy_page and increment
   it after. Also, src is not used after a copy, so there is no need to
   copy it either.

3. For consistency, put a comment on the same line as an instruction when
   it describes the instruction itself.

4. Some comment corrections.

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Link: https://lore.kernel.org/r/20210125191923.1060122-12-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
Pavel Tatashin [Mon, 25 Jan 2021 19:19:15 +0000 (14:19 -0500)]
arm64: kexec: call kexec_image_info only once

Currently, kexec_image_info() is called both at load time and
right before the kernel is kexec'ed. There is no need to do both.
So, call it only once, when the segments are loaded and the physical
location of the page with the copy of arm64_relocate_new_kernel is known.

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Acked-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20210125191923.1060122-11-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
Pavel Tatashin [Mon, 25 Jan 2021 19:19:14 +0000 (14:19 -0500)]
arm64: kexec: move relocation function setup

Currently, the kernel relocation function is configured in machine_kexec()
at the time of kexec reboot, using control_code_page.

This operation, however, is more logically done during kexec load, and
thus removed from reboot time. Move the setup of this function to the
newly added machine_kexec_post_load().

Because, once the MMU is enabled, the kexec control page will contain
not only the relocation code but also the vector table, add a pointer to
the actual function within this page: arch.kern_reloc. Currently it
points to the beginning of the page; we will add offsets later, when the
vector table is added.
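
A sketch of the load-time hook described above (cache maintenance and
error handling omitted):

    int machine_kexec_post_load(struct kimage *kimage)
    {
            void *reloc_code = page_to_virt(kimage->control_code_page);

            memcpy(reloc_code, arm64_relocate_new_kernel,
                   arm64_relocate_new_kernel_size);
            /* For now this is the start of the page; offsets arrive
             * with the vector table later. */
            kimage->arch.kern_reloc = __pa(reloc_code);

            return 0;
    }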

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20210125191923.1060122-10-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
James Morse [Mon, 25 Jan 2021 19:19:13 +0000 (14:19 -0500)]
arm64: trans_pgd: hibernate: idmap the single page that holds the copy page routines

To resume from hibernate, the contents of memory are restored from
the swap image. This may overwrite any page, including the running
kernel and its page tables.

Hibernate copies the code it uses to do the restore into a single
page that it knows won't be overwritten, and maps it with page tables
built from pages that won't be overwritten.

Today the address it uses for this mapping is arbitrary, but to allow
kexec to reuse this code, it needs to be idmapped. To idmap the page
we must avoid the kernel helpers that have VA_BITS baked in.

Convert create_single_mapping() to take a single PA, and idmap it.
The page tables are built in the reverse order to normal using
pfn_pte() to stir in any bits between 52:48. T0SZ is always increased
to cover 48 bits, or 52 if the copy code has bits 52:48 in its PA.

Signed-off-by: James Morse <james.morse@arm.com>
[Adopted the original patch from James to trans_pgd interface, so it can be
commonly used by both Kexec and Hibernate. Some minor clean-ups.]

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Link: https://lore.kernel.org/linux-arm-kernel/20200115143322.214247-4-james.morse@arm.com/
Link: https://lore.kernel.org/r/20210125191923.1060122-9-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
James Morse [Mon, 25 Jan 2021 19:19:12 +0000 (14:19 -0500)]
arm64: mm: Always update TCR_EL1 from __cpu_set_tcr_t0sz()

Because only the idmap sets a non-standard T0SZ, __cpu_set_tcr_t0sz()
can check for platforms that need to do this using
__cpu_uses_extended_idmap() before doing its work.

The idmap is only built with enough levels, (and T0SZ bits) to map
its single page.

To allow hibernate, and then kexec to idmap their single page copy
routines, __cpu_set_tcr_t0sz() needs to consider additional users,
who may need a different number of levels/T0SZ-bits to the idmap.
(i.e. VA_BITS may be enough for the idmap, but not hibernate/kexec)

Always read TCR_EL1, and check whether any work needs doing for
this request. __cpu_uses_extended_idmap() remains as it is used
by KVM, whose idmap is also part of the kernel image.

This mostly affects the cpuidle path, where we now get an extra
system register read.
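
A sketch of the reworked helper (t0sz arrives pre-shifted via TCR_T0SZ();
details may differ):

    static inline void __cpu_set_tcr_t0sz(unsigned long t0sz)
    {
            unsigned long tcr = read_sysreg(tcr_el1);

            if ((tcr & TCR_T0SZ_MASK) == t0sz)
                    return;         /* nothing to do for this request */

            tcr &= ~TCR_T0SZ_MASK;
            tcr |= t0sz;
            write_sysreg(tcr, tcr_el1);
            isb();
    }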

CC: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
CC: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Link: https://lore.kernel.org/r/20210125191923.1060122-8-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
Pavel Tatashin [Mon, 25 Jan 2021 19:19:11 +0000 (14:19 -0500)]
arm64: trans_pgd: pass NULL instead of init_mm to *_populate functions

trans_pgd_* should be independent of mm context because the tables that
are created by this code are used when there is no mm context around, as
it is between kernels. Simply replace the init_mm references with NULL.

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Acked-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20210125191923.1060122-7-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
Pavel Tatashin [Mon, 25 Jan 2021 19:19:10 +0000 (14:19 -0500)]
arm64: trans_pgd: pass allocator trans_pgd_create_copy

Make trans_pgd_create_copy() and its subroutines use an allocator that is
passed as an argument.

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20210125191923.1060122-6-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
Pavel Tatashin [Mon, 25 Jan 2021 19:19:09 +0000 (14:19 -0500)]
arm64: trans_pgd: make trans_pgd_map_page generic

kexec is going to use a different allocator, so make
trans_pgd_map_page() accept the allocator as an argument; kexec
is also going to use a different map protection, so pass that
via an argument too.
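
A sketch of the generalized interface as it takes shape across this
series (treat the exact prototypes as indicative):

    struct trans_pgd_info {
            /* page allocator for the table walk; returns zeroed pages */
            void *(*trans_alloc_page)(void *arg);
            /* opaque argument handed back to trans_alloc_page() */
            void *trans_alloc_arg;
    };

    int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd,
                           void *page, unsigned long dst_addr,
                           pgprot_t pgprot);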

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: Matthias Brugger <mbrugger@suse.com>
Link: https://lore.kernel.org/r/20210125191923.1060122-5-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
Pavel Tatashin [Mon, 25 Jan 2021 19:19:08 +0000 (14:19 -0500)]
arm64: hibernate: move page handling function to new trans_pgd.c

Now that we have abstracted the required functions, move them to a new
home. Later, we will generalize these functions so that they are useful
outside of hibernation.

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20210125191923.1060122-4-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
Pavel Tatashin [Mon, 25 Jan 2021 19:19:07 +0000 (14:19 -0500)]
arm64: hibernate: variable pudp is used instead of pd4dp

p4dp should be used when the p4d page is allocated.
This is not a functional issue, but it should be fixed for logical
correctness.

Fixes: e9f6376858b9 ("arm64: add support for folded p4d page tables")
Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Link: https://lore.kernel.org/r/20210125191923.1060122-3-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
Pavel Tatashin [Mon, 25 Jan 2021 19:19:06 +0000 (14:19 -0500)]
arm64: kexec: make dtb_mem always enabled

Currently, dtb_mem is enabled only when CONFIG_KEXEC_FILE is
enabled. This adds ugly ifdefs to C files.

Always enable dtb_mem; when it is not used, it is NULL.
Change dtb_mem to phys_addr_t, as it is a physical address.

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20210125191923.1060122-2-pasha.tatashin@soleen.com
Signed-off-by: Will Deacon <will@kernel.org>
Will Deacon [Wed, 27 Jan 2021 12:52:16 +0000 (12:52 +0000)]
arm64: Include linux/io.h in mm/mmap.c

Commit 507d664450f8 ("arm64: mm: Remove unused header file") removed
a bunch of apparently "unused" header inclusions from our mm/mmap.c
implementation, but in doing so introduced the following warning when
building with W=1:

>> arch/arm64/mm/mmap.c:17:5: warning: no previous prototype for 'valid_phys_addr_range' [-Wmissing-prototypes]
      17 | int valid_phys_addr_range(phys_addr_t addr, size_t size)
         |     ^~~~~~~~~~~~~~~~~~~~~
>> arch/arm64/mm/mmap.c:36:5: warning: no previous prototype for 'valid_mmap_phys_addr_range' [-Wmissing-prototypes]
      36 | int valid_mmap_phys_addr_range(unsigned long pfn, size_t size)
         |     ^~~~~~~~~~~~~~~~~~~~~~~~~~

Add back the linux/io.h header inclusion to pull in the missing
prototypes.
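
The fix itself is one line, since <linux/io.h> carries the declarations
of both valid_phys_addr_range() and valid_mmap_phys_addr_range():

  #include <linux/io.h>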

Reported-by: kernel test robot <lkp@intel.com>
Link: https://lore.kernel.org/r/202101271438.V9TmBC31-lkp@intel.com
Signed-off-by: Will Deacon <will@kernel.org>
3 years agoarm64: cacheflush: Remove stale comment
Shaokun Zhang [Mon, 25 Jan 2021 11:55:53 +0000 (19:55 +0800)]
arm64: cacheflush: Remove stale comment

Remove a comment that has been stale since commit a7ba121215fa ("arm64: use asm-generic/cacheflush.h").

Cc: Christoph Hellwig <hch@lst.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Link: https://lore.kernel.org/r/1611575753-36435-1-git-send-email-zhangshaokun@hisilicon.com
Signed-off-by: Will Deacon <will@kernel.org>
3 years agoarm64: mm: Remove unused header file
Shaokun Zhang [Tue, 26 Jan 2021 12:24:44 +0000 (20:24 +0800)]
arm64: mm: Remove unused header file

Many of the included header files are never used, so remove them.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Link: https://lore.kernel.org/r/1611663884-43329-1-git-send-email-zhangshaokun@hisilicon.com
Signed-off-by: Will Deacon <will@kernel.org>
3 years agoarm64/sparsemem: reduce SECTION_SIZE_BITS
Sudarshan Rajagopalan [Thu, 21 Jan 2021 05:29:13 +0000 (21:29 -0800)]
arm64/sparsemem: reduce SECTION_SIZE_BITS

memory_block_size_bytes() determines the memory hotplug granularity, i.e. the
amount of memory which can be hot-added or hot-removed from the kernel. The
generic value is MIN_MEMORY_BLOCK_SIZE (1UL << SECTION_SIZE_BITS) on
platforms, such as arm64, that do not override memory_block_size_bytes().

The current SECTION_SIZE_BITS is 30, i.e. 1GB, which is large; reducing it
increases the memory hotplug granularity, thus improving its agility. A
reduced section size also reduces memory wastage in the vmemmap mapping for
sections with large memory holes. So we try to set the smallest section size
possible.

The section size bits selection must satisfy:
(MAX_ORDER - 1 + PAGE_SHIFT) <= SECTION_SIZE_BITS

CONFIG_FORCE_MAX_ZONEORDER is always defined on arm64 and so just following it
would help achieve the smallest section size.

SECTION_SIZE_BITS = (CONFIG_FORCE_MAX_ZONEORDER - 1 + PAGE_SHIFT)

SECTION_SIZE_BITS = 22 (11 - 1 + 12) i.e 4MB   for 4K pages
SECTION_SIZE_BITS = 24 (11 - 1 + 14) i.e 16MB  for 16K pages without THP
SECTION_SIZE_BITS = 25 (12 - 1 + 14) i.e 32MB  for 16K pages with THP
SECTION_SIZE_BITS = 26 (11 - 1 + 16) i.e 64MB  for 64K pages without THP
SECTION_SIZE_BITS = 29 (14 - 1 + 16) i.e 512MB for 64K pages with THP

But there are other problems with reducing SECTION_SIZE_BITS. Reducing it by
too much would over-populate /sys/devices/system/memory/ and also consume too
many page->flags bits in the !vmemmap case. The section size also needs to be
a multiple of 128MB to allow PMD-based vmemmap mappings with
CONFIG_ARM64_4K_PAGES.

Given these constraints, let's just reduce the section size to 128MB for the
4K and 16K base page size configs, and to 512MB for the 64K base page size
config.
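
A sketch of the resulting <asm/sparsemem.h> definition (values follow
the constraints above):

  #ifdef CONFIG_ARM64_64K_PAGES
  #define SECTION_SIZE_BITS 29    /* 512MB sections */
  #else
  #define SECTION_SIZE_BITS 27    /* 128MB sections for 4K and 16K pages */
  #endif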

Signed-off-by: Sudarshan Rajagopalan <sudaraja@codeaurora.org>
Suggested-by: Anshuman Khandual <anshuman.khandual@arm.com>
Suggested-by: David Hildenbrand <david@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Steven Price <steven.price@arm.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/43843c5e092bfe3ec4c41e3c8c78a7ee35b69bb0.1611206601.git.sudaraja@codeaurora.org
Signed-off-by: Will Deacon <will@kernel.org>
3 years agoarm64: Add support for SMCCC TRNG entropy source
Andre Przywara [Wed, 6 Jan 2021 10:34:52 +0000 (10:34 +0000)]
arm64: Add support for SMCCC TRNG entropy source

The ARM architected TRNG firmware interface, described in ARM spec
DEN0098, defines an ARM SMCCC based interface to a true random number
generator, provided by firmware.
This can be discovered via the SMCCC >=v1.1 interface, and provides
up to 192 bits of entropy per call.

Hook this SMC call into arm64's arch_get_random_*() implementation,
coming to the rescue when the CPU does not implement the ARM v8.5 RNG
system registers.

For the detection, we piggyback on the PSCI/SMCCC discovery (which gives
us the conduit to use, hvc or smc), then try to call the
ARM_SMCCC_TRNG_VERSION function, which returns -1 if this interface is
not implemented.
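
A sketch of the resulting seed path (trimmed; the helper and variable
names here are assumptions based on this series):

  static inline bool __must_check arch_get_random_seed_long(unsigned long *v)
  {
          struct arm_smccc_res res;

          /* Prefer the v8.5 RNG system registers when present. */
          if (cpus_have_const_cap(ARM64_HAS_RNG))
                  return __arm64_rndr(v);        /* assumed RNDR helper */

          /* Otherwise fall back to the firmware TRNG, if probed. */
          if (smccc_trng_available) {
                  arm_smccc_1_1_invoke(ARM_SMCCC_TRNG_RND64, 64, &res);
                  if ((int)res.a0 >= 0) {
                          *v = res.a3;           /* 64 entropy bits */
                          return true;
                  }
          }

          return false;
  }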

Reviewed-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
3 years agofirmware: smccc: Introduce SMCCC TRNG framework
Andre Przywara [Wed, 6 Jan 2021 10:34:50 +0000 (10:34 +0000)]
firmware: smccc: Introduce SMCCC TRNG framework

The ARM DEN0098 document describes an SMCCC based firmware service to
deliver hardware generated random numbers. Its existence is advertised
according to the SMCCC v1.1 specification.

Add a (dummy) call to probe functions implemented in each architecture
(ARM and arm64) to determine the existence of this interface.
For now these return false, but they will be overridden by each
architecture's support patch.
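
The per-architecture stub is trivial for now (sketch; the probe name
follows this series and is replaced by a real ARM_SMCCC_TRNG_VERSION
query in later patches):

  bool __init smccc_probe_trng(void)
  {
          return false;
  }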

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
3 years agorandom: avoid arch_get_random_seed_long() when collecting IRQ randomness
Ard Biesheuvel [Thu, 5 Nov 2020 15:29:44 +0000 (16:29 +0100)]
random: avoid arch_get_random_seed_long() when collecting IRQ randomness

When reseeding the CRNG periodically, arch_get_random_seed_long() is
called to obtain entropy from an architecture specific source if one
is implemented. In most cases, these are special instructions, but in
some cases, such as on ARM, we may want to back this using firmware
calls, which are considerably more expensive.

Another call to arch_get_random_seed_long() exists in the CRNG driver,
in add_interrupt_randomness(), which collects entropy by capturing
inter-interrupt timing and relying on interrupt jitter to provide
random bits. This is done by keeping a per-CPU state, and mixing in
the IRQ number, the cycle counter and the return address every time an
interrupt is taken, and mixing this per-CPU state into the entropy pool
every 64 invocations, or at least once per second. The entropy that is
gathered this way is credited as 1 bit of entropy. Every time this
happens, arch_get_random_seed_long() is invoked, and the result is
mixed in as well, and also credited with 1 bit of entropy.

This means that arch_get_random_seed_long() is called at least once
per second on every CPU, which seems excessive, and doesn't really
scale, especially in a virtualization scenario where CPUs may be
oversubscribed: in cases where arch_get_random_seed_long() is backed
by an instruction that actually goes back to a shared hardware entropy
source (such as RNDRRS on ARM), we will end up hitting it hundreds of
times per second.

So let's drop the call to arch_get_random_seed_long() from
add_interrupt_randomness(), and instead, rely on crng_reseed() to call
the arch hook to get random seed material from the platform.
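
For orientation, a sketch of the seeding loop that remains in
crng_reseed() (shape of the driver at the time; local names
illustrative):

  for (i = 0; i < 8; i++) {
          unsigned long rv;

          /* Arch seed first, plain arch RNG next, cycle counter last. */
          if (!arch_get_random_seed_long(&rv) &&
              !arch_get_random_long(&rv))
                  rv = random_get_entropy();
          buf.key[i] ^= rv;
  }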

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Tested-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Jason A. Donenfeld <Jason@zx2c4.com>
Link: https://lore.kernel.org/r/20201105152944.16953-1-ardb@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
3 years agomm: Mark anonymous struct field of 'struct vm_fault' as 'const'
Will Deacon [Thu, 14 Jan 2021 15:44:09 +0000 (15:44 +0000)]
mm: Mark anonymous struct field of 'struct vm_fault' as 'const'

The fields of this struct are only ever read after being initialised, so
mark it 'const' before somebody tries to modify it again. GCC will then
complain (with an error) about modification of these fields after they
have been initialised, although LLVM currently allows them without even
a warning:

https://bugs.llvm.org/show_bug.cgi?id=48755

Hopefully, future versions of LLVM will emit a warning.
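
Sketched against the struct as it looks after this series (field list
trimmed), the change is essentially:

  struct vm_fault {
          const struct {
                  struct vm_area_struct *vma;     /* target VMA */
                  gfp_t gfp_mask;                 /* allocation flags */
                  pgoff_t pgoff;                  /* logical page offset */
                  unsigned long address;          /* faulting address */
          };
          unsigned int flags;                     /* still mutable */
          /* ... */
  };

after which an assignment such as "vmf->address = addr;" is a hard
error with GCC.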

Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Will Deacon <will@kernel.org>
3 years agomm: Use static initialisers for immutable fields of 'struct vm_fault'
Will Deacon [Thu, 14 Jan 2021 15:42:14 +0000 (15:42 +0000)]
mm: Use static initialisers for immutable fields of 'struct vm_fault'

In preparation for const-ifying the anonymous struct field of
'struct vm_fault', ensure that it is initialised using designated
initialisers.
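
An indicative conversion (local names illustrative):

  struct vm_fault vmf = {
          .vma      = vma,
          .address  = address & PAGE_MASK,
          .flags    = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE,
          .pgoff    = linear_page_index(vma, address),
          .gfp_mask = GFP_KERNEL,
  };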

Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
3 years agomm: Avoid modifying vmf.address in __collapse_huge_page_swapin()
Will Deacon [Thu, 14 Jan 2021 15:33:49 +0000 (15:33 +0000)]
mm: Avoid modifying vmf.address in __collapse_huge_page_swapin()

In preparation for const-ifying the anonymous struct field of
'struct vm_fault', rework __collapse_huge_page_swapin() to avoid
continuously updating vmf.address and instead populate a new
'struct vm_fault' on the stack for each page being processed.
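
The reworked loop then looks roughly like this (trimmed; details
indicative):

  for (address = haddr; address < haddr + HPAGE_PMD_NR * PAGE_SIZE;
       address += PAGE_SIZE) {
          struct vm_fault vmf = {
                  .vma     = vma,
                  .address = address,
                  .pgoff   = linear_page_index(vma, haddr),
                  .flags   = FAULT_FLAG_ALLOW_RETRY,
                  .pmd     = pmd,
          };

          /* ... swap the page in via do_swap_page(&vmf) ... */
  }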

Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Will Deacon <will@kernel.org>
3 years agomm: Pass 'address' to map to do_set_pte() and drop FAULT_FLAG_PREFAULT
Will Deacon [Thu, 14 Jan 2021 15:24:19 +0000 (15:24 +0000)]
mm: Pass 'address' to map to do_set_pte() and drop FAULT_FLAG_PREFAULT

Rather than modifying the 'address' field of the 'struct vm_fault'
passed to do_set_pte(), leave that to identify the real faulting address
and pass in the virtual address to be mapped by the new pte as a
separate argument.

This makes FAULT_FLAG_PREFAULT redundant, as a prefault entry can be
identified simply by comparing the new address parameter with the
faulting address, so remove the redundant flag at the same time.
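
A sketch of the new shape (trimmed to the parts relevant here):

  void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr)
  {
          /* A prefault is simply an address other than the faulting one. */
          bool prefault = vmf->address != addr;
          pte_t entry = mk_pte(page, vmf->vma->vm_page_prot);

          if (prefault && arch_wants_old_prefaulted_pte())
                  entry = pte_mkold(entry);

          set_pte_at(vmf->vma->vm_mm, addr, vmf->pte, entry);
  }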

Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Will Deacon <will@kernel.org>
3 years agomm: Move immutable fields of 'struct vm_fault' into anonymous struct
Will Deacon [Wed, 20 Jan 2021 14:34:23 +0000 (14:34 +0000)]
mm: Move immutable fields of 'struct vm_fault' into anonymous struct

'struct vm_fault' contains information about the fault being serviced
alongside mutable fields contributing to the state of the fault-handling
logic. Unfortunately, the distinction between the two is not clear-cut,
and a number of callers end up manipulating the structure temporarily
before restoring it when returning.

Try to clean this up by moving the immutable fault information into an
anonymous struct, which will later be marked as 'const'. Ideally, the
'flags' field would be part of the new structure too, but it seems as
though the ->page_mkwrite() path is not ready for this yet.
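
The save-and-restore pattern this targets looks roughly like the
following (illustrative; handle_sub_fault() is a hypothetical callee):

  unsigned long saved = vmf->address;

  vmf->address = next_addr;       /* temporary edit ...            */
  ret = handle_sub_fault(vmf);
  vmf->address = saved;           /* ... restored before returning */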

Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/r/CAHk-=whYs9XsO88iqJzN6NC=D-dp2m0oYXuOoZ=eWnvv=5OA+w@mail.gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
3 years agoperf: Constify static struct attribute_group
Rikard Falkeborn [Sun, 17 Jan 2021 21:28:47 +0000 (22:28 +0100)]
perf: Constify static struct attribute_group

The only usage is to put their addresses in an array of pointers to
const struct attribute_group. Make them const to allow the compiler
to put them in read-only memory.
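
Indicatively (names illustrative):

  static const struct attribute_group pmu_events_group = {
          .name  = "events",
          .attrs = pmu_events_attrs,      /* assumed attribute array */
  };

  /* The array of pointers already expects const groups: */
  static const struct attribute_group *pmu_attr_groups[] = {
          &pmu_events_group,
          NULL,
  };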

Signed-off-by: Rikard Falkeborn <rikard.falkeborn@gmail.com>
Link: https://lore.kernel.org/r/20210117212847.21319-5-rikard.falkeborn@gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
3 years agoperf: hisi: Constify static struct attribute_group
Rikard Falkeborn [Sun, 17 Jan 2021 21:28:46 +0000 (22:28 +0100)]
perf: hisi: Constify static struct attribute_group

The only usage is to put their addresses in an array of pointers to
const struct attribute_group. Make them const to allow the compiler
to put them in read-only memory.

Signed-off-by: Rikard Falkeborn <rikard.falkeborn@gmail.com>
Link: https://lore.kernel.org/r/20210117212847.21319-4-rikard.falkeborn@gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
3 years agoperf/imx_ddr: Constify static struct attribute_group
Rikard Falkeborn [Sun, 17 Jan 2021 21:28:45 +0000 (22:28 +0100)]
perf/imx_ddr: Constify static struct attribute_group

The only usage is to put their addresses in an array of pointers to
const struct attribute_group. Make them const to allow the compiler
to put them in read-only memory.

Signed-off-by: Rikard Falkeborn <rikard.falkeborn@gmail.com>
Link: https://lore.kernel.org/r/20210117212847.21319-3-rikard.falkeborn@gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
3 years agoperf: qcom: Constify static struct attribute_group
Rikard Falkeborn [Sun, 17 Jan 2021 21:28:44 +0000 (22:28 +0100)]
perf: qcom: Constify static struct attribute_group

The only usage is to put their addresses in an array of pointers to
const struct attribute_group. Make them const to allow the compiler
to put them in read-only memory.

Signed-off-by: Rikard Falkeborn <rikard.falkeborn@gmail.com>
Link: https://lore.kernel.org/r/20210117212847.21319-2-rikard.falkeborn@gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
3 years agodrivers/perf: Add support for ARMv8.3-SPE
Wei Li [Thu, 3 Dec 2020 14:16:09 +0000 (22:16 +0800)]
drivers/perf: Add support for ARMv8.3-SPE

Armv8.3 extends SPE by adding:
- Alignment field in the Events packet, and filtering on this event
  using PMSEVFR_EL1.
- Support for the Scalable Vector Extension (SVE).

The main additions for SVE are:
- Recording the vector length for SVE operations in the Operation Type
  packet. It is not possible to filter on vector length.
- Incomplete predicate and empty predicate fields in the Events packet,
  and filtering on these events using PMSEVFR_EL1.

Update the pmsevfr checks in the SPE driver for the empty/partial
predicated SVE and alignment events.
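
One way to express the per-version check is a RES0 mask selected by
the SPE version (sketch; the macro names are assumptions):

  static u64 arm_spe_pmsevfr_res0(u16 pmsver)
  {
          switch (pmsver) {
          case ID_AA64DFR0_PMSVER_8_2:
                  return SYS_PMSEVFR_EL1_RES0_8_2;
          case ID_AA64DFR0_PMSVER_8_3:
          default:
                  /* v8.3 clears the bits for the new events. */
                  return SYS_PMSEVFR_EL1_RES0_8_3;
          }
  }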

Signed-off-by: Wei Li <liwei391@huawei.com>
Link: https://lore.kernel.org/r/20201203141609.14148-1-liwei391@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
3 years agoarm64: mm: Implement arch_wants_old_prefaulted_pte()
Will Deacon [Tue, 24 Nov 2020 18:49:26 +0000 (18:49 +0000)]
arm64: mm: Implement arch_wants_old_prefaulted_pte()

On CPUs with hardware AF/DBM, initialising prefaulted PTEs as 'old'
improves vmscan behaviour and does not appear to introduce any overhead
elsewhere.

Implement arch_wants_old_prefaulted_pte() to return 'true' if we detect
hardware access flag support at runtime. This can be extended in future
based on MIDR matching if necessary.
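
On arm64 the hook can collapse to the existing hardware-AF predicate
(sketch):

  /*
   * Prefaulted PTEs may start 'old': hardware AF updates will mark
   * them young on first access at negligible cost.
   */
  #define arch_wants_old_prefaulted_pte   cpu_has_hw_af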

Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
3 years agomm: Allow architectures to request 'old' entries when prefaulting
Will Deacon [Tue, 24 Nov 2020 18:48:26 +0000 (18:48 +0000)]
mm: Allow architectures to request 'old' entries when prefaulting

Commit 5c0a85fad949 ("mm: make faultaround produce old ptes") changed
the "faultaround" behaviour to initialise prefaulted PTEs as 'old',
since this avoids vmscan wrongly assuming that they are hot, despite
having never been explicitly accessed by userspace. The change has been
shown to benefit numerous arm64 micro-architectures (with hardware
access flag) running Android, where both application launch latency and
direct reclaim time are significantly reduced (by 10%+ and ~80%
respectively).

Unfortunately, commit 315d09bf30c2 ("Revert "mm: make faultaround
produce old ptes"") reverted the change due to it being identified as
the cause of a ~6% regression in unixbench on x86. Experiments on a
variety of recent arm64 micro-architectures indicate that unixbench is
not affected by the original commit, which appears to yield a 0-1%
performance improvement.

Since one size does not fit all for the initial state of prefaulted
PTEs, introduce arch_wants_old_prefaulted_pte(), which allows an
architecture to opt-in to 'old' prefaulted PTEs at runtime based on
whatever criteria it may have.
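
The generic fallback preserves the existing behaviour for everyone
else (sketch):

  #ifndef arch_wants_old_prefaulted_pte
  static inline bool arch_wants_old_prefaulted_pte(void)
  {
          return false;   /* default: prefaulted PTEs remain young */
  }
  #endif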

Cc: Jan Kara <jack@suse.cz>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Reported-by: Vinayak Menon <vinmenon@codeaurora.org>
Signed-off-by: Will Deacon <will@kernel.org>
3 years agomm: Cleanup faultaround and finish_fault() codepaths
Kirill A. Shutemov [Sat, 19 Dec 2020 12:19:23 +0000 (15:19 +0300)]
mm: Cleanup faultaround and finish_fault() codepaths

alloc_set_pte() has two users with different requirements: in the
faultaround code, it is called from an atomic context and the PTE page
table has to be preallocated. finish_fault(), on the other hand, can
sleep and allocate the page table as needed.

PTL locking rules are also strange, hard to follow and overkill for
finish_fault().

Let's untangle the mess. alloc_set_pte() is gone now. All locking is
explicit.

The price is some code duplication to handle huge pages in the
faultaround path, but it should be fine given the overall improvement
in readability.
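
A trimmed sketch of the post-cleanup finish_fault() (error handling
and the anonymous/COW setup elided; indicative only):

  vm_fault_t finish_fault(struct vm_fault *vmf)
  {
          struct vm_area_struct *vma = vmf->vma;
          struct page *page = (vmf->flags & FAULT_FLAG_WRITE) ?
                              vmf->cow_page : vmf->page;
          vm_fault_t ret = 0;

          /* This path may sleep, so the PTE table can be allocated here. */
          if (pmd_none(*vmf->pmd) && pte_alloc(vma->vm_mm, vmf->pmd))
                  return VM_FAULT_OOM;

          vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
                                         vmf->address, &vmf->ptl);
          if (likely(pte_none(*vmf->pte)))
                  do_set_pte(vmf, page, vmf->address);
          else
                  ret = VM_FAULT_NOPAGE;

          pte_unmap_unlock(vmf->pte, vmf->ptl);
          return ret;
  }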

Link: https://lore.kernel.org/r/20201229132819.najtavneutnf7ajp@box
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
[will: s/from from/from/ in comment; spotted by willy]
Signed-off-by: Will Deacon <will@kernel.org>
3 years agoarm64: topology: Make AMUs work with modular cpufreq drivers
Viresh Kumar [Fri, 8 Jan 2021 11:16:53 +0000 (16:46 +0530)]
arm64: topology: Make AMUs work with modular cpufreq drivers

The AMU counters won't get used today if the cpufreq driver is built as
a module, as the AMU core requires everything to be ready by late init.

Fix that properly by registering a cpufreq policy notifier. Note that
the AMU core doesn't have any cpufreq dependency after the first time
the CPUFREQ_CREATE_POLICY notifier has been called for all the CPUs, so
we don't need to do anything on the CPUFREQ_REMOVE_POLICY notifier. For
the same reason, we check at the beginning of amu_fie_setup() whether
the CPUs have already been parsed and skip if they have. Alternatively,
we could queue a work item from there to unregister the notifier, but
that seemed excessive compared to this simple check.

While at it, convert the print message to pr_debug instead of pr_info.
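
A sketch of the notifier plumbing (indicative; amu_fie_setup() is the
per-policy setup routine described above):

  static int init_amu_fie_callback(struct notifier_block *nb,
                                   unsigned long val, void *data)
  {
          struct cpufreq_policy *policy = data;

          if (val == CPUFREQ_CREATE_POLICY)
                  amu_fie_setup(policy->related_cpus);

          /* Nothing to do on CPUFREQ_REMOVE_POLICY (see above). */
          return 0;
  }

  static struct notifier_block init_amu_fie_notifier = {
          .notifier_call = init_amu_fie_callback,
  };

  static int __init init_amu_fie(void)
  {
          return cpufreq_register_notifier(&init_amu_fie_notifier,
                                           CPUFREQ_POLICY_NOTIFIER);
  }
  core_initcall(init_amu_fie);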

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Ionela Voinescu <ionela.voinescu@arm.com>
Tested-by: Ionela Voinescu <ionela.voinescu@arm.com>
Link: https://lore.kernel.org/r/89c1921334443e133c9c8791b4693607d65ed9f5.1610104461.git.viresh.kumar@linaro.org
Signed-off-by: Will Deacon <will@kernel.org>
3 years agoarm64: topology: Reorder init_amu_fie() a bit
Viresh Kumar [Fri, 8 Jan 2021 11:16:52 +0000 (16:46 +0530)]
arm64: topology: Reorder init_amu_fie() a bit

This patch does a couple of optimizations in init_amu_fie(): taking
early exits from paths where we don't need to continue any further,
avoiding the enable/disable dance, and moving the calls to
topology_scale_freq_invariant() to just where we need them, instead of
at the top of the routine, which also avoids calling it a third time.

Reviewed-by: Ionela Voinescu <ionela.voinescu@arm.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Tested-by: Ionela Voinescu <ionela.voinescu@arm.com>
Link: https://lore.kernel.org/r/a732e71ab9ec28c354eb28dd898c9b47d490863f.1610104461.git.viresh.kumar@linaro.org
Signed-off-by: Will Deacon <will@kernel.org>
3 years agoarm64: topology: Avoid the have_policy check
Viresh Kumar [Fri, 8 Jan 2021 11:16:51 +0000 (16:46 +0530)]
arm64: topology: Avoid the have_policy check

Every time I have stumbled upon this routine, I get confused by the
way 'have_policy' is used and have to dig in to understand why it is
so. Here is an attempt to make it easier to understand, and hopefully
it is an improvement.

The 'have_policy' check was just an optimization to avoid writing to
amu_fie_cpus when we don't have to, but that optimization itself
creates more confusion than it is worth. Let's just do the write when
all the CPUs support AMUs. It is much cleaner that way.

Reviewed-by: Ionela Voinescu <ionela.voinescu@arm.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Tested-by: Ionela Voinescu <ionela.voinescu@arm.com>
Link: https://lore.kernel.org/r/c125766c4be93461772015ac7c9a6ae45d5756f6.1610104461.git.viresh.kumar@linaro.org
Signed-off-by: Will Deacon <will@kernel.org>
3 years agoarm64: remove EL0 exception frame record
Mark Rutland [Wed, 13 Jan 2021 17:31:55 +0000 (17:31 +0000)]
arm64: remove EL0 exception frame record

When entering an exception from EL0, the entry code creates a synthetic
frame record with a NULL PC. This was used by the code introduced in
commit:

  7326749801396105 ("arm64: unwind: reference pt_regs via embedded stack frame")

... to discover exception entries on the stack and dump the associated
pt_regs. Since the NULL PC was undesirable for the stacktrace, we added
a special case to unwind_frame() to prevent the NULL PC from being
logged.

Since commit:

  a25ffd3a6302a678 ("arm64: traps: Don't print stack or raw PC/LR values in backtraces")

... we no longer try to dump the pt_regs as part of a stacktrace, and
hence no longer need the synthetic exception record.

This patch removes the synthetic exception record and the associated
special case in unwind_frame(). Instead, EL0 exceptions set the FP to
NULL, as is the case for other terminal records (e.g. when a kernel
thread starts). The synthetic record for exceptions from EL1 is
retained, as this has useful unwind information for the interrupted
context.

To make the terminal case a bit clearer, an explicit check is added to
the start of unwind_frame(). This would otherwise be caught implicitly
by the on_accessible_stack() checks.
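
A sketch of the explicit check (surrounding logic trimmed):

  static int notrace unwind_frame(struct task_struct *tsk,
                                  struct stackframe *frame)
  {
          unsigned long fp = frame->fp;

          /* Terminal record; nothing to unwind. */
          if (!fp)
                  return -ENOENT;

          /* ... on_accessible_stack() checks and the unwind proper ... */
          return 0;
  }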

Reported-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210113173155.43063-1-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
3 years agoarm64: mte: style: Simplify bool comparison
YANG LI [Mon, 11 Jan 2021 09:35:37 +0000 (17:35 +0800)]
arm64: mte: style: Simplify bool comparison

Fix the following coccicheck warning:
./tools/testing/selftests/arm64/mte/check_buffer_fill.c:84:12-35:
WARNING: Comparison to bool
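
The fix is the usual pattern for this class of warning (field name as
in the selftest; treat as illustrative):

  /* Before, flagged by coccicheck: */
  if (cur_mte_cxt.fault_valid == false)
          return KSFT_FAIL;

  /* After: */
  if (!cur_mte_cxt.fault_valid)
          return KSFT_FAIL;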

Signed-off-by: YANG LI <abaci-bugfix@linux.alibaba.com>
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Link: https://lore.kernel.org/r/1610357737-68678-1-git-send-email-abaci-bugfix@linux.alibaba.com
Signed-off-by: Will Deacon <will@kernel.org>
3 years agofirmware: smccc: Add SMCCC TRNG function call IDs
Ard Biesheuvel [Wed, 6 Jan 2021 10:34:49 +0000 (10:34 +0000)]
firmware: smccc: Add SMCCC TRNG function call IDs

The ARM architected TRNG firmware interface, described in ARM spec
DEN0098, defines an ARM SMCCC based interface to a true random number
generator, provided by firmware.

Add the definitions of the SMCCC functions as defined by the spec.
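
For illustration, the version and 64-bit read calls take the following
shape (the IDs sit in the SMCCC standard service range per DEN0098;
the values shown are indicative):

  #define ARM_SMCCC_TRNG_VERSION                                  \
          ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,                 \
                             ARM_SMCCC_SMC_32,                    \
                             ARM_SMCCC_OWNER_STANDARD, 0x50)

  #define ARM_SMCCC_TRNG_RND64                                    \
          ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,                 \
                             ARM_SMCCC_SMC_64,                    \
                             ARM_SMCCC_OWNER_STANDARD, 0x53)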

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
Link: https://lore.kernel.org/r/20210106103453.152275-2-andre.przywara@arm.com
Signed-off-by: Will Deacon <will@kernel.org>