arm64: mm: Fix TLBI vs ASID rollover
author Will Deacon <will@kernel.org>
Fri, 6 Aug 2021 11:31:04 +0000 (12:31 +0100)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sat, 18 Sep 2021 11:40:08 +0000 (13:40 +0200)
commit 2d3a9dff763faa5b0182f3ea4e3f81d25bd1423c
tree 045cd5fba19991825ec75dc19c0a0bb00a0775ff
parent 01e6c64bbc5d3553421cf217afef1ae47744edda
arm64: mm: Fix TLBI vs ASID rollover

commit 5e10f9887ed85d4f59266d5c60dd09be96b5dbd4 upstream.

When switching to an 'mm_struct' for the first time following an ASID
rollover, a new ASID may be allocated and assigned to 'mm->context.id'.
This reassignment can happen concurrently with other operations on the
mm, such as unmapping pages and subsequently issuing TLB invalidation.
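
For illustration, a simplified sketch (not a verbatim excerpt of the
kernel sources) of the pre-fix pattern in the TLBI helpers that is
exposed to this race: the ASID is taken from 'mm->context.id' with a
plain read, and it is sampled before the barrier that publishes the
page-table update.

  /* Sketch of the old ASID() macro: a plain, non-atomic read. */
  #define ASID(mm)        ((mm)->context.id.counter & 0xffff)

  static inline void flush_tlb_page_nosync(struct vm_area_struct *vma,
                                           unsigned long uaddr)
  {
          /* ASID sampled early, before the DSB below. */
          unsigned long addr = __TLBI_VADDR(uaddr, ASID(vma->vm_mm));

          dsb(ishst);                  /* publish the PTE update */
          __tlbi(vale1is, addr);       /* may target a stale ASID */
          __tlbi_user(vale1is, addr);  /* after a concurrent rollover */
  }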

Consequently, we need to ensure that (a) accesses to 'mm->context.id'
are atomic and (b) all page-table updates made prior to a TLBI using the
old ASID are guaranteed to be visible to CPUs running with the new ASID.
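
As a minimal sketch of what (a) and (b) amount to in the two headers
touched below (simplified, not the literal patch): ASID() goes through
atomic64_read(), and the TLBI helpers sample the ASID only after the
DSB that orders the preceding page-table updates.

  /* (a) asm/mmu.h: access mm->context.id atomically. */
  #define ASID(mm)        (atomic64_read(&(mm)->context.id) & 0xffff)

  /* (b) asm/tlbflush.h: read the ASID only after the DSB. */
  static inline void flush_tlb_page_nosync(struct vm_area_struct *vma,
                                           unsigned long uaddr)
  {
          unsigned long addr;

          dsb(ishst);   /* order the PTE update before the ASID read */
          addr = __TLBI_VADDR(uaddr, ASID(vma->vm_mm));
          __tlbi(vale1is, addr);
          __tlbi_user(vale1is, addr);
  }

Reading the ASID after the barrier orders the page-table update before
the ASID read and the subsequent TLBI, which is the ordering that
requirement (b) describes.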

This was found by inspection after reviewing the VMID changes from
Shameer, but it looks like a real (yet hard-to-hit) bug.

Cc: <stable@vger.kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Jade Alglave <jade.alglave@arm.com>
Cc: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Signed-off-by: Will Deacon <will@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210806113109.2475-2-will@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
arch/arm64/include/asm/mmu.h
arch/arm64/include/asm/tlbflush.h