mm: proc: Invalidate TLB after clearing soft-dirty page state
author	Will Deacon <will@kernel.org>
	Wed, 27 Jan 2021 23:53:42 +0000 (23:53 +0000)
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
	Thu, 4 Mar 2021 10:37:45 +0000 (11:37 +0100)
commit	61a1f0ad45de5541fae1e78260c81d4e7e307a1c
tree	3239c6318cda0a16bc5ce4a3a1b8f0e4af874946
parent	e3fcff9f45aa82dacad26e5828598340d2742f47
mm: proc: Invalidate TLB after clearing soft-dirty page state

[ Upstream commit 912efa17e5121693dfbadae29768f4144a3f9e62 ]

Since commit 0758cd830494 ("asm-generic/tlb: avoid potential double
flush"), TLB invalidation is elided in tlb_finish_mmu() if no entries
were batched via the tlb_remove_*() functions. Consequently, the
page-table modifications performed by clear_refs_write() in response to
a write to /proc/<pid>/clear_refs do not perform TLB invalidation.
Although this is fine when simply aging the ptes, in the case of
clearing the "soft-dirty" state we can end up with entries where
pte_write() is false, yet a writable mapping remains in the TLB.
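
For context, the user-visible interface works as described in
Documentation/admin-guide/mm/soft-dirty.rst: writing "4" to
/proc/<pid>/clear_refs write-protects the ptes and clears their
soft-dirty bits, and bit 55 of each /proc/<pid>/pagemap entry reports
the current state. An illustrative userspace sketch (not part of this
patch; assumes a kernel built with CONFIG_MEM_SOFT_DIRTY):

	#include <fcntl.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <sys/types.h>
	#include <unistd.h>

	static int page_soft_dirty(void *addr)
	{
		uint64_t entry = 0;
		off_t off = ((uintptr_t)addr / getpagesize()) * sizeof(entry);
		int fd = open("/proc/self/pagemap", O_RDONLY);

		if (fd >= 0) {
			pread(fd, &entry, sizeof(entry), off);
			close(fd);
		}
		return (entry >> 55) & 1;	/* bit 55 is soft-dirty */
	}

	int main(void)
	{
		static char page[4096] __attribute__((aligned(4096)));
		int fd = open("/proc/self/clear_refs", O_WRONLY);

		page[0] = 1;			/* write: sets soft-dirty */
		write(fd, "4", 1);		/* "4" clears soft-dirty bits */
		close(fd);
		printf("after clear: %d\n", page_soft_dirty(page));

		page[0] = 2;			/* should fault and re-set the bit */
		printf("after write: %d\n", page_soft_dirty(page));
		return 0;
	}

On an affected kernel, the second write can slip through a stale
writable TLB entry without faulting, so the page may still report
clean: exactly the lost-dirty-page scenario this patch addresses.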

Fix this by avoiding the mmu_gather API altogether: manage the
'tlb_flush_pending' flag on the 'mm_struct' directly and perform
explicit TLB invalidation on the soft-dirty path, much like mprotect()
does already.
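
Concretely, the shape of the fix in clear_refs_write() is roughly the
following sketch (abridged; the surrounding CLEAR_REFS_SOFT_DIRTY
setup and error handling are omitted):

	if (type == CLEAR_REFS_SOFT_DIRTY) {
		mmu_notifier_range_init(&range, MMU_NOTIFY_SOFT_DIRTY, 0,
					NULL, mm, 0, -1UL);
		mmu_notifier_invalidate_range_start(&range);
		/* Tell concurrent page-table walkers a flush is pending. */
		inc_tlb_flush_pending(mm);
	}

	walk_page_range(mm, 0, mm->highest_vm_end, &clear_refs_walk_ops, &cp);

	if (type == CLEAR_REFS_SOFT_DIRTY) {
		mmu_notifier_invalidate_range_end(&range);
		/* Invalidate unconditionally instead of via mmu_gather. */
		flush_tlb_mm(mm);
		dec_tlb_flush_pending(mm);
	}

Raising 'tlb_flush_pending' before the walk mirrors mprotect(): code
that inspects the ptes concurrently (via mm_tlb_flush_pending()) then
knows the TLB may still hold stale entries until flush_tlb_mm()
completes.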

Fixes: 0758cd830494 ("asm-generic/tlb: avoid potential double flush")
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Yu Zhao <yuzhao@google.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lkml.kernel.org/r/20210127235347.1402-2-will@kernel.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
fs/proc/task_mmu.c