x86/mm/tlb: Revert the recent lazy TLB patches
author     Peter Zijlstra <peterz@infradead.org>
Wed, 22 Aug 2018 15:30:13 +0000 (17:30 +0200)
committer  Linus Torvalds <torvalds@linux-foundation.org>
Thu, 23 Aug 2018 01:22:04 +0000 (18:22 -0700)
commit     52a288c736669851f166544d4a0b93e1090d7e9b
tree       c65ef4f76102052b4ffe0ec40bec83578ea71c2f
parent     815f0ddb346c196018d4d8f8f55c12b83da1de3f
x86/mm/tlb: Revert the recent lazy TLB patches

Revert commits:

  95b0e6357d3e x86/mm/tlb: Always use lazy TLB mode
  64482aafe55f x86/mm/tlb: Only send page table free TLB flush to lazy TLB CPUs
  ac0315896970 x86/mm/tlb: Make lazy TLB mode lazier
  61d0beb5796a x86/mm/tlb: Restructure switch_mm_irqs_off()
  2ff6ddf19c0e x86/mm/tlb: Leave lazy TLB mode at page table free time

This is done in order to simplify the TLB invalidation fixes for x86 and
to unify the parts that need backporting.  We'll try again later.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Rik van Riel <riel@surriel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
arch/x86/include/asm/tlbflush.h
arch/x86/mm/tlb.c
include/asm-generic/tlb.h
mm/memory.c
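
For reference, a stacked revert like this one can usually be reproduced on a
local tree with plain git; the invocation below is only a sketch (the branch
state and the exact options are assumptions, not a record of how this commit
was actually created):

  # Revert the five lazy TLB commits, newest first, then record a single
  # signed-off commit; run from a checkout containing parent 815f0ddb346c.
  git revert --no-commit 95b0e6357d3e 64482aafe55f ac0315896970 \
                         61d0beb5796a 2ff6ddf19c0e
  git commit -s -m "x86/mm/tlb: Revert the recent lazy TLB patches"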