powerpc/64s: Disable preemption in hash lazy mmu mode
author    Nicholas Piggin <npiggin@gmail.com>
          Thu, 13 Oct 2022 15:16:45 +0000 (01:16 +1000)
committer Michael Ellerman <mpe@ellerman.id.au>
          Tue, 18 Oct 2022 11:46:18 +0000 (22:46 +1100)
apply_to_page_range on kernel pages does not disable preemption, but
hash's lazy mmu mode requires it, because the mode tracks the TLB
entries to flush in a per-cpu array.

Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20221013151647.1857994-1-npiggin@gmail.com
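
The hazard being closed can be sketched as follows (illustrative only,
not part of the patch): with preemption left enabled, the task can
migrate to another CPU between entering and leaving lazy mmu mode, so
the per-cpu batch flushed at leave time is not the one that accumulated
the entries.

    arch_enter_lazy_mmu_mode();   /* batch = this_cpu_ptr(&ppc64_tlb_batch) on CPU A */
    /* ... page table updates queue hash PTE invalidations into CPU A's batch ... */
    /* preemption point: the task may migrate to CPU B here */
    arch_leave_lazy_mmu_mode();   /* flushes CPU B's batch; CPU A's entries go stale */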
diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h b/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
index fab8332..751921f 100644
--- a/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
+++ b/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
@@ -32,6 +32,11 @@ static inline void arch_enter_lazy_mmu_mode(void)
 
        if (radix_enabled())
                return;
+       /*
+        * apply_to_page_range can call us with preemption enabled when
+        * operating on kernel page tables.
+        */
+       preempt_disable();
        batch = this_cpu_ptr(&ppc64_tlb_batch);
        batch->active = 1;
 }
@@ -47,6 +52,7 @@ static inline void arch_leave_lazy_mmu_mode(void)
        if (batch->index)
                __flush_tlb_pending(batch);
        batch->active = 0;
+       preempt_enable();
 }
 
 #define arch_flush_lazy_mmu_mode()      do {} while (0)
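
For context, the triggering path looks roughly like the sketch below
(set_pte_cb, addr and err are hypothetical names; apply_to_page_range()
and init_mm are the existing kernel interfaces). apply_to_page_range()
enters and leaves lazy mmu mode around the per-PTE callback without
disabling preemption itself, which is why the mode must now do so.

    /* hypothetical per-PTE callback, invoked inside lazy mmu mode */
    static int set_pte_cb(pte_t *ptep, unsigned long addr, void *data)
    {
            return 0;
    }

    /* operating on kernel page tables (init_mm) hits the fixed path */
    err = apply_to_page_range(&init_mm, addr, PAGE_SIZE, set_pte_cb, NULL);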