powerpc/non-smp: Unconditionally call smp_mb() on switch_mm
author Christophe Leroy <christophe.leroy@csgroup.eu>
Mon, 5 Jul 2021 12:00:50 +0000 (12:00 +0000)
committer Michael Ellerman <mpe@ellerman.id.au>
Tue, 10 Aug 2021 13:14:55 +0000 (23:14 +1000)
Commit 3ccfebedd8cf ("powerpc, membarrier: Skip memory barrier in
switch_mm()") added some logic to skip the smp_mb() in
switch_mm_irqs_off() before the call to switch_mmu_context().
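
For context, a simplified sketch of that helper as it stands before this
patch (reconstructed from the hunk at the end of this message, comments
abbreviated, not the verbatim kernel source):

static inline void membarrier_arch_switch_mm(struct mm_struct *prev,
					     struct mm_struct *next,
					     struct task_struct *tsk)
{
	/* Skip the full barrier unless an expedited membarrier mode is active. */
	if (likely(!(atomic_read(&next->membarrier_state) &
		     (MEMBARRIER_STATE_PRIVATE_EXPEDITED |
		      MEMBARRIER_STATE_GLOBAL_EXPEDITED)) || !prev))
		return;

	/* Full barrier needed after storing to rq->curr, before returning to user-space. */
	smp_mb();
}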

However, on non-SMP builds smp_mb() is just a compiler barrier, and doing
it unconditionally is simpler than the logic used to check whether the
barrier is needed or not.
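
On a non-SMP configuration the barrier boils down to the following
(simplified sketch of the asm-generic/powerpc definitions, not the
verbatim source):

/* Pure compiler barrier: no instruction is emitted. */
#define barrier()	__asm__ __volatile__("" : : : "memory")

#ifdef CONFIG_SMP
#define smp_mb()	__asm__ __volatile__("sync" : : : "memory")	/* powerpc full barrier */
#else
#define smp_mb()	barrier()
#endif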

After the patch:

00000000 <switch_mm_irqs_off>:
...
   c: 7c 04 18 40  cmplw   r4,r3
  10: 81 24 00 24  lwz     r9,36(r4)
  14: 91 25 04 c8  stw     r9,1224(r5)
  18: 4d 82 00 20  beqlr
  1c: 48 00 00 00  b       1c <switch_mm_irqs_off+0x1c>
1c: R_PPC_REL24 switch_mmu_context

Before the patch:

00000000 <switch_mm_irqs_off>:
...
   c: 7c 04 18 40  cmplw   r4,r3
  10: 81 24 00 24  lwz     r9,36(r4)
  14: 91 25 04 c8  stw     r9,1224(r5)
  18: 4d 82 00 20  beqlr
  1c: 81 24 00 28  lwz     r9,40(r4)
  20: 71 29 00 0a  andi.   r9,r9,10
  24: 40 82 00 34  bne     58 <switch_mm_irqs_off+0x58>
  28: 48 00 00 00  b       28 <switch_mm_irqs_off+0x28>
28: R_PPC_REL24 switch_mmu_context
...
  58: 2c 03 00 00  cmpwi   r3,0
  5c: 41 82 ff cc  beq     28 <switch_mm_irqs_off+0x28>
  60: 48 00 00 00  b       60 <switch_mm_irqs_off+0x60>
60: R_PPC_REL24 switch_mmu_context

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/e9d501da0c59f60ca767b1b3ea4603fce6d02b9e.1625486440.git.christophe.leroy@csgroup.eu
arch/powerpc/include/asm/membarrier.h

index 6e20bb5..de7f791 100644
@@ -12,7 +12,8 @@ static inline void membarrier_arch_switch_mm(struct mm_struct *prev,
         * when switching from userspace to kernel is not needed after
         * store to rq->curr.
         */
-       if (likely(!(atomic_read(&next->membarrier_state) &
+       if (IS_ENABLED(CONFIG_SMP) &&
+           likely(!(atomic_read(&next->membarrier_state) &
                     (MEMBARRIER_STATE_PRIVATE_EXPEDITED |
                      MEMBARRIER_STATE_GLOBAL_EXPEDITED)) || !prev))
                return;
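
With CONFIG_SMP=n, IS_ENABLED(CONFIG_SMP) evaluates to 0, so the whole
condition is constant-false and the compiler discards it; the helper then
effectively reduces to (illustrative sketch, not the verbatim source):

static inline void membarrier_arch_switch_mm(struct mm_struct *prev,
					     struct mm_struct *next,
					     struct task_struct *tsk)
{
	smp_mb();	/* barrier() on non-SMP: no instruction generated */
}

which matches the shorter switch_mm_irqs_off disassembly above, where the
inlined membarrier_state check has disappeared.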