From: Palmer Dabbelt
Date: Thu, 16 Jul 2020 19:38:20 +0000 (-0700)
Subject: powerpc/64: Fix an out of date comment about MMIO ordering
X-Git-Tag: v5.10.7~1910^2~175
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=147c13413c04bc6a2bd76f2503402905e5e98cff;p=platform%2Fkernel%2Flinux-rpi.git

powerpc/64: Fix an out of date comment about MMIO ordering

This primitive has been renamed, but because it was spelled incorrectly
in the first place it must have escaped the fixup patch. As far as I can
tell this logic is still correct: smp_mb__after_spinlock() uses the
default smp_mb() implementation, which is "sync" rather than "hwsync",
but those are the same instruction (though I'm not that familiar with
PowerPC).

Signed-off-by: Palmer Dabbelt
Signed-off-by: Michael Ellerman
Link: https://lore.kernel.org/r/20200716193820.1141936-1-palmer@dabbelt.com
---

diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index da85c2511e57..2547c5dac07a 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -354,7 +354,7 @@ _GLOBAL(_switch)
 	 * kernel/sched/core.c).
 	 *
 	 * Uncacheable stores in the case of involuntary preemption must
-	 * be taken care of. The smp_mb__before_spin_lock() in __schedule()
+	 * be taken care of. The smp_mb__after_spinlock() in __schedule()
 	 * is implemented as hwsync on powerpc, which orders MMIO too. So
 	 * long as there is an hwsync in the context switch path, it will
 	 * be executed on the source CPU after the task has performed
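
As an aside for readers following the barrier argument above, a minimal
C sketch of the relationship the commit message describes might look
like the following. This is a simplified illustration, not a verbatim
copy of the kernel's headers; only the parts relevant to the comment fix
are shown, and the definitions are stand-ins for the real ones in the
powerpc barrier and spinlock headers.

    /*
     * Illustrative sketch (simplified, not verbatim kernel source) of
     * why the renamed primitive still provides the ordering the fixed
     * comment relies on, assuming a powerpc build.
     */

    /* On powerpc, smp_mb() emits the "sync" instruction. "hwsync" is
     * an extended mnemonic for that same instruction, which is why a
     * full barrier here orders MMIO as well. */
    #define smp_mb()	__asm__ __volatile__ ("sync" : : : "memory")

    /* Per the commit message, smp_mb__after_spinlock() uses the
     * default smp_mb() implementation on powerpc; this is the barrier
     * __schedule() executes in the context-switch path. */
    #define smp_mb__after_spinlock()	smp_mb()

In other words, after the rename from the misspelled
smp_mb__before_spin_lock() to smp_mb__after_spinlock(), the primitive
still boils down to sync/hwsync on powerpc, so the ordering argument in
the comment is unchanged and only the name needed fixing.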