s390/ctl_reg: make __ctl_load a full memory barrier
author    Heiko Carstens <heiko.carstens@de.ibm.com>
          Wed, 28 Dec 2016 10:33:48 +0000 (11:33 +0100)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
          Wed, 5 Jul 2017 12:40:26 +0000 (14:40 +0200)
commit    0e9867b7113c56b367f2e753cd411cf7cef0d2ec
tree      b0a67b38e4ef9cd65428dac0a092712a841c9306
parent    9d00195bc0afa0252b9cdb157eb4ed1e13631bc6
s390/ctl_reg: make __ctl_load a full memory barrier

[ Upstream commit e991c24d68b8c0ba297eeb7af80b1e398e98c33f ]

We have quite a lot of code that depends on the order of the
__ctl_load inline assembly and subsequent memory accesses, e.g.
disabling lowcore protection and then writing to the lowcore.

Since the __ctl_load macro has neither memory barrier semantics nor
any other dependencies, the compiler is, in theory, free to reorder
the surrounding code. In other words: a store to the lowcore could
happen before lowcore protection is disabled.
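
To illustrate, here is a minimal, hypothetical sketch of such a call
site (LOWCORE_PROT_BIT and the lowcore field written are made up for
illustration; the real call sites live under arch/s390):

	unsigned long cr0;

	__ctl_store(cr0, 0, 0);		/* read control register 0 */
	cr0 &= ~LOWCORE_PROT_BIT;	/* hypothetical mask: clear the
					   lowcore protection bit */
	__ctl_load(cr0, 0, 0);		/* write back: protection off */

	/*
	 * Without barrier semantics on __ctl_load, the compiler may
	 * hoist this store above the __ctl_load, i.e. write to the
	 * lowcore while it is still write protected.
	 */
	S390_lowcore.program_new_psw = new_psw;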

In order to avoid this class of potential bugs, simply add a full
memory barrier to the __ctl_load macro.
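
A sketch of the patched macro (abbreviated from
arch/s390/include/asm/ctl_reg.h; this assumes the barrier is realized
as a "memory" clobber, the conventional way to give GCC inline
assembly full compiler-barrier semantics):

	/*
	 * The "memory" clobber tells the compiler the asm may read or
	 * write arbitrary memory, so no memory access may be reordered
	 * across the lctlg instruction.
	 */
	#define __ctl_load(array, low, high) {				\
		typedef struct { char _[sizeof(array)]; } addrtype;	\
									\
		BUILD_BUG_ON(sizeof(addrtype) !=			\
			     (high - low + 1) * sizeof(long));		\
		asm volatile(						\
			"	lctlg	%1,%2,%0\n"			\
			:						\
			: "Q" (*(addrtype *)(&array)),			\
			  "i" (low), "i" (high)				\
			: "memory");					\
	}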

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
arch/s390/include/asm/ctl_reg.h