From 95f5d10f99b4968007373501f2b47eb923abf861 Mon Sep 17 00:00:00 2001
From: Juergen Gross
Date: Mon, 1 Oct 2018 07:57:42 +0200
Subject: [PATCH] xen: fix race in xen_qlock_wait()

commit 2ac2a7d4d9ff4e01e36f9c3d116582f6f655ab47 upstream.

In the following situation a vcpu waiting for a lock might not be
woken up from xen_poll_irq():

CPU 1:                CPU 2:                      CPU 3:
takes a spinlock
                      tries to get lock
                      -> xen_qlock_wait()
frees the lock
-> xen_qlock_kick(cpu2)
                      -> xen_clear_irq_pending()

takes lock again
                                                  tries to get lock
                                                  -> *lock = _Q_SLOW_VAL
                      -> *lock == _Q_SLOW_VAL ?
                      -> xen_poll_irq()
frees the lock
-> xen_qlock_kick(cpu3)

And cpu 2 will sleep forever.

This can be avoided easily by modifying xen_qlock_wait() to call
xen_poll_irq() only if the related irq was not pending and to call
xen_clear_irq_pending() only if it was pending.

Cc: stable@vger.kernel.org
Cc: Waiman.Long@hp.com
Cc: peterz@infradead.org
Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
Signed-off-by: Juergen Gross
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/xen/spinlock.c | 15 +++++----------
 1 file changed, 5 insertions(+), 10 deletions(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 3d6e006..eb750a18 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -45,17 +45,12 @@ static void xen_qlock_wait(u8 *byte, u8 val)
 	if (irq == -1)
 		return;
 
-	/* clear pending */
-	xen_clear_irq_pending(irq);
-	barrier();
+	/* If irq pending already clear it and return. */
+	if (xen_test_irq_pending(irq)) {
+		xen_clear_irq_pending(irq);
+		return;
+	}
 
-	/*
-	 * We check the byte value after clearing pending IRQ to make sure
-	 * that we won't miss a wakeup event because of the clearing.
-	 *
-	 * The sync_clear_bit() call in xen_clear_irq_pending() is atomic.
-	 * So it is effectively a memory barrier for x86.
-	 */
 	if (READ_ONCE(*byte) != val)
 		return;
 
-- 
2.7.4
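
For reference, a sketch of how xen_qlock_wait() reads with this patch applied. Only the lines marked '+' in the hunk above come from the patch itself; the surrounding lines (the lock_kicker_irq read, the "not initialized yet" check, and the final xen_poll_irq() call) are assumed from the context of arch/x86/xen/spinlock.c in this stable tree and are not quoted from the diff, and the comments are descriptive rather than verbatim:

static void xen_qlock_wait(u8 *byte, u8 val)
{
	int irq = __this_cpu_read(lock_kicker_irq);

	/* If kicker interrupts not initialized yet, just spin. */
	if (irq == -1)
		return;

	/*
	 * If the irq is already pending, consume it and return: the kick
	 * this waiter would otherwise sleep for has already arrived.
	 */
	if (xen_test_irq_pending(irq)) {
		xen_clear_irq_pending(irq);
		return;
	}

	/* If the lock byte no longer holds the wait value, stop waiting. */
	if (READ_ONCE(*byte) != val)
		return;

	/* Block until the irq becomes pending (or a spurious wakeup). */
	xen_poll_irq(irq);
}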