x86 rwsem: avoid taking slow path when stealing write lock
author     Michel Lespinasse <walken@google.com>
           Tue, 7 May 2013 13:46:01 +0000 (06:46 -0700)
committer  Linus Torvalds <torvalds@linux-foundation.org>
           Tue, 7 May 2013 14:20:17 +0000 (07:20 -0700)
commit     a31a369b07cf306ae1de0b2d4a52c3821a570bf6
tree       935a847603f3079bb3fae78a544c6bbf79aedbef
parent     25c39325968bbcebe6cd2a1991228c9dfb48d655
x86 rwsem: avoid taking slow path when stealing write lock

Modify __down_write[_nested] and __down_write_trylock to grab the write
lock whenever the active count is 0, even if there are queued waiters
(they must be writers pending wakeup, since the active count is 0).
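
For illustration, here is a minimal C sketch of the idea (this is not the
inline-asm fast path actually added to arch/x86/include/asm/rwsem.h; the
helper name and the plain cmpxchg() loop are for exposition only, assuming
the usual x86 count layout where RWSEM_ACTIVE_MASK covers the active
reader/writer count and RWSEM_ACTIVE_WRITE_BIAS combines the waiting bias
with one active writer):

  #include <linux/rwsem.h>        /* struct rw_semaphore, RWSEM_* biases */

  /* Sketch only, not the patch itself. */
  static inline int __down_write_trylock_sketch(struct rw_semaphore *sem)
  {
          long old = sem->count;

          for (;;) {
                  long seen;

                  /* Someone actively holds the lock: we cannot steal it. */
                  if (old & RWSEM_ACTIVE_MASK)
                          return 0;

                  /*
                   * Active count is 0: grab the write lock even if the
                   * waiter bias is set (queued writers pending wakeup).
                   */
                  seen = cmpxchg(&sem->count, old,
                                 old + RWSEM_ACTIVE_WRITE_BIAS);
                  if (seen == old)
                          return 1;
                  old = seen;     /* count changed under us; re-check */
          }
  }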

Note that this is an optimization only; architectures without this
optimization will still work fine:

- __down_write() would take the slow path, which takes the wait_lock and
  then tries to steal the lock there (as in the spinlock-based rwsem
  implementation); see the sketch after this list

- __down_write_trylock() would fail, but callers must be ready to deal
  with that anyway - since there are writers pending wakeup, one of them
  could have raced with us and obtained the lock before we could steal it.
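
For architectures relying on the fallback, the steal attempt the slow path
makes while holding sem->wait_lock looks roughly like this (sketch only;
the helper name is illustrative, not an existing kernel function):

  /* Called with sem->wait_lock held; returns 1 if the write lock was stolen. */
  static int rwsem_try_steal_write_sketch(struct rw_semaphore *sem)
  {
          long old = sem->count;

          /* Only possible when nobody actively holds the lock. */
          if (old & RWSEM_ACTIVE_MASK)
                  return 0;

          return cmpxchg(&sem->count, old,
                         old + RWSEM_ACTIVE_WRITE_BIAS) == old;
  }

When this fails, the slow path queues the task on sem->wait_list and sleeps;
with the fast-path change above, x86 can often succeed in the same situation
without taking wait_lock at all.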

Signed-off-by: Michel Lespinasse <walken@google.com>
Reviewed-by: Peter Hurley <peter@hurleysoftware.com>
Acked-by: Davidlohr Bueso <davidlohr.bueso@hp.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
arch/x86/include/asm/rwsem.h