block: kill extra rcu lock/unlock in queue enter
author		Pavel Begunkov <asml.silence@gmail.com>
		Thu, 21 Oct 2021 13:30:52 +0000 (14:30 +0100)
committer	Jens Axboe <axboe@kernel.dk>
		Thu, 21 Oct 2021 14:37:26 +0000 (08:37 -0600)
blk_try_enter_queue() already takes rcu_read_lock/unlock, so we can
avoid the second pair inside percpu_ref_tryget_live() by using the
newly added percpu_ref_tryget_live_rcu().

As rcu_read_lock/unlock imply barrier()s, the overhead is pretty
noticeable, especially for !CONFIG_PREEMPT_RCU (the default for some
distributions), where __rcu_read_lock/unlock() are not inlined.

before:
3.20%  io_uring  [kernel.vmlinux]  [k] __rcu_read_unlock
3.05%  io_uring  [kernel.vmlinux]  [k] __rcu_read_lock

after:
2.52%  io_uring  [kernel.vmlinux]  [k] __rcu_read_unlock
2.28%  io_uring  [kernel.vmlinux]  [k] __rcu_read_lock

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/6b11c67ea495ed9d44f067622d852de4a510ce65.1634822969.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
block/blk-core.c

index dfa1993..fd389a1 100644
@@ -389,7 +389,7 @@ EXPORT_SYMBOL(blk_cleanup_queue);
 static bool blk_try_enter_queue(struct request_queue *q, bool pm)
 {
        rcu_read_lock();
-       if (!percpu_ref_tryget_live(&q->q_usage_counter))
+       if (!percpu_ref_tryget_live_rcu(&q->q_usage_counter))
                goto fail;
 
        /*