From: Jan Kara
Date: Fri, 1 Apr 2022 10:27:46 +0000 (+0200)
Subject: bfq: Drop pointless unlock-lock pair
X-Git-Tag: v5.15.73~3375
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=a107df383c163fdf9fdf748bc8f1a5e8f00d29c1;p=platform%2Fkernel%2Flinux-rpi.git

bfq: Drop pointless unlock-lock pair

commit fc84e1f941b91221092da5b3102ec82da24c5673 upstream.

In bfq_insert_request() we unlock bfqd->lock only to call
trace_block_rq_insert() and then lock bfqd->lock again. This is really
pointless since tracing is disabled if we really care about performance
and even if the tracepoint is enabled, it is a quick call.

CC: stable@vger.kernel.org
Tested-by: "yukuai (C)"
Signed-off-by: Jan Kara
Reviewed-by: Christoph Hellwig
Link: https://lore.kernel.org/r/20220401102752.8599-5-jack@suse.cz
Signed-off-by: Jens Axboe
Signed-off-by: Greg Kroah-Hartman
---

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index e118359..4ecfcb6 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -6012,11 +6012,8 @@ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
 		return;
 	}
 
-	spin_unlock_irq(&bfqd->lock);
-
 	trace_block_rq_insert(rq);
 
-	spin_lock_irq(&bfqd->lock);
 	bfqq = bfq_init_rq(rq);
 	if (!bfqq || at_head) {
 		if (at_head)