block, bfq: do not plug I/O for bfq_queues with no proc refs
authorPaolo Valente <paolo.valente@linaro.org>
Mon, 3 Feb 2020 10:40:54 +0000 (11:40 +0100)
committerGreg Kroah-Hartman <gregkh@linuxfoundation.org>
Mon, 24 Feb 2020 07:36:31 +0000 (08:36 +0100)
[ Upstream commit f718b093277df582fbf8775548a4f163e664d282 ]

Commit 478de3380c1c ("block, bfq: deschedule empty bfq_queues not
referred by any process") fixed commit 3726112ec731 ("block, bfq:
re-schedule empty queues if they deserve I/O plugging") by
descheduling an empty bfq_queue when it is left with no process
reference. Yet this still left one case uncovered: an empty bfq_queue
with no process reference that remains in service. This happens for
an in-service sync bfq_queue that is deemed to deserve I/O-dispatch
plugging when it remains empty. Yet no new requests will arrive for
such a bfq_queue if no process sends requests to it any longer. Even
worse, the bfq_queue may be prematurely freed while still in service
(because no reference to it may remain).

This commit solves this problem by preventing I/O dispatch from being
plugged for the in-service bfq_queue, if the latter has no process
reference (the bfq_queue is then prevented from remaining in service).
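
For reference, the "process reference" count that the new checks rely
on is the value returned by bfqq_process_refs(): the queue's total
reference count minus the references bfq itself holds internally, so
that what remains counts the processes still using the queue. A rough
paraphrase of that helper (field names as of kernels around this
commit; they may differ in other versions):

	static int bfqq_process_refs(struct bfq_queue *bfqq)
	{
		/*
		 * Approximate sketch: subtract the internal references
		 * (allocated requests, entity on a service tree, weight
		 * counter) from the total reference count; the remainder
		 * is held by processes. If it is zero, no process will
		 * queue I/O on this bfq_queue again, so plugging dispatch
		 * for it is pointless.
		 */
		return bfqq->ref - bfqq->allocated - bfqq->entity.on_st -
			(bfqq->weight_counter != NULL);
	}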

Fixes: 3726112ec731 ("block, bfq: re-schedule empty queues if they deserve I/O plugging")
Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Reported-by: Patrick Dung <patdung100@gmail.com>
Tested-by: Patrick Dung <patdung100@gmail.com>
Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
block/bfq-iosched.c

index 0c62144..5498d05 100644
@@ -3444,6 +3444,10 @@ static void bfq_dispatch_remove(struct request_queue *q, struct request *rq)
 static bool idling_needed_for_service_guarantees(struct bfq_data *bfqd,
                                                 struct bfq_queue *bfqq)
 {
+       /* No point in idling for bfqq if it won't get requests any longer */
+       if (unlikely(!bfqq_process_refs(bfqq)))
+               return false;
+
        return (bfqq->wr_coeff > 1 &&
                (bfqd->wr_busy_queues <
                 bfq_tot_busy_queues(bfqd) ||
@@ -4077,6 +4081,10 @@ static bool idling_boosts_thr_without_issues(struct bfq_data *bfqd,
                bfqq_sequential_and_IO_bound,
                idling_boosts_thr;
 
+       /* No point in idling for bfqq if it won't get requests any longer */
+       if (unlikely(!bfqq_process_refs(bfqq)))
+               return false;
+
        bfqq_sequential_and_IO_bound = !BFQQ_SEEKY(bfqq) &&
                bfq_bfqq_IO_bound(bfqq) && bfq_bfqq_has_short_ttime(bfqq);
 
@@ -4170,6 +4178,10 @@ static bool bfq_better_to_idle(struct bfq_queue *bfqq)
        struct bfq_data *bfqd = bfqq->bfqd;
        bool idling_boosts_thr_with_no_issue, idling_needed_for_service_guar;
 
+       /* No point in idling for bfqq if it won't get requests any longer */
+       if (unlikely(!bfqq_process_refs(bfqq)))
+               return false;
+
        if (unlikely(bfqd->strict_guarantees))
                return true;