From: Ming Lei
Date: Mon, 27 Mar 2017 12:06:58 +0000 (+0800)
Subject: block: block new I/O just after queue is set as dying
X-Git-Tag: v4.14-rc1~1015^2~264
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=d3cfb2a0ac0b8487d28a1ee207c29617bf6e6820;p=platform%2Fkernel%2Flinux-rpi3.git

block: block new I/O just after queue is set as dying

Before commit 780db2071a ("blk-mq: decouple blk-mq freezing from
generic bypassing"), the dying flag was checked before entering the
queue. Tejun converted that check into one on .mq_freeze_depth, on the
assumption that the counter is increased just after the dying flag is
set. Unfortunately we don't do that in blk_set_queue_dying().

This patch calls blk_freeze_queue_start() in blk_set_queue_dying(), so
that new I/O is blocked as soon as the queue is set as dying.

Given blk_set_queue_dying() is always called in the remove path of a
block device, and the queue will be cleaned up later, we don't need to
worry about undoing the counter.

Cc: Tejun Heo
Reviewed-by: Hannes Reinecke
Signed-off-by: Ming Lei
Reviewed-by: Johannes Thumshirn
Reviewed-by: Bart Van Assche
Signed-off-by: Jens Axboe
---

diff --git a/block/blk-core.c b/block/blk-core.c
index 7b66f76..43b7d06 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -500,6 +500,13 @@ void blk_set_queue_dying(struct request_queue *q)
 	queue_flag_set(QUEUE_FLAG_DYING, q);
 	spin_unlock_irq(q->queue_lock);
 
+	/*
+	 * When queue DYING flag is set, we need to block new req
+	 * entering queue, so we call blk_freeze_queue_start() to
+	 * prevent I/O from crossing blk_queue_enter().
+	 */
+	blk_freeze_queue_start(q);
+
 	if (q->mq_ops)
 		blk_mq_wake_waiters(q);
 	else {
@@ -672,9 +679,9 @@ int blk_queue_enter(struct request_queue *q, bool nowait)
 		/*
 		 * read pair of barrier in blk_freeze_queue_start(),
 		 * we need to order reading __PERCPU_REF_DEAD flag of
-		 * .q_usage_counter and reading .mq_freeze_depth,
-		 * otherwise the following wait may never return if the
-		 * two reads are reordered.
+		 * .q_usage_counter and reading .mq_freeze_depth or
+		 * queue dying flag, otherwise the following wait may
+		 * never return if the two reads are reordered.
 		 */
 		smp_rmb();
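
For reference, below is a minimal userspace sketch of the ordering the
two hunks above rely on. It is an illustration under stated
assumptions, not kernel code: C11 atomics and fences stand in for the
percpu-ref DEAD flag, .mq_freeze_depth and smp_wmb()/smp_rmb(), and
names such as set_queue_dying() and queue_enter() are hypothetical
stand-ins for the real blk-mq functions.

#include <stdatomic.h>
#include <stdio.h>

static atomic_int dying;              /* QUEUE_FLAG_DYING stand-in  */
static atomic_int mq_freeze_depth;    /* .mq_freeze_depth stand-in  */
static atomic_int usage_counter_dead; /* __PERCPU_REF_DEAD stand-in */

/* Writer: roughly what blk_set_queue_dying() does after this patch. */
static void set_queue_dying(void)
{
	atomic_store_explicit(&dying, 1, memory_order_relaxed);
	/* blk_freeze_queue_start(): bump the freeze depth... */
	atomic_fetch_add_explicit(&mq_freeze_depth, 1, memory_order_relaxed);
	/* ...then kill the ref; ~ smp_wmb(), pairs with the reader's rmb */
	atomic_thread_fence(memory_order_release);
	atomic_store_explicit(&usage_counter_dead, 1, memory_order_relaxed);
}

/* Reader: roughly the slow path of blk_queue_enter(). */
static int queue_enter(void)
{
	/* percpu_ref_tryget_live() failing <=> DEAD flag observed */
	if (!atomic_load_explicit(&usage_counter_dead, memory_order_relaxed))
		return 0;	/* got a reference, proceed with the I/O */
	/*
	 * ~ smp_rmb(): without this, the dying/depth reads below could
	 * be satisfied before the DEAD read above, and the caller could
	 * wait forever on a stale "not frozen, not dying" view.
	 */
	atomic_thread_fence(memory_order_acquire);
	if (atomic_load_explicit(&dying, memory_order_relaxed))
		return -19;	/* -ENODEV: queue is going away */
	if (atomic_load_explicit(&mq_freeze_depth, memory_order_relaxed))
		return -11;	/* -EAGAIN: would wait for unfreeze here */
	return 0;
}

int main(void)
{
	set_queue_dying();
	printf("queue_enter -> %d\n", queue_enter()); /* prints -19 */
	return 0;
}

The release/acquire fence pair models why the smp_rmb() matters: once
the reader has observed the DEAD flag, it is guaranteed to also observe
the depth and dying stores made before the ref was killed, so the wait
condition cannot be evaluated against stale values.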