block: let io_schedule() flush the plug inline
author	Jens Axboe <jaxboe@fusionio.com>
	Sat, 16 Apr 2011 11:27:55 +0000 (13:27 +0200)
committer	Jens Axboe <jaxboe@fusionio.com>
	Sat, 16 Apr 2011 11:27:55 +0000 (13:27 +0200)
Linus correctly observes that the most important dispatch cases
are now done from kblockd, which isn't ideal for latency reasons.
The original reason for switching dispatches out-of-line was to
avoid too deep a stack, so by letting _only_ the "accidental"
flush done directly from schedule() be offloaded to kblockd,
we should be able to get the best of both worlds.

So add a blk_schedule_flush_plug() that offloads to kblockd,
and only use that from the schedule() path.
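
Condensed, the split added in the blkdev.h hunk below looks like
this -- blk_flush_plug() keeps dispatching inline (which is what
io_schedule() ends up using), and only the new helper passes
from_schedule=true so that blk_flush_plug_list() can offload the
queue runs to kblockd:

	/* explicit flush, e.g. from io_schedule(): run the queues inline */
	static inline void blk_flush_plug(struct task_struct *tsk)
	{
		if (tsk->plug)
			blk_flush_plug_list(tsk->plug, false);	/* no offload */
	}

	/* implicit flush from the schedule() path: punt to kblockd */
	static inline void blk_schedule_flush_plug(struct task_struct *tsk)
	{
		if (tsk->plug)
			blk_flush_plug_list(tsk->plug, true);	/* kblockd offload */
	}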

Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
include/linux/blkdev.h
kernel/sched.c

diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 1c76506..ec0357d 100644
@@ -872,6 +872,14 @@ static inline void blk_flush_plug(struct task_struct *tsk)
        struct blk_plug *plug = tsk->plug;
 
        if (plug)
+               blk_flush_plug_list(plug, false);
+}
+
+static inline void blk_schedule_flush_plug(struct task_struct *tsk)
+{
+       struct blk_plug *plug = tsk->plug;
+
+       if (plug)
                blk_flush_plug_list(plug, true);
 }
 
@@ -1317,6 +1325,11 @@ static inline void blk_flush_plug(struct task_struct *task)
 {
 }
 
+static inline void blk_schedule_flush_plug(struct task_struct *task)
+{
+}
+
+
 static inline bool blk_needs_flush_plug(struct task_struct *tsk)
 {
        return false;
diff --git a/kernel/sched.c b/kernel/sched.c
index a187c3f..312f8b9 100644
@@ -4118,7 +4118,7 @@ need_resched:
                         */
                        if (blk_needs_flush_plug(prev)) {
                                raw_spin_unlock(&rq->lock);
-                               blk_flush_plug(prev);
+                               blk_schedule_flush_plug(prev);
                                raw_spin_lock(&rq->lock);
                        }
                }
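
For reference, the from_schedule flag is what lets blk_flush_plug_list()
hand the actual queue runs to the kblockd workqueue instead of doing
them on the already-deep schedule() stack. Roughly the following
pattern, sketched here with made-up names (dispatch_plugged,
run_one_queue, plug_work) rather than the real block-layer internals:

	/* illustrative sketch only, not the in-tree implementation */
	static void dispatch_plugged(struct request_queue *q, bool from_schedule)
	{
		if (from_schedule)
			queue_work(kblockd_workqueue, &q->plug_work);	/* async, shallow stack */
		else
			run_one_queue(q);				/* inline, lowest latency */
	}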