sched: introduce this_rq_lock_irq()
author	Johannes Weiner <hannes@cmpxchg.org>
	Fri, 26 Oct 2018 22:06:23 +0000 (15:06 -0700)
committer	Linus Torvalds <torvalds@linux-foundation.org>
	Fri, 26 Oct 2018 23:26:32 +0000 (16:26 -0700)
do_sched_yield() disables IRQs, looks up this_rq(), and locks it.  The
next patch adds another site with the same pattern, so provide a
convenience function for it.

Link: http://lkml.kernel.org/r/20180828172258.3185-8-hannes@cmpxchg.org
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Suren Baghdasaryan <surenb@google.com>
Tested-by: Daniel Drake <drake@endlessm.com>
Cc: Christopher Lameter <cl@linux.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Johannes Weiner <jweiner@fb.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Enderborg <peter.enderborg@sony.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Vinayak Menon <vinmenon@codeaurora.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
kernel/sched/core.c
kernel/sched/sched.h

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 2e696b0..f3efef3 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4933,9 +4933,7 @@ static void do_sched_yield(void)
        struct rq_flags rf;
        struct rq *rq;
 
-       local_irq_disable();
-       rq = this_rq();
-       rq_lock(rq, &rf);
+       rq = this_rq_lock_irq(&rf);
 
        schedstat_inc(rq->yld_count);
        current->sched_class->yield_task(rq);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 65a75b3..1de189b 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1157,6 +1157,18 @@ rq_unlock(struct rq *rq, struct rq_flags *rf)
        raw_spin_unlock(&rq->lock);
 }
 
+static inline struct rq *
+this_rq_lock_irq(struct rq_flags *rf)
+       __acquires(rq->lock)
+{
+       struct rq *rq;
+
+       local_irq_disable();
+       rq = this_rq();
+       rq_lock(rq, rf);
+       return rq;
+}
+
 #ifdef CONFIG_NUMA
 enum numa_topology_type {
        NUMA_DIRECT,