sched: Core-wide rq->lock
author     Peter Zijlstra <peterz@infradead.org>
           Tue, 17 Nov 2020 23:19:34 +0000 (18:19 -0500)
committer  Peter Zijlstra <peterz@infradead.org>
           Wed, 12 May 2021 09:43:27 +0000 (11:43 +0200)
commit     9edeaea1bc452372718837ed2ba775811baf1ba1
tree       a4c002b7be5b284c0f7d2bd6647e602b482fc4aa
parent     d66f1b06b5b438cd20ba3664b8eef1f9c79e84bf
sched: Core-wide rq->lock

Introduce the basic infrastructure to have a core-wide rq->lock.

This relies on the rq->__lock acquisition order being in increasing CPU
number (inside a core). It is also constrained to SMT8 by lockdep (and
to SMT256 by preempt_count).

Luckily SMT8 is the maximum SMT count supported by Linux (MIPS, SPARC
and Power are known to have it).

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Don Hiatt <dhiatt@digitalocean.com>
Tested-by: Hongyu Ning <hongyu.ning@linux.intel.com>
Tested-by: Vincent Guittot <vincent.guittot@linaro.org>
Link: https://lkml.kernel.org/r/YJUNfzSgptjX7tG6@hirez.programming.kicks-ass.net
kernel/Kconfig.preempt
kernel/sched/core.c
kernel/sched/sched.h