sched: Clean up the might_sleep() underscore zoo
author    Thomas Gleixner <tglx@linutronix.de>
          Thu, 23 Sep 2021 16:54:35 +0000 (18:54 +0200)
committer Peter Zijlstra <peterz@infradead.org>
          Fri, 1 Oct 2021 11:57:49 +0000 (13:57 +0200)
commit    874f670e6088d3bff3972ecd44c1cb00610f9183
tree      ec4a3568328504f71947fdb4aa27133ed6e4a675
parent    1415b49bcd321bca7347f43f8b269c91ec46d1dc
sched: Clean up the might_sleep() underscore zoo

__might_sleep() vs. ___might_sleep() is hard to distinguish. Aside from that,
the three underscore variant is exposed to provide a checkpoint for
rescheduling points which are distinct from blocking points.

They are semantically a preemption point, which means that scheduling is
state preserving. A real blocking operation, e.g. mutex_lock() or wait*(),
cannot preserve a task state which is not equal to RUNNING.

While blocking on a "sleeping" spinlock in RT enabled kernels technically
falls into the voluntary scheduling category, because the task has to wait
until the contended spin/rw lock becomes available, it can semantically be
mapped to a voluntary preemption point: the RT lock substitution code and
the scheduler provide mechanisms to preserve the task state and to take
regular non-lock related wakeups into account.

Rename ___might_sleep() to __might_resched() to make the distinction of
these functions clear.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20210923165357.928693482@linutronix.de
include/linux/kernel.h
include/linux/sched.h
kernel/locking/spinlock_rt.c
kernel/sched/core.c