sched: Make cond_resched_*lock() variants consistent vs. might_sleep()
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Thu, 23 Sep 2021 16:54:37 +0000 (18:54 +0200)
Commit:     Peter Zijlstra <peterz@infradead.org>
CommitDate: Fri, 1 Oct 2021 11:57:50 +0000 (13:57 +0200)
Commit 3427445afd26 ("sched: Exclude cond_resched() from nested sleep
test") removed the task state check of __might_sleep() for
cond_resched_lock() because cond_resched_lock() is not a voluntary
scheduling point which blocks: it is a preemption point which merely
requires the lock holder to drop and reacquire the spin lock.

The same rationale applies to cond_resched_rwlock_read/write(), but those
were not touched.

Make it consistent and use the non-state-checking __might_resched() there
as well.
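
As an illustration of the false positive this avoids, consider the
following sketch (not taken from this patch; every "my_*" identifier is
hypothetical): a caller that has already set its task state, e.g. via
prepare_to_wait(), and then scans a table under a read lock, using
cond_resched_rwlock_read() purely as a lock-break / preemption point.

  #include <linux/sched.h>
  #include <linux/spinlock.h>
  #include <linux/wait.h>

  #define MY_NR_BUCKETS 1024

  static DEFINE_RWLOCK(my_table_lock);
  static DECLARE_WAIT_QUEUE_HEAD(my_waitq);

  /* Hypothetical helpers, stubbed out for the sketch. */
  static bool my_event_done(void) { return false; }
  static void my_scan_bucket(unsigned int idx) { }

  static void my_scan_before_sleep(void)
  {
          DEFINE_WAIT(wait);
          unsigned int i;

          /* Task state is now TASK_UNINTERRUPTIBLE, i.e. !TASK_RUNNING. */
          prepare_to_wait(&my_waitq, &wait, TASK_UNINTERRUPTIBLE);

          read_lock(&my_table_lock);
          for (i = 0; i < MY_NR_BUCKETS; i++) {
                  my_scan_bucket(i);
                  /*
                   * Preemption point: drops and retakes the read lock
                   * when a writer is waiting or a reschedule is due.
                   * It never blocks; the task stays on the runqueue
                   * even though its state is not TASK_RUNNING.
                   */
                  cond_resched_rwlock_read(&my_table_lock);
          }
          read_unlock(&my_table_lock);

          if (!my_event_done())
                  schedule();
          finish_wait(&my_waitq, &wait);
  }

With CONFIG_DEBUG_ATOMIC_SLEEP enabled, the state-checking __might_sleep()
in the macro would emit the "do not call blocking ops when !TASK_RUNNING"
warning at the cond_resched_rwlock_read() call, although nothing there
ever blocks.  __might_resched() keeps the preemption and RCU debug checks
without warning about the task state, matching the behaviour
cond_resched_lock() has had since commit 3427445afd26.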

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20210923165357.991262778@linutronix.de
diff --git a/include/linux/sched.h b/include/linux/sched.h
index b38f002..7a989f2 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2051,14 +2051,14 @@ extern int __cond_resched_rwlock_write(rwlock_t *lock);
        __cond_resched_lock(lock);                                      \
 })
 
-#define cond_resched_rwlock_read(lock) ({                      \
-       __might_sleep(__FILE__, __LINE__, PREEMPT_LOCK_OFFSET); \
-       __cond_resched_rwlock_read(lock);                       \
+#define cond_resched_rwlock_read(lock) ({                              \
+       __might_resched(__FILE__, __LINE__, PREEMPT_LOCK_OFFSET);       \
+       __cond_resched_rwlock_read(lock);                               \
 })
 
-#define cond_resched_rwlock_write(lock) ({                     \
-       __might_sleep(__FILE__, __LINE__, PREEMPT_LOCK_OFFSET); \
-       __cond_resched_rwlock_write(lock);                      \
+#define cond_resched_rwlock_write(lock) ({                             \
+       __might_resched(__FILE__, __LINE__, PREEMPT_LOCK_OFFSET);       \
+       __cond_resched_rwlock_write(lock);                              \
 })
 
 static inline void cond_resched_rcu(void)