From: Ben Gardon
Date: Tue, 2 Feb 2021 18:57:13 +0000 (-0800)
Subject: sched: Add needbreak for rwlocks
X-Git-Tag: accepted/tizen/unified/20230118.172025~7642^2~138
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=a09a689a534183c48f200bc2de1ae61ae9c462ad;p=platform%2Fkernel%2Flinux-rpi.git

sched: Add needbreak for rwlocks

Contention awareness while holding a spin lock is essential for
reducing latency when long running kernel operations can hold that
lock. Add the same contention detection interface for read/write
spin locks.

CC: Ingo Molnar
CC: Will Deacon
Acked-by: Peter Zijlstra
Acked-by: Davidlohr Bueso
Acked-by: Waiman Long
Acked-by: Paolo Bonzini
Signed-off-by: Ben Gardon
Message-Id: <20210202185734.1680553-8-bgardon@google.com>
Signed-off-by: Paolo Bonzini
---

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 6e3a5ee..5d1378e5 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1912,6 +1912,23 @@ static inline int spin_needbreak(spinlock_t *lock)
 #endif
 }
 
+/*
+ * Check if a rwlock is contended.
+ * Returns non-zero if there is another task waiting on the rwlock.
+ * Returns zero if the lock is not contended or the system / underlying
+ * rwlock implementation does not support contention detection.
+ * Technically does not depend on CONFIG_PREEMPTION, but a general need
+ * for low latency.
+ */
+static inline int rwlock_needbreak(rwlock_t *lock)
+{
+#ifdef CONFIG_PREEMPTION
+	return rwlock_is_contended(lock);
+#else
+	return 0;
+#endif
+}
+
 static __always_inline bool need_resched(void)
 {
 	return unlikely(tif_need_resched());