From ec84b27f9b3b569f9235413d1945a2006b97b0aa Mon Sep 17 00:00:00 2001
From: Anna-Maria Gleixner
Date: Fri, 25 May 2018 11:05:06 +0200
Subject: [PATCH] rcu: Update documentation of rcu_read_unlock()

Since commit b4abf91047cf ("rtmutex: Make wait_lock irq safe") the
explanation in the rcu_read_unlock() documentation about the irq-unsafe
rtmutex wait_lock is no longer valid.

Remove it to prevent kernel developers who read the documentation from
relying on it.

Suggested-by: Eric W. Biederman
Signed-off-by: Anna-Maria Gleixner
Signed-off-by: Thomas Gleixner
Reviewed-by: Paul E. McKenney
Acked-by: "Eric W. Biederman"
Cc: bigeasy@linutronix.de
Link: https://lkml.kernel.org/r/20180525090507.22248-2-anna-maria@linutronix.de
---
 include/linux/rcupdate.h | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index e679b17..65163aa 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -652,9 +652,7 @@ static inline void rcu_read_lock(void)
  * Unfortunately, this function acquires the scheduler's runqueue and
  * priority-inheritance spinlocks. This means that deadlock could result
  * if the caller of rcu_read_unlock() already holds one of these locks or
- * any lock that is ever acquired while holding them; or any lock which
- * can be taken from interrupt context because rcu_boost()->rt_mutex_lock()
- * does not disable irqs while taking ->wait_lock.
+ * any lock that is ever acquired while holding them.
  *
  * That said, RCU readers are never priority boosted unless they were
  * preempted. Therefore, one way to avoid deadlock is to make sure
-- 
2.7.4
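
Note (not part of the patch): the hunk above only trims the outdated
irq-related caveat; rcu_read_unlock() can still acquire the scheduler's
runqueue and priority-inheritance locks when it must deboost a reader
that was preempted and priority boosted. Below is a minimal sketch of
the deadlock-avoidance guidance the comment retains, assuming a
hypothetical variable my_flag and reader function read_my_flag() that
are not part of the kernel source: because a reader that is never
preempted is never boosted, keeping preemption disabled across the
read-side critical section ensures the outermost rcu_read_unlock()
never needs those locks.

#include <linux/rcupdate.h>
#include <linux/preempt.h>
#include <linux/compiler.h>

static int my_flag;	/* hypothetical datum updated elsewhere */

/*
 * Sketch only: with preemption disabled across the critical section,
 * the reader cannot be preempted, hence is never priority boosted,
 * hence rcu_read_unlock() never enters the rt_mutex/deboost path that
 * takes the runqueue and priority-inheritance locks.
 */
static int read_my_flag(void)
{
	int val;

	preempt_disable();
	rcu_read_lock();
	val = READ_ONCE(my_flag);
	rcu_read_unlock();
	preempt_enable();

	return val;
}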