sched: Replace spin_unlock_wait() with lock/unlock pair
author	Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Thu, 29 Jun 2017 19:08:26 +0000 (12:08 -0700)
committer	Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Fri, 11 Aug 2017 20:09:14 +0000 (13:09 -0700)
There is no agreed-upon definition of spin_unlock_wait()'s semantics,
and it appears that all callers could do just as well with a lock/unlock
pair.  This commit therefore replaces the spin_unlock_wait() call in
do_task_dead() with spin_lock() followed immediately by spin_unlock().
This should be safe from a performance perspective because the lock is
this task's ->pi_lock, and do_task_dead() is called only once, as the task exits.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Andrea Parri <parri.andrea@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
[ paulmck: Drop smp_mb() based on Peter Zijlstra's analysis:
  http://lkml.kernel.org/r/20170811144150.26gowhxte7ri5fpk@hirez.programming.kicks-ass.net ]

kernel/sched/core.c

index 17c667b..5d22323 100644
@@ -3352,8 +3352,8 @@ void __noreturn do_task_dead(void)
         * To avoid it, we have to wait for releasing tsk->pi_lock which
         * is held by try_to_wake_up()
         */
-       smp_mb();
-       raw_spin_unlock_wait(&current->pi_lock);
+       raw_spin_lock_irq(&current->pi_lock);
+       raw_spin_unlock_irq(&current->pi_lock);
 
        /* Causes final put_task_struct in finish_task_switch(): */
        __set_current_state(TASK_DEAD);
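
For illustration only, here is a minimal userspace sketch of the same pattern using
POSIX spinlocks rather than kernel raw spinlocks. The names (waker(),
wait_for_critical_sections()), the scenario, and the build command are assumptions
for the sketch and are not part of the kernel change; the point is simply that a
lock/unlock pair gives the same "wait until any in-flight critical section has
finished" guarantee that spin_unlock_wait() was meant to provide.

/*
 * Hypothetical userspace sketch (not kernel code) of the pattern this
 * commit applies: instead of spinning until another thread's critical
 * section is over (the spin_unlock_wait() idiom), acquire and
 * immediately release the same lock.  The acquire cannot complete
 * until the previous holder has released the lock, and it gives
 * well-defined ordering against that holder's critical section.
 *
 * Assumed build command: gcc -pthread unlock_wait_demo.c -o unlock_wait_demo
 */
#include <pthread.h>
#include <stdio.h>

static pthread_spinlock_t lock;
static int shared_data;

/* Stands in for try_to_wake_up(): a short critical section under the lock. */
static void *waker(void *arg)
{
	pthread_spin_lock(&lock);
	shared_data = 42;		/* work done while holding the lock */
	pthread_spin_unlock(&lock);
	return NULL;
}

/*
 * Stands in for do_task_dead(): it must not proceed while a concurrent
 * critical section on 'lock' is still in flight.  Rather than spinning
 * on the lock word (the spin_unlock_wait() approach), take and drop it.
 */
static void wait_for_critical_sections(void)
{
	pthread_spin_lock(&lock);	/* blocks until any current holder is done */
	pthread_spin_unlock(&lock);	/* no critical section of our own is needed */
}

int main(void)
{
	pthread_t t;

	pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);
	pthread_create(&t, NULL, waker, NULL);

	wait_for_critical_sections();
	/* Any critical section that began before the acquire above has completed. */

	pthread_join(t, NULL);
	printf("shared_data = %d\n", shared_data);
	pthread_spin_destroy(&lock);
	return 0;
}

Because the lock acquisition itself orders the caller after the previous holder's
release, no separate full barrier is required, which is consistent with dropping the
smp_mb() noted in the bracketed comment above.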