From e708d7ad80737496870fd0b6794704d063fb0cdc Mon Sep 17 00:00:00 2001
From: Sebastian Andrzej Siewior
Date: Mon, 4 Aug 2014 15:31:08 +0200
Subject: [PATCH] perf: Do poll_wait() before checking condition in perf_poll()

One should first enqueue to the waitqueue and then check for the
condition. If the condition becomes true after mutex_unlock() but before
poll_wait(), then we lose it and would have to wait for another wakeup.

This has been like this since v2.6.31-rc1 commit c7138f37f9
("perf_counter: fix perf_poll()"). Before that it was slightly worse.
I guess we get enough wakeups, so if we miss one here it doesn't really
matter. It is still a bad example.

Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Peter Zijlstra
Link: http://lkml.kernel.org/r/1407159068-1478-1-git-send-email-bigeasy@linutronix.de
Cc: Arnaldo Carvalho de Melo
Cc: Linus Torvalds
Signed-off-by: Ingo Molnar
---
 kernel/events/core.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index a254605..2d7363a 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -3629,6 +3629,7 @@ static unsigned int perf_poll(struct file *file, poll_table *wait)
 	struct ring_buffer *rb;
 	unsigned int events = POLLHUP;
 
+	poll_wait(file, &event->waitq, wait);
 	/*
 	 * Pin the event->rb by taking event->mmap_mutex; otherwise
 	 * perf_event_set_output() can swizzle our rb and make us miss wakeups.
@@ -3638,9 +3639,6 @@ static unsigned int perf_poll(struct file *file, poll_table *wait)
 	if (rb)
 		events = atomic_xchg(&rb->poll, 0);
 	mutex_unlock(&event->mmap_mutex);
-
-	poll_wait(file, &event->waitq, wait);
-
 	return events;
 }
-- 
2.7.4
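
For readers less familiar with the poll machinery: the ordering the patch establishes is the usual ->poll() pattern, i.e. register on the waitqueue via poll_wait() first and only then evaluate the readiness condition, so a wake_up() that races with the check is not lost. Below is a minimal, hypothetical driver sketch of that pattern; struct my_dev, data_ready and my_dev_poll() are made-up names for illustration and are not part of the patch or the kernel tree.

#include <linux/fs.h>
#include <linux/poll.h>
#include <linux/wait.h>

/* Hypothetical per-device state, assumed to be set up at open() time. */
struct my_dev {
	wait_queue_head_t waitq;
	bool data_ready;
};

static unsigned int my_dev_poll(struct file *file, poll_table *wait)
{
	struct my_dev *dev = file->private_data;
	unsigned int events = 0;

	/*
	 * Register on the waitqueue before looking at the condition.
	 * If data_ready becomes true after this point, the producer's
	 * wake_up() is not lost: poll/select will re-check the condition.
	 */
	poll_wait(file, &dev->waitq, wait);

	if (dev->data_ready)
		events |= POLLIN | POLLRDNORM;

	/*
	 * The producer side would do:
	 *	dev->data_ready = true;
	 *	wake_up(&dev->waitq);
	 */
	return events;
}

Checking dev->data_ready before calling poll_wait() would reintroduce the window the patch closes in perf_poll(): a wakeup arriving between the check and the registration would be missed until the next one.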