perf/core: Explain perf_sched_mutex
author Alexander Shishkin <alexander.shishkin@linux.intel.com>
Tue, 29 Aug 2017 14:01:03 +0000 (17:01 +0300)
committer Ingo Molnar <mingo@kernel.org>
Fri, 29 Sep 2017 11:28:30 +0000 (13:28 +0200)
To clarify why atomic_inc_return(&perf_sched_count) is not sufficient and
a mutex is needed to order static branch enabling vs. the atomic counter
increment, add a comment with a short explanation.

Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20170829140103.6563-1-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
kernel/events/core.c

index 6bc21e2..5ee6271 100644
@@ -9394,6 +9394,11 @@ static void account_event(struct perf_event *event)
                inc = true;
 
        if (inc) {
+               /*
+                * We need the mutex here because static_branch_enable()
+                * must complete *before* the perf_sched_count increment
+                * becomes visible.
+                */
                if (atomic_inc_not_zero(&perf_sched_count))
                        goto enabled;
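
For readers outside the kernel tree, here is a minimal userland C sketch of
the ordering pattern the new comment describes. It is an illustration under
stated assumptions, not the kernel implementation: C11 atomics and a pthread
mutex stand in for the kernel's atomic_t, static keys and perf_sched_mutex,
the names (sched_count, sched_events_enabled, account_sched_event,
inc_not_zero) are hypothetical stand-ins, and the CAS loop emulates
atomic_inc_not_zero(). The point is the slow path: the "branch" is enabled
under the mutex strictly before the counter's 0 -> 1 transition is
published, so any caller that wins the fast path can rely on the branch
already being enabled.

#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the kernel objects (illustration only). */
static atomic_bool sched_events_enabled;       /* plays the static branch  */
static atomic_int  sched_count;                /* plays perf_sched_count   */
static pthread_mutex_t sched_mutex = PTHREAD_MUTEX_INITIALIZER;
                                               /* plays perf_sched_mutex   */

/* Emulation of atomic_inc_not_zero(): increment only if currently != 0. */
static bool inc_not_zero(atomic_int *v)
{
	int old = atomic_load(v);

	while (old != 0)
		if (atomic_compare_exchange_weak(v, &old, old + 1))
			return true;
	return false;
}

static void account_sched_event(void)
{
	/*
	 * Fast path: succeeds only after some earlier slow path has
	 * published a non-zero count, i.e. after the branch was enabled.
	 */
	if (inc_not_zero(&sched_count))
		return;

	pthread_mutex_lock(&sched_mutex);
	if (atomic_load(&sched_count) == 0)
		atomic_store(&sched_events_enabled, true);
		                        /* "static_branch_enable()" */
	/*
	 * Only now make the count non-zero.  Holding the mutex ensures the
	 * enable above completes *before* this increment becomes visible,
	 * which is exactly what the new comment in account_event() states.
	 */
	atomic_fetch_add(&sched_count, 1);
	pthread_mutex_unlock(&sched_mutex);

	assert(atomic_load(&sched_events_enabled));
}

int main(void)
{
	account_sched_event();  /* slow path: enable, then publish count */
	account_sched_event();  /* fast path: inc_not_zero() succeeds    */
	assert(atomic_load(&sched_count) == 2);
	return 0;
}

Built with e.g. "cc -std=c11 -pthread", this compiles and runs; threads
calling account_sched_event() concurrently should preserve the invariant
checked by the assert, mirroring why the kernel takes perf_sched_mutex
instead of relying on a bare atomic increment.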