KVM: x86/pmu: Prevent zero period event from being repeatedly released
author Like Xu <likexu@tencent.com>
Wed, 7 Dec 2022 07:15:05 +0000 (15:15 +0800)
committer Paolo Bonzini <pbonzini@redhat.com>
Fri, 23 Dec 2022 17:06:45 +0000 (12:06 -0500)
The current vPMU can reuse the same pmc->perf_event for the same
hardware event via pmc_pause/resume_counter(), but this optimization
does not apply to some TSX events (e.g., "event=0x3c,in_tx=1,
in_tx_cp=1"), whose event->attr.sample_period is legally zero at
creation. For such non-sampling events the call to perf_event_period()
is meaningless (there is no sample period to adjust) and fails, causing
these otherwise reusable perf_events to be repeatedly released and
re-created.

Avoid releasing zero-sample_period events by checking
is_sampling_event(), preserving the existing pause/resume reuse
optimization.

Signed-off-by: Like Xu <likexu@tencent.com>
Message-Id: <20221207071506.15733-2-likexu@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
arch/x86/kvm/pmu.c
arch/x86/kvm/pmu.h

index 684393c..eb59462 100644
@@ -238,7 +238,8 @@ static bool pmc_resume_counter(struct kvm_pmc *pmc)
                return false;
 
        /* recalibrate sample period and check if it's accepted by perf core */
-       if (perf_event_period(pmc->perf_event,
+       if (is_sampling_event(pmc->perf_event) &&
+           perf_event_period(pmc->perf_event,
                              get_sample_period(pmc, pmc->counter)))
                return false;
 
index 85ff3c0..cdb9100 100644
@@ -140,7 +140,8 @@ static inline u64 get_sample_period(struct kvm_pmc *pmc, u64 counter_value)
 
 static inline void pmc_update_sample_period(struct kvm_pmc *pmc)
 {
-       if (!pmc->perf_event || pmc->is_paused)
+       if (!pmc->perf_event || pmc->is_paused ||
+           !is_sampling_event(pmc->perf_event))
                return;
 
        perf_event_period(pmc->perf_event,