KVM: x86: Optimize kvm->lock and SRCU interaction (KVM_SET_PMU_EVENT_FILTER)
author Michal Luczaj <mhal@rbox.co>
Sat, 7 Jan 2023 00:12:51 +0000 (01:12 +0100)
committer Sean Christopherson <seanjc@google.com>
Fri, 3 Feb 2023 23:19:22 +0000 (15:19 -0800)
Reduce time spent holding kvm->lock: unlock the mutex before calling
synchronize_srcu_expedited().  There is no need to hold kvm->lock until
all vCPUs have been kicked; KVM only needs to guarantee that all vCPUs
will switch to the new filter before exiting to userspace.  Protecting
the write to __reprogram_pmi is also unnecessary: a vCPU may process
a set bit before receiving the final KVM_REQ_PMU, but the per-vCPU writes
are guaranteed to occur after all vCPUs have switched to the new filter.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Michal Luczaj <mhal@rbox.co>
Link: https://lore.kernel.org/r/20230107001256.2365304-2-mhal@rbox.co
[sean: expand changelog]
Signed-off-by: Sean Christopherson <seanjc@google.com>
arch/x86/kvm/pmu.c

index d939d3b..58e5a45 100644
@@ -634,6 +634,7 @@ int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
        mutex_lock(&kvm->lock);
        filter = rcu_replace_pointer(kvm->arch.pmu_event_filter, filter,
                                     mutex_is_locked(&kvm->lock));
+       mutex_unlock(&kvm->lock);
        synchronize_srcu_expedited(&kvm->srcu);
 
        BUILD_BUG_ON(sizeof(((struct kvm_pmu *)0)->reprogram_pmi) >
@@ -644,8 +645,6 @@ int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
 
        kvm_make_all_cpus_request(kvm, KVM_REQ_PMU);
 
-       mutex_unlock(&kvm->lock);
-
        r = 0;
 cleanup:
        kfree(filter);
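
For reference, here is a simplified sketch of the resulting flow in
kvm_vm_ioctl_set_pmu_event_filter(), reconstructed from the hunks above.
It is an approximation, not the verbatim kernel code: filter validation,
error handling, and the BUILD_BUG_ON are elided.

	/* Publish the new filter, then drop kvm->lock immediately. */
	mutex_lock(&kvm->lock);
	filter = rcu_replace_pointer(kvm->arch.pmu_event_filter, filter,
				     mutex_is_locked(&kvm->lock));
	mutex_unlock(&kvm->lock);

	/*
	 * Wait for in-flight SRCU readers to finish; any vCPU that enters
	 * a new read-side critical section after this returns will observe
	 * the new filter.
	 */
	synchronize_srcu_expedited(&kvm->srcu);

	/*
	 * Mark every counter on every vCPU as stale and kick all vCPUs.
	 * These writes need no lock: a vCPU may consume a set bit before
	 * the final KVM_REQ_PMU arrives, but the writes happen after the
	 * filter switch is visible to all vCPUs.
	 */
	kvm_for_each_vcpu(i, vcpu, kvm)
		atomic64_set(&vcpu_to_pmu(vcpu)->__reprogram_pmi, -1ull);

	kvm_make_all_cpus_request(kvm, KVM_REQ_PMU);

Note that the lockdep condition passed to rcu_replace_pointer() is still
evaluated while the mutex is held, so moving the unlock below it keeps
that assertion meaningful.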