KVM: x86/mmu: Move fast_page_fault() call above mmu_topup_memory_caches()
Author:     Sean Christopherson <sean.j.christopherson@intel.com>
AuthorDate: Fri, 3 Jul 2020 02:35:30 +0000 (19:35 -0700)
Commit:     Paolo Bonzini <pbonzini@redhat.com>
CommitDate: Thu, 9 Jul 2020 17:29:39 +0000 (13:29 -0400)
Avoid refilling the memory caches and potentially slow reclaim/swap when
handling a fast page fault, which does not need to allocate any new
objects.

Reviewed-by: Ben Gardon <bgardon@google.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200703023545.8771-7-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index a8eddb8..d851f8c 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4120,6 +4120,9 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
        if (page_fault_handle_page_track(vcpu, error_code, gfn))
                return RET_PF_EMULATE;
 
+       if (fast_page_fault(vcpu, gpa, error_code))
+               return RET_PF_RETRY;
+
        r = mmu_topup_memory_caches(vcpu);
        if (r)
                return r;
@@ -4127,9 +4130,6 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
        if (lpage_disallowed)
                max_level = PG_LEVEL_4K;
 
-       if (fast_page_fault(vcpu, gpa, error_code))
-               return RET_PF_RETRY;
-
        mmu_seq = vcpu->kvm->mmu_notifier_seq;
        smp_rmb();