KVM: VMX: optimize #PF injection when MAXPHYADDR does not match
authorPaolo Bonzini <pbonzini@redhat.com>
Fri, 10 Jul 2020 15:48:10 +0000 (17:48 +0200)
committerPaolo Bonzini <pbonzini@redhat.com>
Fri, 10 Jul 2020 21:01:52 +0000 (17:01 -0400)
Ignore non-present page faults, since those cannot have reserved
bits set: when MAXPHYADDR is mismatched, KVM only needs to intercept
faults that report reserved physical-address bits, and the RSVD bit
in the #PF error code is only set together with the present bit.

When running access.flat with "-cpu Haswell,phys-bits=36", the
number of trapped page faults goes down from 8872644 to 3978948.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20200710154811.418214-9-mgamal@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
arch/x86/kvm/vmx/vmx.c

index 5518f75c9b195e5a2e086140b3c71888a9f181c9..962a78c7dde5818246db943da24fc755a189e4e5 100644 (file)
@@ -4355,6 +4355,16 @@ static void init_vmcs(struct vcpu_vmx *vmx)
                vmx->pt_desc.guest.output_mask = 0x7F;
                vmcs_write64(GUEST_IA32_RTIT_CTL, 0);
        }
+
+       /*
+        * If EPT is enabled, #PF is only trapped if MAXPHYADDR is mismatched
+        * between guest and host.  In that case we only care about present
+        * faults.
+        */
+       if (enable_ept) {
+               vmcs_write32(PAGE_FAULT_ERROR_CODE_MASK, PFERR_PRESENT_MASK);
+               vmcs_write32(PAGE_FAULT_ERROR_CODE_MATCH, PFERR_PRESENT_MASK);
+       }
 }
 
 static void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)