KVM: x86/mmu: Assert on @mmu in the __kvm_mmu_invalidate_addr()
author Like Xu <likexu@tencent.com>
Tue, 23 May 2023 03:29:47 +0000 (11:29 +0800)
committer Sean Christopherson <seanjc@google.com>
Fri, 26 May 2023 18:24:52 +0000 (11:24 -0700)
Add an assertion to track that "mmu == vcpu->arch.mmu" is always true in
the context of __kvm_mmu_invalidate_addr().  for_each_shadow_entry_using_root()
and kvm_sync_spte() operate on vcpu->arch.mmu, but the only reason that
doesn't cause explosions is that handle_invept() frees roots instead of
doing a manual invalidation.  As of now, there are no major roadblocks to
switching INVEPT emulation over to use kvm_mmu_invalidate_addr().
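
For reference, kvm_sync_spte() resolves the sync hook through the vCPU's
current MMU rather than through any @mmu a caller might pass down; roughly
(paraphrased from mmu.c around this change, exact code may differ):

	static int kvm_sync_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, int i)
	{
		if (!sp->spt[i])
			return 0;

		/* Always resolves the hook via vcpu->arch.mmu, not a passed-in @mmu. */
		return vcpu->arch.mmu->sync_spte(vcpu, sp, i);
	}

for_each_shadow_entry_using_root() likewise derives its walk level from
vcpu->arch.mmu, so a caller passing a different MMU would walk and sync
the wrong context.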

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Like Xu <likexu@tencent.com>
Link: https://lore.kernel.org/r/20230523032947.60041-1-likexu@tencent.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
arch/x86/kvm/mmu/mmu.c

index c8961f4..258f122 100644
@@ -5797,6 +5797,14 @@ static void __kvm_mmu_invalidate_addr(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu
 
        vcpu_clear_mmio_info(vcpu, addr);
 
+       /*
+        * Walking and synchronizing SPTEs both assume they are operating in
+        * the context of the current MMU, and would need to be reworked if
+        * this is ever used to sync the guest_mmu, e.g. to emulate INVEPT.
+        */
+       if (WARN_ON_ONCE(mmu != vcpu->arch.mmu))
+               return;
+
        if (!VALID_PAGE(root_hpa))
                return;
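
To illustrate what the assertion guards against, a hypothetical future
caller operating on the nested MMU (e.g., the INVEPT emulation path noted
above) would look roughly like this; the address and roots arguments are
placeholders, and the call is not part of this patch:

	/*
	 * Hypothetical: invalidating on behalf of the nested guest_mmu,
	 * which is not vcpu->arch.mmu while L1 is executing INVEPT.
	 */
	kvm_mmu_invalidate_addr(vcpu, &vcpu->arch.guest_mmu, addr, roots);

Until the shadow walk and kvm_sync_spte() honor @mmu, such a call would
trip the new WARN_ON_ONCE() and return before walking any SPTEs.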