KVM: x86/mmu: Commit zap of remaining invalid pages when recovering lpages
author Sean Christopherson <sean.j.christopherson@intel.com>
Wed, 23 Sep 2020 18:37:28 +0000 (11:37 -0700)
committer Paolo Bonzini <pbonzini@redhat.com>
Mon, 28 Sep 2020 11:57:39 +0000 (07:57 -0400)
Call kvm_mmu_commit_zap_page() after exiting the "prepare zap" loop in
kvm_recover_nx_lpages() to finish zapping pages in the unlikely event
that the loop exited due to lpage_disallowed_mmu_pages being empty.
Because the recovery thread drops mmu_lock when rescheduling, a different
thread can empty lpage_disallowed_mmu_pages before to_zap reaches zero,
even though to_zap is derived from the number of disallowed lpages.  In
that case the loop exits with pages still prepared, but not committed, on
invalid_list.
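For context, a simplified sketch of the loop's shape after this patch,
reconstructed from the surrounding context (locals, the recovery-ratio
setup, and the SRCU bookkeeping are elided, so details may differ from
the tree; see arch/x86/kvm/mmu/mmu.c for the real code):

	spin_lock(&kvm->mmu_lock);
	while (to_zap && !list_empty(&kvm->arch.lpage_disallowed_mmu_pages)) {
		sp = list_first_entry(&kvm->arch.lpage_disallowed_mmu_pages,
				      struct kvm_mmu_page,
				      lpage_disallowed_link);
		kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);

		if (!--to_zap || need_resched() || spin_needbreak(&kvm->mmu_lock)) {
			kvm_mmu_commit_zap_page(kvm, &invalid_list);
			if (to_zap)
				/*
				 * Drops and reacquires mmu_lock; another
				 * thread can empty the disallowed-pages
				 * list in the interim.
				 */
				cond_resched_lock(&kvm->mmu_lock);
		}
	}
	/*
	 * Added by this patch: commit any zaps still sitting on
	 * invalid_list if the loop exited on list_empty() before
	 * to_zap hit zero.
	 */
	kvm_mmu_commit_zap_page(kvm, &invalid_list);
	spin_unlock(&kvm->mmu_lock);

Without the trailing commit, pages prepared in iterations that never
reached the inner commit would be left on the stack-local invalid_list,
invalidated but never flushed or freed, when the loop exits early.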

Fixes: 1aa9b9572b105 ("kvm: x86: mmu: Recovery of shattered NX large pages")
Cc: Junaid Shahid <junaids@google.com>
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200923183735.584-2-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
arch/x86/kvm/mmu/mmu.c

index 7ec3d05..340eacf 100644
@@ -6394,6 +6394,7 @@ static void kvm_recover_nx_lpages(struct kvm *kvm)
                                cond_resched_lock(&kvm->mmu_lock);
                }
        }
+       kvm_mmu_commit_zap_page(kvm, &invalid_list);
 
        spin_unlock(&kvm->mmu_lock);
        srcu_read_unlock(&kvm->srcu, rcu_idx);