KVM: x86/mmu: Optimize MMU page cache lookup for fully direct MMUs
authorSean Christopherson <sean.j.christopherson@intel.com>
Tue, 23 Jun 2020 19:40:27 +0000 (12:40 -0700)
committerPaolo Bonzini <pbonzini@redhat.com>
Wed, 8 Jul 2020 20:21:50 +0000 (16:21 -0400)
commitfb58a9c345f645f1774dcf6a36fda169253008ae
tree5cc9719d7359b0e7095d77f4fc6a978f79a6000a
parentac101b7cb17d4a5df1ab735420d0ee3593465dcf
Skip the unsync checks and the write flooding clearing for fully direct
MMUs, which are guaranteed to have no unsync'd or indirect pages (write
flooding detection applies only to indirect pages).  For TDP, this
avoids unnecessary memory reads and writes, and skipping the write
flooding clear also avoids dirtying a cache line (unsync_child_bitmap
itself consumes a full cache line, i.e. write_flooding_count is
guaranteed to be in a different cache line than parent_ptes).

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200623194027.23135-3-sean.j.christopherson@intel.com>
Reviewed-by: Jon Cargille <jcargill@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
arch/x86/kvm/mmu/mmu.c