KVM: x86/mmu: Add comment on try_cmpxchg64 usage in tdp_mmu_set_spte_atomic
author		Uros Bizjak <ubizjak@gmail.com>
		Tue, 25 Apr 2023 11:39:32 +0000 (13:39 +0200)
committer	Sean Christopherson <seanjc@google.com>
		Fri, 26 May 2023 18:24:52 +0000 (11:24 -0700)
Commit aee98a6838d5 ("KVM: x86/mmu: Use try_cmpxchg64 in
tdp_mmu_set_spte_atomic") removed the comment stating that iter->old_spte
is updated when a different logical CPU modifies the page table entry.
Although try_cmpxchg() does this implicitly, it doesn't hurt to state
the fact explicitly in a restored comment.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: David Matlack <dmatlack@google.com>
Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Link: https://lore.kernel.org/r/20230425113932.3148-1-ubizjak@gmail.com
[sean: extend comment above try_cmpxchg64()]
Signed-off-by: Sean Christopherson <seanjc@google.com>
arch/x86/kvm/mmu/tdp_mmu.c

index 0834021..512163d 100644 (file)
@@ -592,7 +592,10 @@ static inline int tdp_mmu_set_spte_atomic(struct kvm *kvm,
 
        /*
         * Note, fast_pf_fix_direct_spte() can also modify TDP MMU SPTEs and
-        * does not hold the mmu_lock.
+        * does not hold the mmu_lock.  On failure, i.e. if a different logical
+        * CPU modified the SPTE, try_cmpxchg64() updates iter->old_spte with
+        * the current value, so the caller operates on fresh data, e.g. if it
+        * retries tdp_mmu_set_spte_atomic().
         */
        if (!try_cmpxchg64(sptep, &iter->old_spte, new_spte))
                return -EBUSY;
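
For illustration only (not part of the patch): below is a minimal userspace
sketch of the retry pattern the restored comment describes.  It uses C11's
atomic_compare_exchange_strong(), which has the same contract as the kernel's
try_cmpxchg64(): on failure it writes the value it actually observed back into
the "expected" argument, so the caller retries with fresh data instead of
re-reading the location itself.  The set_spte_atomic() helper, the global
"spte" variable, and the 0x1 flag are made up for the example and are not the
kernel implementation.

    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    static _Atomic uint64_t spte;       /* stand-in for the SPTE slot */

    /*
     * Mimics the tdp_mmu_set_spte_atomic() contract: returns 0 on success,
     * -1 (think -EBUSY) if another CPU changed the SPTE.  On failure,
     * *old_spte is refreshed with the value that was actually observed.
     */
    static int set_spte_atomic(uint64_t *old_spte, uint64_t new_spte)
    {
        if (!atomic_compare_exchange_strong(&spte, old_spte, new_spte))
            return -1;
        return 0;
    }

    int main(void)
    {
        uint64_t old = atomic_load(&spte);

        /* Retry loop: on each failure "old" already holds the current value. */
        while (set_spte_atomic(&old, old | 0x1))
            ;

        printf("final spte: %#llx\n", (unsigned long long)atomic_load(&spte));
        return 0;
    }

The point of the comment, mirrored here, is that the failure path does not
need an explicit re-read of the SPTE: the compare-exchange itself hands the
caller the up-to-date value to operate on for the next attempt.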