platform/kernel/linux-starfive.git
17 months ago: Merge tag 'kvm-x86-selftests-6.3' of https://github.com/kvm-x86/linux into HEAD
Paolo Bonzini [Wed, 15 Feb 2023 13:34:32 +0000 (08:34 -0500)]
Merge tag 'kvm-x86-selftests-6.3' of https://github.com/kvm-x86/linux into HEAD

KVM selftests changes for 6.3:

 - Cache the CPU vendor (AMD vs. Intel) and use the info to emit the correct
   hypercall instruction instead of relying on KVM to patch in VMMCALL

 - A variety of one-off cleanups and fixes

17 months ago: Merge tag 'kvm-x86-pmu-6.3' of https://github.com/kvm-x86/linux into HEAD
Paolo Bonzini [Wed, 15 Feb 2023 13:23:24 +0000 (08:23 -0500)]
Merge tag 'kvm-x86-pmu-6.3' of https://github.com/kvm-x86/linux into HEAD

KVM x86 PMU changes for 6.3:

 - Add support for creating masked events for the PMU filter to allow
   userspace to heavily restrict what events the guest can use without
   needing to create an absurd number of events

 - Clean up KVM's handling of "PMU MSRs to save", especially when vPMU
   support is disabled

 - Add PEBS support for Intel SPR

17 months ago: Merge tag 'kvm-x86-mmu-6.3' of https://github.com/kvm-x86/linux into HEAD
Paolo Bonzini [Wed, 15 Feb 2023 13:22:44 +0000 (08:22 -0500)]
Merge tag 'kvm-x86-mmu-6.3' of https://github.com/kvm-x86/linux into HEAD

KVM x86 MMU changes for 6.3:

 - Fix and clean up the range-based TLB flushing code, used when KVM is
   running on Hyper-V

 - A few one-off cleanups

17 months ago: Merge tag 'kvm-x86-misc-6.3' of https://github.com/kvm-x86/linux into HEAD
Paolo Bonzini [Wed, 15 Feb 2023 13:22:09 +0000 (08:22 -0500)]
Merge tag 'kvm-x86-misc-6.3' of https://github.com/kvm-x86/linux into HEAD

KVM x86 changes for 6.3:

 - Advertise support for Intel's fancy new fast REP string features

 - Fix a double-shootdown issue in the emergency reboot code

 - Ensure GIF=1 and disable SVM during an emergency reboot, i.e. give SVM
   similar treatment to VMX

 - Update Xen's TSC info CPUID sub-leaves as appropriate

 - Add support for Hyper-V's extended hypercalls, where "support" at this
   point is just forwarding the hypercalls to userspace

 - Clean up the kvm->lock vs. kvm->srcu sequences when updating the PMU and
   MSR filters

 - One-off fixes and cleanups

17 months ago: Merge tag 'kvm-x86-generic-6.3' of https://github.com/kvm-x86/linux into HEAD
Paolo Bonzini [Wed, 15 Feb 2023 12:30:43 +0000 (07:30 -0500)]
Merge tag 'kvm-x86-generic-6.3' of https://github.com/kvm-x86/linux into HEAD

Common KVM changes for 6.3:

 - Account allocations in generic kvm_arch_alloc_vm()

 - Fix a typo and a stale comment

 - Fix a memory leak if coalesced MMIO unregistration fails

18 months ago: KVM: selftests: Remove duplicate macro definition
Shaoqin Huang [Wed, 8 Feb 2023 07:18:00 +0000 (15:18 +0800)]
KVM: selftests: Remove duplicate macro definition

The KVM_GUEST_PAGE_TABLE_MIN_PADDR macro has been defined in
include/kvm_util_base.h. So remove the duplicate definition in
lib/kvm_util.c.

Fixes: cce0c23dd944 ("KVM: selftests: Add wrapper to allocate page table page")
Signed-off-by: Shaoqin Huang <shahuang@redhat.com>
Link: https://lore.kernel.org/r/20230208071801.68620-1-shahuang@redhat.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: selftests: Clean up misnomers in xen_shinfo_test
Michal Luczaj [Mon, 6 Feb 2023 20:24:30 +0000 (21:24 +0100)]
KVM: selftests: Clean up misnomers in xen_shinfo_test

As discussed[*], relabel the poorly named structs to align with the
current KVM nomenclature.

Old names are a leftover from before commit 52491a38b2c2 ("KVM:
Initialize gfn_to_pfn_cache locks in dedicated helper"), which, among
other things, introduced kvm_gpc_init() and renamed
kvm_gfn_to_pfn_cache_init()/_destroy() to kvm_gpc_activate()/
_deactivate(), partly in an effort to avoid implying that the cache
really is destroyed/freed.

While at it, get rid of #define GPA_INVALID, which, being used as a GFN,
is not only misnamed but also unnecessarily reinvents a UAPI constant.

No functional change intended.

[*] https://lore.kernel.org/r/Y5yZ6CFkEMBqyJ6v@google.com

Signed-off-by: Michal Luczaj <mhal@rbox.co>
Link: https://lore.kernel.org/r/20230206202430.1898057-1-mhal@rbox.co
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: selftests: KVM: Replace optarg with arg in guest_modes_cmdline
Shaoqin Huang [Thu, 2 Feb 2023 02:57:15 +0000 (10:57 +0800)]
selftests: KVM: Replace optarg with arg in guest_modes_cmdline

The parameter 'arg' in guest_modes_cmdline() is currently unused, while
the global 'optarg' is referenced directly; replace optarg with arg.

While at it, change strtoul() to atoi_non_negative(), since a guest
mode ID can never be negative.

Signed-off-by: Shaoqin Huang <shahuang@redhat.com>
Fixes: e42ac777d661 ("KVM: selftests: Factor out guest mode code")
Reviewed-by: Andrew Jones <andrew.jones@linux.dev>
Reviewed-by: Vipin Sharma <vipinsh@google.com>
Link: https://lore.kernel.org/r/20230202025716.216323-1-shahuang@redhat.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: update code comment in struct kvm_vcpu
Wang Yong [Thu, 2 Feb 2023 08:13:42 +0000 (08:13 +0000)]
KVM: update code comment in struct kvm_vcpu

Commit c5b077549136 ("KVM: Convert the kvm->vcpus array to a xarray")
changed the kvm->vcpus array to an xarray, so update the stale code
comment on kvm_vcpu->vcpu_idx accordingly.

Signed-off-by: Wang Yong <yongw.kernel@gmail.com>
Link: https://lore.kernel.org/r/20230202081342.856687-1-yongw.kernel@gmail.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: selftests: Assign guest page size in sync area early in memslot_perf_test
Gavin Shan [Wed, 18 Jan 2023 09:21:33 +0000 (17:21 +0800)]
KVM: selftests: Assign guest page size in sync area early in memslot_perf_test

The guest page size in the synchronization area is needed by all test
cases. So it's reasonable to set it in the unified preparation function
(prepare_vm()).

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Link: https://lore.kernel.org/r/20230118092133.320003-3-gshan@redhat.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: selftests: Remove duplicate VM creation in memslot_perf_test
Gavin Shan [Wed, 18 Jan 2023 09:21:32 +0000 (17:21 +0800)]
KVM: selftests: Remove duplicate VM creation in memslot_perf_test

Remove a spurious call to __vm_create_with_one_vcpu() that was introduced
by a merge gone sideways.

Fixes: eb5618911af0 ("Merge tag 'kvmarm-6.2' of https://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD")
Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Link: https://lore.kernel.org/r/20230118092133.320003-2-gshan@redhat.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: x86: Simplify msr_io()
Michal Luczaj [Sat, 7 Jan 2023 00:12:56 +0000 (01:12 +0100)]
KVM: x86: Simplify msr_io()

As of commit bccf2150fe62 ("KVM: Per-vcpu inodes"), __msr_io() doesn't
return a negative value. Remove unnecessary checks.

Signed-off-by: Michal Luczaj <mhal@rbox.co>
Link: https://lore.kernel.org/r/20230107001256.2365304-7-mhal@rbox.co
[sean: call out commit which left behind the unnecessary check]
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: x86: Remove unnecessary initialization in kvm_vm_ioctl_set_msr_filter()
Michal Luczaj [Sat, 7 Jan 2023 00:12:55 +0000 (01:12 +0100)]
KVM: x86: Remove unnecessary initialization in kvm_vm_ioctl_set_msr_filter()

Do not initialize the value of `r`, as it will be overwritten.

Signed-off-by: Michal Luczaj <mhal@rbox.co>
Link: https://lore.kernel.org/r/20230107001256.2365304-6-mhal@rbox.co
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: x86: Explicitly state lockdep condition of msr_filter update
Michal Luczaj [Sat, 7 Jan 2023 00:12:54 +0000 (01:12 +0100)]
KVM: x86: Explicitly state lockdep condition of msr_filter update

Replace `1` with the actual mutex_is_locked() check.
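
A minimal sketch of the change, with the surrounding variable names
assumed for illustration:

  /* Document the real locking requirement instead of passing '1'. */
  old = rcu_replace_pointer(kvm->arch.msr_filter, new,
                            mutex_is_locked(&kvm->lock));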

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Michal Luczaj <mhal@rbox.co>
Link: https://lore.kernel.org/r/20230107001256.2365304-5-mhal@rbox.co
[sean: delete the comment that explained the hardocded '1']
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: x86: Simplify msr_filter update
Michal Luczaj [Sat, 7 Jan 2023 00:12:53 +0000 (01:12 +0100)]
KVM: x86: Simplify msr_filter update

Replace srcu_dereference()+rcu_assign_pointer() sequence with
a single rcu_replace_pointer().
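
A rough before/after sketch, with variable names assumed:

  /* Before: dereference and assign as two separate steps. */
  old = srcu_dereference(kvm->arch.msr_filter, &kvm->srcu);
  rcu_assign_pointer(kvm->arch.msr_filter, new);

  /* After: a single call that also returns the old pointer. */
  old = rcu_replace_pointer(kvm->arch.msr_filter, new,
                            mutex_is_locked(&kvm->lock));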

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Michal Luczaj <mhal@rbox.co>
Link: https://lore.kernel.org/r/20230107001256.2365304-4-mhal@rbox.co
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: x86: Optimize kvm->lock and SRCU interaction (KVM_X86_SET_MSR_FILTER)
Michal Luczaj [Sat, 7 Jan 2023 00:12:52 +0000 (01:12 +0100)]
KVM: x86: Optimize kvm->lock and SRCU interaction (KVM_X86_SET_MSR_FILTER)

Reduce time spent holding kvm->lock: unlock mutex before calling
synchronize_srcu().  There is no need to hold kvm->lock until all vCPUs
have been kicked, KVM only needs to guarantee that all vCPUs will switch
to the new filter before exiting to userspace.
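
The resulting shape, sketched with assumed names:

  mutex_lock(&kvm->lock);
  old = rcu_replace_pointer(kvm->arch.msr_filter, new,
                            mutex_is_locked(&kvm->lock));
  kvm_make_all_cpus_request(kvm, KVM_REQ_MSR_FILTER_CHANGED);
  mutex_unlock(&kvm->lock);

  synchronize_srcu(&kvm->srcu);  /* now outside kvm->lock */
  kvm_free_msr_filter(old);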

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Michal Luczaj <mhal@rbox.co>
Link: https://lore.kernel.org/r/20230107001256.2365304-3-mhal@rbox.co
[sean: expand changelog]
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: x86: Optimize kvm->lock and SRCU interaction (KVM_SET_PMU_EVENT_FILTER)
Michal Luczaj [Sat, 7 Jan 2023 00:12:51 +0000 (01:12 +0100)]
KVM: x86: Optimize kvm->lock and SRCU interaction (KVM_SET_PMU_EVENT_FILTER)

Reduce time spent holding kvm->lock: unlock mutex before calling
synchronize_srcu_expedited().  There is no need to hold kvm->lock until
all vCPUs have been kicked, KVM only needs to guarantee that all vCPUs
will switch to the new filter before exiting to userspace.  Protecting
the write to __reprogram_pmi is also unnecessary as a vCPU may process
a set bit before receiving the final KVM_REQ_PMU, but the per-vCPU writes
are guaranteed to occur after all vCPUs have switched to the new filter.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Michal Luczaj <mhal@rbox.co>
Link: https://lore.kernel.org/r/20230107001256.2365304-2-mhal@rbox.co
[sean: expand changelog]
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: x86/emulator: Fix comment in __load_segment_descriptor()
Michal Luczaj [Thu, 26 Jan 2023 01:34:04 +0000 (02:34 +0100)]
KVM: x86/emulator: Fix comment in __load_segment_descriptor()

The comment refers to the same condition twice. Make it reflect what the
code actually does.

No functional change intended.

Signed-off-by: Michal Luczaj <mhal@rbox.co>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Link: https://lore.kernel.org/r/20230126013405.2967156-3-mhal@rbox.co
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: x86/emulator: Fix segment load privilege level validation
Michal Luczaj [Thu, 26 Jan 2023 01:34:03 +0000 (02:34 +0100)]
KVM: x86/emulator: Fix segment load privilege level validation

The Intel SDM describes the steps taken by the CPU to verify whether a
memory segment can actually be used at a given privilege level. Loading
DS/ES/FS/GS involves checking the segment's type as well as making sure
that neither the selector's RPL nor the caller's CPL is greater than
the segment's DPL.

The emulator implements Intel's pseudocode in
__load_segment_descriptor(), even quoting the pseudocode in the
comments. Although the pseudocode is faithfully translated, the
implementation is incorrect, most likely because the SDM itself was
wrong at the time. Fix the emulator's logic and update the pseudocode
in the comment. Historical notes follow.

Emulator code for handling segment descriptors appears to have been
introduced in March 2010 in commit 38ba30ba51a0 ("KVM: x86 emulator:
Emulate task switch in emulator.c"). Intel SDM Vol 2A: Instruction Set
Reference, A-M (Order Number: 253666-034US, _March 2010_) lists the
steps for loading segment registers in section related to MOV
instruction:

  IF DS, ES, FS, or GS is loaded with non-NULL selector
  THEN
    IF segment selector index is outside descriptor table limits
    or segment is not a data or readable code segment
    or ((segment is a data or nonconforming code segment)
    and (both RPL and CPL > DPL))   <---
      THEN #GP(selector); FI;

This is precisely what __load_segment_descriptor() quotes and
implements. But there's a twist; a few SDM revisions later
(253667-044US), in August 2012, the snippet above becomes:

  IF DS, ES, FS, or GS is loaded with non-NULL selector
  THEN
    IF segment selector index is outside descriptor table limits
    or segment is not a data or readable code segment
    or ((segment is a data or nonconforming code segment)
      [note: missing or superfluous parenthesis?]
    or ((RPL > DPL) and (CPL > DPL))   <---
      THEN #GP(selector); FI;

Many SDMs later (253667-065US), in December 2017, pseudocode reaches
what seems to be its final form:

  IF DS, ES, FS, or GS is loaded with non-NULL selector
  THEN
    IF segment selector index is outside descriptor table limits
    OR segment is not a data or readable code segment
    OR ((segment is a data or nonconforming code segment)
        AND ((RPL > DPL) or (CPL > DPL)))   <---
      THEN #GP(selector); FI;

which also matches the behavior described in AMD's APM, which states that
a #GP occurs if:

  The DS, ES, FS, or GS register was loaded and the segment pointed to
  was a data or non-conforming code segment, but the RPL or CPL was
  greater than the DPL.
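
In C terms, the check changes roughly as follows (a sketch for
data/nonconforming code segments, not the exact emulator code):

  /* Old, per the 2010 SDM pseudocode: */
  if (rpl > dpl && cpl > dpl)
          goto exception;   /* #GP(selector) */

  /* New, per the current SDM and AMD's APM: */
  if (rpl > dpl || cpl > dpl)
          goto exception;   /* #GP(selector) */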

Signed-off-by: Michal Luczaj <mhal@rbox.co>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Link: https://lore.kernel.org/r/20230126013405.2967156-2-mhal@rbox.co
[sean: add blurb to changelog calling out AMD agrees]
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: perf/x86/intel: Expose EPT-friendly PEBS for SPR and future models
Like Xu [Wed, 9 Nov 2022 08:28:02 +0000 (16:28 +0800)]
perf/x86/intel: Expose EPT-friendly PEBS for SPR and future models

According to the Intel SDM, EPT-friendly PEBS is supported on all
platforms after ICX and ADL, and on future platforms with PEBS format 5.

Currently the only in-kernel user of this capability is KVM, which has
very limited support for hybrid core PMUs, so ADL and its successors do
not currently expose this capability. When both hybrid cores and PEBS
format 5 are present, KVM will decide on its own merits.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: linux-perf-users@vger.kernel.org
Suggested-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Like Xu <likexu@tencent.com>
Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20221109082802.27543-4-likexu@tencent.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: x86/pmu: Add PDIR++ and PDist support for SPR and later models
Like Xu [Wed, 9 Nov 2022 08:28:01 +0000 (16:28 +0800)]
KVM: x86/pmu: Add PDIR++ and PDist support for SPR and later models

The PEBS capability on SPR is basically the same as Ice Lake Server,
with the exception of two special facilities that have been enhanced
and require special handling.

Upon triggering a PEBS assist, there will be a finite delay between the
time the counter overflows and when the microcode starts to carry out
its data collection obligations. Even if the delay is constant in core
clock space, it invariably manifests as variable "skids" in instruction
address space.

On the Ice Lake Server, the Precise Distribution of Instructions Retire
(PDIR) facility mitigates the "skid" problem by providing an early
indication of when the counter is about to overflow. On SPR, the PDIR
counter available (Fixed 0) is unchanged, but the capability is enhanced
to Instruction-Accurate PDIR (PDIR++), where PEBS is taken on the
next instruction after the one that caused the overflow.

SPR also introduces a new Precise Distribution (PDist) facility only on
general programmable counter 0. Per Intel SDM, PDist eliminates any
skid or shadowing effects from PEBS. With PDist, the PEBS record will
be generated precisely upon completion of the instruction or operation
that causes the counter to overflow (there is no "wait for next occurrence"
by default).

In terms of KVM handling, when the guest accesses these special
counters, KVM needs to request the same-index counters via the
perf_event kernel subsystem to ensure that the guest uses the correct
PEBS hardware counter (PDIR++ or PDist). This is mainly achieved by
adjusting the event's precise level to the maximum; the semantics of
this magic number are defined by the internal software context of
perf_event, and it is also backwards compatible as part of the
userspace interface.

Opportunistically, refine the confusing comments on TNT+, as the only
platforms that currently support pebs_ept are Ice Lake Server and SPR
(GLC+).

Signed-off-by: Like Xu <likexu@tencent.com>
Link: https://lore.kernel.org/r/20221109082802.27543-3-likexu@tencent.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: selftests: Test Hyper-V extended hypercall exit to userspace
Vipin Sharma [Mon, 12 Dec 2022 18:37:20 +0000 (10:37 -0800)]
KVM: selftests: Test Hyper-V extended hypercall exit to userspace

Hyper-V extended hypercalls exit to userspace by default. Verify that
userspace gets the call, update the result there, and then verify in
the guest that the correct result is received.

Add KVM_EXIT_HYPERV to the list of known exit reasons so that errors
generate pretty strings.

Signed-off-by: Vipin Sharma <vipinsh@google.com>
Reviewed-by: David Matlack <dmatlack@google.com>
Link: https://lore.kernel.org/r/20221212183720.4062037-14-vipinsh@google.com
[sean: add KVM_EXIT_HYPERV to exit_reasons_known]
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: selftests: Replace hardcoded Linux OS id with HYPERV_LINUX_OS_ID
Vipin Sharma [Mon, 12 Dec 2022 18:37:18 +0000 (10:37 -0800)]
KVM: selftests: Replace hardcoded Linux OS id with HYPERV_LINUX_OS_ID

Use the HYPERV_LINUX_OS_ID macro instead of the hardcoded 0x8100 << 48.

Signed-off-by: Vipin Sharma <vipinsh@google.com>
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Link: https://lore.kernel.org/r/20221212183720.4062037-12-vipinsh@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: selftests: Test Hyper-V extended hypercall enablement
Vipin Sharma [Mon, 12 Dec 2022 18:37:17 +0000 (10:37 -0800)]
KVM: selftests: Test Hyper-V extended hypercall enablement

Test the Hyper-V extended hypercall HV_EXT_CALL_QUERY_CAPABILITIES
(0x8001), covering both the access-denied and the invalid-parameter
cases.

Access is denied if CPUID.0x40000003.EBX BIT(20) is not set, and the
parameter is invalid if the call has the fast bit set.

Signed-off-by: Vipin Sharma <vipinsh@google.com>
Link: https://lore.kernel.org/r/20221212183720.4062037-11-vipinsh@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: x86: hyper-v: Add extended hypercall support in Hyper-V
Vipin Sharma [Mon, 12 Dec 2022 18:37:16 +0000 (10:37 -0800)]
KVM: x86: hyper-v: Add extended hypercall support in Hyper-V

Add support for extended hypercalls in Hyper-V. Hyper-V TLFS 6.0b
describes hypercalls above call code 0x8000 as extended hypercalls.

A Hyper-V guest VM discovers the availability of extended hypercalls
via CPUID.0x40000003.EBX BIT(20). If the bit is set, the guest can
issue extended hypercalls.

All extended hypercalls will exit to userspace by default. This allows
for easy support of future hypercalls without being dependent on KVM
releases.

If a hypercall later needs to be handled in KVM instead of userspace,
KVM can add a capability that userspace can query to determine which
hypercalls KVM handles, and then enable in-KVM handling of those
hypercalls.
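
A hedged sketch of the dispatch (the userspace-exit helper name is
assumed; 0x8001 is HV_EXT_CALL_QUERY_CAPABILITIES, the first extended
call code):

  default:
          /* Extended hypercalls exit to userspace by default. */
          if (hc.code >= HV_EXT_CALL_QUERY_CAPABILITIES)
                  return kvm_hv_hypercall_userspace_exit(vcpu, &hc);
          ret = HV_STATUS_INVALID_HYPERCALL_CODE;
          break;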

Signed-off-by: Vipin Sharma <vipinsh@google.com>
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Link: https://lore.kernel.org/r/20221212183720.4062037-10-vipinsh@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: x86: hyper-v: Use common code for hypercall userspace exit
Vipin Sharma [Mon, 12 Dec 2022 18:37:15 +0000 (10:37 -0800)]
KVM: x86: hyper-v: Use common code for hypercall userspace exit

Remove the duplicated code used to exit to userspace for Hyper-V
hypercalls and exit from a single common place instead.

No functional change intended.

Signed-off-by: Vipin Sharma <vipinsh@google.com>
Suggested-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Link: https://lore.kernel.org/r/20221212183720.4062037-9-vipinsh@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: Destroy target device if coalesced MMIO unregistration fails
Sean Christopherson [Mon, 19 Dec 2022 17:19:24 +0000 (17:19 +0000)]
KVM: Destroy target device if coalesced MMIO unregistration fails

Destroy and free the target coalesced MMIO device if unregistering said
device fails.  As clearly noted in the code, kvm_io_bus_unregister_dev()
does not destroy the target device.

  BUG: memory leak
  unreferenced object 0xffff888112a54880 (size 64):
    comm "syz-executor.2", pid 5258, jiffies 4297861402 (age 14.129s)
    hex dump (first 32 bytes):
      38 c7 67 15 00 c9 ff ff 38 c7 67 15 00 c9 ff ff  8.g.....8.g.....
      e0 c7 e1 83 ff ff ff ff 00 30 67 15 00 c9 ff ff  .........0g.....
    backtrace:
      [<0000000006995a8a>] kmalloc include/linux/slab.h:556 [inline]
      [<0000000006995a8a>] kzalloc include/linux/slab.h:690 [inline]
      [<0000000006995a8a>] kvm_vm_ioctl_register_coalesced_mmio+0x8e/0x3d0 arch/x86/kvm/../../../virt/kvm/coalesced_mmio.c:150
      [<00000000022550c2>] kvm_vm_ioctl+0x47d/0x1600 arch/x86/kvm/../../../virt/kvm/kvm_main.c:3323
      [<000000008a75102f>] vfs_ioctl fs/ioctl.c:46 [inline]
      [<000000008a75102f>] file_ioctl fs/ioctl.c:509 [inline]
      [<000000008a75102f>] do_vfs_ioctl+0xbab/0x1160 fs/ioctl.c:696
      [<0000000080e3f669>] ksys_ioctl+0x76/0xa0 fs/ioctl.c:713
      [<0000000059ef4888>] __do_sys_ioctl fs/ioctl.c:720 [inline]
      [<0000000059ef4888>] __se_sys_ioctl fs/ioctl.c:718 [inline]
      [<0000000059ef4888>] __x64_sys_ioctl+0x6f/0xb0 fs/ioctl.c:718
      [<000000006444fa05>] do_syscall_64+0x9f/0x4e0 arch/x86/entry/common.c:290
      [<000000009a4ed50b>] entry_SYSCALL_64_after_hwframe+0x49/0xbe

  BUG: leak checking failed

Fixes: 5d3c4c79384a ("KVM: Stop looking for coalesced MMIO zones if the bus is destroyed")
Cc: stable@vger.kernel.org
Reported-by: 柳菁峰 <liujingfeng@qianxin.com>
Reported-by: Michal Luczaj <mhal@rbox.co>
Link: https://lore.kernel.org/r/20221219171924.67989-1-seanjc@google.com
Link: https://lore.kernel.org/all/20230118220003.1239032-1-mhal@rbox.co
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: x86/pmu: Provide "error" semantics for unsupported-but-known PMU MSRs
Sean Christopherson [Tue, 24 Jan 2023 23:49:05 +0000 (23:49 +0000)]
KVM: x86/pmu: Provide "error" semantics for unsupported-but-known PMU MSRs

Provide "error" semantics (read zeros, drop writes) for userspace accesses
to MSRs that are ultimately unsupported for whatever reason, but for which
KVM told userspace to save and restore the MSR, i.e. for MSRs that KVM
included in KVM_GET_MSR_INDEX_LIST.

Previously, KVM special cased a few PMU MSRs that were problematic at one
point or another.  Extend the treatment to all PMU MSRs, e.g. to avoid
spurious unsupported accesses.

Note, the logic can also be used for non-PMU MSRs, but as of today only
PMU MSRs can end up being unsupported after KVM told userspace to save and
restore them.
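
A minimal sketch of the semantics (illustrative, not the exact helper):

  /*
   * Userspace access to a known-but-unsupported MSR: reads return
   * zero, writes are dropped, and both report success.
   */
  if (host_initiated) {
          if (!write)
                  *data = 0;
          return 0;
  }
  return KVM_MSR_RET_INVALID;  /* guest accesses still fault */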

Link: https://lore.kernel.org/r/20230124234905.3774678-7-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: x86/pmu: Don't tell userspace to save MSRs for non-existent fixed PMCs
Like Xu [Tue, 24 Jan 2023 23:49:04 +0000 (23:49 +0000)]
KVM: x86/pmu: Don't tell userspace to save MSRs for non-existent fixed PMCs

Limit the set of MSRs for fixed PMU counters based on the number of fixed
counters actually supported by the host so that userspace doesn't waste
time saving and restoring dummy values.

Signed-off-by: Like Xu <likexu@tencent.com>
[sean: split for !enable_pmu logic, drop min(), write changelog]
Link: https://lore.kernel.org/r/20230124234905.3774678-6-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: x86/pmu: Don't tell userspace to save PMU MSRs if PMU is disabled
Sean Christopherson [Tue, 24 Jan 2023 23:49:03 +0000 (23:49 +0000)]
KVM: x86/pmu: Don't tell userspace to save PMU MSRs if PMU is disabled

Omit all PMU MSRs from the "MSRs to save" list if the PMU is disabled so
that userspace doesn't waste time saving and restoring dummy values.  KVM
provides "error" semantics (read zeros, drop writes) for such known-but-
unsupported MSRs, i.e. has fudged around this issue for quite some time.
Keep the "error" semantics as-is for now, the logic will be cleaned up in
a separate patch.

Cc: Aaron Lewis <aaronlewis@google.com>
Cc: Weijiang Yang <weijiang.yang@intel.com>
Link: https://lore.kernel.org/r/20230124234905.3774678-5-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: x86/pmu: Use separate array for defining "PMU MSRs to save"
Sean Christopherson [Tue, 24 Jan 2023 23:49:02 +0000 (23:49 +0000)]
KVM: x86/pmu: Use separate array for defining "PMU MSRs to save"

Move all potential to-be-saved PMU MSRs into a separate array so that a
future patch can easily omit all PMU MSRs from the list when the PMU is
disabled.

No functional change intended.

Link: https://lore.kernel.org/r/20230124234905.3774678-4-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: x86/pmu: Gate all "unimplemented MSR" prints on report_ignored_msrs
Sean Christopherson [Tue, 24 Jan 2023 23:49:01 +0000 (23:49 +0000)]
KVM: x86/pmu: Gate all "unimplemented MSR" prints on report_ignored_msrs

Add helpers to print unimplemented MSR accesses and condition all such
prints on report_ignored_msrs, i.e. honor userspace's request to not
print unimplemented MSRs.  Even though vcpu_unimpl() is ratelimited,
printing can still be problematic, e.g. if a print gets stalled when host
userspace is writing MSRs during live migration, an effective stall can
result in very noticeable disruption in the guest.

E.g. the profile below was taken while calling KVM_SET_MSRS on the PMU
counters while the PMU was disabled in KVM.

  -   99.75%     0.00%  [.] __ioctl
   - __ioctl
      - 99.74% entry_SYSCALL_64_after_hwframe
           do_syscall_64
           sys_ioctl
         - do_vfs_ioctl
            - 92.48% kvm_vcpu_ioctl
               - kvm_arch_vcpu_ioctl
                  - 85.12% kvm_set_msr_ignored_check
                       svm_set_msr
                       kvm_set_msr_common
                       printk
                       vprintk_func
                       vprintk_default
                       vprintk_emit
                       console_unlock
                       call_console_drivers
                       univ8250_console_write
                       serial8250_console_write
                       uart_console_write
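
A sketch of such a helper for the wrmsr side (names assumed; the rdmsr
side is analogous):

  static inline void kvm_pr_unimpl_wrmsr(struct kvm_vcpu *vcpu,
                                         u32 msr, u64 data)
  {
          if (report_ignored_msrs)
                  vcpu_unimpl(vcpu, "unhandled wrmsr: 0x%x data 0x%llx\n",
                              msr, data);
  }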

Reported-by: Aaron Lewis <aaronlewis@google.com>
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Link: https://lore.kernel.org/r/20230124234905.3774678-3-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: x86/pmu: Cap kvm_pmu_cap.num_counters_gp at KVM's internal max
Sean Christopherson [Tue, 24 Jan 2023 23:49:00 +0000 (23:49 +0000)]
KVM: x86/pmu: Cap kvm_pmu_cap.num_counters_gp at KVM's internal max

Limit kvm_pmu_cap.num_counters_gp during kvm_init_pmu_capability() based
on the vendor PMU capabilities so that consuming num_counters_gp naturally
does the right thing.  This fixes a mostly theoretical bug where KVM could
over-report its PMU support in KVM_GET_SUPPORTED_CPUID for leaf 0xA, e.g.
if the number of counters reported by perf is greater than KVM's
hardcoded internal limit.  Incorporating input from the AMD PMU also
avoids over-reporting MSRs to save when running on AMD.
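
The clamp amounts to roughly the following (a sketch; the max-counter
constant is assumed to come from the vendor PMU ops):

  kvm_pmu_cap.num_counters_gp = min(kvm_pmu_cap.num_counters_gp,
                                    pmu_ops->MAX_NR_GP_COUNTERS);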

Link: https://lore.kernel.org/r/20230124234905.3774678-2-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: x86/pmu: Drop event_type and rename "struct kvm_event_hw_type_mapping"
Like Xu [Mon, 5 Dec 2022 12:20:48 +0000 (20:20 +0800)]
KVM: x86/pmu: Drop event_type and rename "struct kvm_event_hw_type_mapping"

After commit ("02791a5c362b KVM: x86/pmu: Use PERF_TYPE_RAW
to merge reprogram_{gp,fixed}counter()"), vPMU starts to directly
use the hardware event eventsel and unit_mask to reprogram perf_event,
and the event_type field in the "struct kvm_event_hw_type_mapping"
is simply no longer being used.

Convert the struct into an anonymous struct as the current name is
obsolete as the structure no longer has any mapping semantics, and
placing the struct definition directly above its sole user makes its
easier to understand what the array is filling in.

Signed-off-by: Like Xu <likexu@tencent.com>
Link: https://lore.kernel.org/r/20221205122048.16023-1-likexu@tencent.com
[sean: drop new comment, use anonymous struct]
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: x86: Replace IS_ERR() with IS_ERR_VALUE()
ye xingchen [Wed, 16 Nov 2022 09:18:43 +0000 (17:18 +0800)]
KVM: x86: Replace IS_ERR() with IS_ERR_VALUE()

Avoid type casts that are needed for IS_ERR() and use
IS_ERR_VALUE() instead.

Signed-off-by: ye xingchen <ye.xingchen@zte.com.cn>
Link: https://lore.kernel.org/r/202211161718436948912@zte.com.cn
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: selftests: Stop assuming stats are contiguous in kvm_binary_stats_test
Jing Zhang [Tue, 17 Jan 2023 22:27:07 +0000 (14:27 -0800)]
KVM: selftests: Stop assuming stats are contiguous in kvm_binary_stats_test

Remove the assumption from kvm_binary_stats_test that all stats are
laid out contiguously in memory. The current stats in KVM are
contiguously laid out in memory, but that may change in the future and
the ABI specifically allows holes in the stats data (since each stat
exposes its own offset).

While here, drop the check that each stat's offset is less than
size_data, as that is now always true by construction.
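
Conceptually, each stat is now read through its own descriptor offset,
e.g. (a sketch against the binary stats ABI):

  /* Value(s) live at data_offset + the stat's own offset. */
  pread(stats_fd, values, desc->size * sizeof(uint64_t),
        header.data_offset + desc->offset);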

Link: https://lore.kernel.org/kvm/20221208193857.4090582-9-dmatlack@google.com/
Fixes: 0b45d58738cd ("KVM: selftests: Add selftest for KVM statistics data binary interface")
Signed-off-by: Jing Zhang <jingzhangos@google.com>
[dmatlack: Re-worded the commit message.]
Signed-off-by: David Matlack <dmatlack@google.com>
Link: https://lore.kernel.org/r/20230117222707.3949974-1-dmatlack@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: x86/xen: Remove unneeded semicolon
zhang songyi [Mon, 19 Dec 2022 06:32:27 +0000 (14:32 +0800)]
KVM: x86/xen: Remove unneeded semicolon

The semicolon after the "}" is unneeded.

Signed-off-by: zhang songyi <zhang.songyi@zte.com.cn>
Link: https://lore.kernel.org/r/202212191432274558936@zte.com.cn
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: selftests: x86: Use host's native hypercall instruction in kvm_hypercall()
Vishal Annapurve [Wed, 11 Jan 2023 00:44:45 +0000 (00:44 +0000)]
KVM: selftests: x86: Use host's native hypercall instruction in kvm_hypercall()

Use the host CPU's native hypercall instruction, i.e. VMCALL vs. VMMCALL,
in kvm_hypercall(), as relying on KVM to patch in the native hypercall on
a #UD for the "wrong" hypercall requires KVM_X86_QUIRK_FIX_HYPERCALL_INSN
to be enabled and flat out doesn't work if guest memory is encrypted with
a private key, e.g. for SEV VMs.
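
Conceptually (a sketch, not the selftest verbatim), the guest-side
helper now picks the opcode itself; VMCALL is 0f 01 c1 and VMMCALL is
0f 01 d9:

  if (host_cpu_is_amd)
          asm volatile(".byte 0x0f,0x01,0xd9" : "=a"(r)
                       : "a"(nr), "b"(a0), "c"(a1), "d"(a2), "S"(a3));
  else
          asm volatile(".byte 0x0f,0x01,0xc1" : "=a"(r)
                       : "a"(nr), "b"(a0), "c"(a1), "d"(a2), "S"(a3));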

Suggested-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: David Matlack <dmatlack@google.com>
Signed-off-by: Vishal Annapurve <vannapurve@google.com>
Link: https://lore.kernel.org/r/20230111004445.416840-4-vannapurve@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: selftests: x86: Cache host CPU vendor (AMD vs. Intel)
Vishal Annapurve [Wed, 11 Jan 2023 00:44:44 +0000 (00:44 +0000)]
KVM: selftests: x86: Cache host CPU vendor (AMD vs. Intel)

Cache the host CPU vendor for userspace and share it with guest code.

All the current callers of this_cpu* actually care about the host CPU,
so update them to check host_cpu_is* instead.

Suggested-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: David Matlack <dmatlack@google.com>
Signed-off-by: Vishal Annapurve <vannapurve@google.com>
Link: https://lore.kernel.org/r/20230111004445.416840-3-vannapurve@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months agoKVM: selftests: x86: Use "this_cpu" prefix for cpu vendor queries
Vishal Annapurve [Wed, 11 Jan 2023 00:44:43 +0000 (00:44 +0000)]
KVM: selftests: x86: Use "this_cpu" prefix for cpu vendor queries

Replace the is_intel/amd_cpu helpers with this_cpu_* helpers to better
convey the intent of querying the vendor of the current CPU.

Suggested-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: David Matlack <dmatlack@google.com>
Signed-off-by: Vishal Annapurve <vannapurve@google.com>
Link: https://lore.kernel.org/r/20230111004445.416840-2-vannapurve@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: selftests: Fix a typo in the vcpu_msrs_set assert
Aaron Lewis [Fri, 9 Dec 2022 20:13:27 +0000 (20:13 +0000)]
KVM: selftests: Fix a typo in the vcpu_msrs_set assert

The assert incorrectly identifies the ioctl being called.  Switch it
from KVM_GET_MSRS to KVM_SET_MSRS.

Fixes: 6ebfef83f03f ("KVM: selftest: Add proper helpers for x86-specific save/restore ioctls")
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221209201326.2781950-1-aaronlewis@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: selftests: kvm_vm_elf_load() and elfhdr_get() should close fd
Reiji Watanabe [Tue, 20 Dec 2022 17:09:21 +0000 (09:09 -0800)]
KVM: selftests: kvm_vm_elf_load() and elfhdr_get() should close fd

kvm_vm_elf_load() and elfhdr_get() each open a file, but they never
close the opened file descriptor.  If a test repeatedly creates and
destroys a VM with __vm_create(), which (directly or indirectly) calls
those two functions, the test might end up hitting an open failure
with EMFILE.  Fix both functions to close the file descriptor.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Reviewed-by: Andrew Jones <andrew.jones@linux.dev>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221220170921.2499209-2-reijiw@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: selftests: Test masked events in PMU filter
Aaron Lewis [Tue, 20 Dec 2022 16:12:36 +0000 (16:12 +0000)]
KVM: selftests: Test masked events in PMU filter

Add testing to show that a pmu event can be filtered with a generalized
match on its unit mask.

These tests set up test cases to demonstrate various ways of filtering
a pmu event that has multiple unit mask values.  They do this by
setting up the filter in KVM with the masked events provided, then
enabling three pmu counters in the guest.  The test then verifies that
the pmu counters agree with which counters should be counting and which
counters should be filtered, for both a sparse filter list and a dense
filter list.

Signed-off-by: Aaron Lewis <aaronlewis@google.com>
Link: https://lore.kernel.org/r/20221220161236.555143-8-aaronlewis@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: selftests: Add testing for KVM_SET_PMU_EVENT_FILTER
Aaron Lewis [Tue, 20 Dec 2022 16:12:35 +0000 (16:12 +0000)]
KVM: selftests: Add testing for KVM_SET_PMU_EVENT_FILTER

Test that masked events are not using invalid bits, and if they are,
ensure the pmu event filter is not accepted by KVM_SET_PMU_EVENT_FILTER.
The only valid bits that can be used for masked events are set when
using KVM_PMU_ENCODE_MASKED_ENTRY() with one exception: If any of the
high bits (35:32) of the event select are set when using Intel, the pmu
event filter will fail.

Also, because validation was not being done prior to the introduction
of masked events, only expect validation to fail when masked events
are used.  E.g. in the first test a filter event with all its bits set
is accepted by KVM_SET_PMU_EVENT_FILTER when flags = 0.

Signed-off-by: Aaron Lewis <aaronlewis@google.com>
Link: https://lore.kernel.org/r/20221220161236.555143-7-aaronlewis@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: selftests: Add flags when creating a pmu event filter
Aaron Lewis [Tue, 20 Dec 2022 16:12:34 +0000 (16:12 +0000)]
KVM: selftests: Add flags when creating a pmu event filter

Now that the flags field can be non-zero, pass it in when creating a
pmu event filter.

This is needed in preparation for testing masked events.

No functional change intended.

Signed-off-by: Aaron Lewis <aaronlewis@google.com>
Link: https://lore.kernel.org/r/20221220161236.555143-6-aaronlewis@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: x86/pmu: Introduce masked events to the pmu event filter
Aaron Lewis [Tue, 20 Dec 2022 16:12:33 +0000 (16:12 +0000)]
KVM: x86/pmu: Introduce masked events to the pmu event filter

When building a list of filter events, it can sometimes be a challenge
to fit all the events needed to adequately restrict the guest into the
limited space available in the pmu event filter.  This stems from the
fact that the pmu event filter requires each event (i.e. event select +
unit mask) to be listed, when the intention might be to restrict the
event select altogether, regardless of its unit mask.  Instead of
increasing the number of filter events in the pmu event filter, add a
new encoding that is able to do a more generalized match on the unit mask.

Introduce masked events as another encoding the pmu event filter
understands.  Masked events have the fields: mask, match, and exclude.
When filtering based on these events, the mask is applied to the guest's
unit mask to see if it matches the match value (i.e. umask & mask ==
match).  The exclude bit can then be used to exclude events from that
match.  E.g. for a given event select, if it's easier to say which unit
mask values shouldn't be filtered, a masked event can be set up to match
all possible unit mask values, then another masked event can be set up to
match the unit mask values that shouldn't be filtered.
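
In rough C terms (illustrative only, not the kernel's exact filtering
loop), a guest unit mask is checked against a masked-event entry as:

  bool hit     = (guest_umask & entry->mask) == entry->match;
  bool allowed = hit && !entry->exclude;  /* counts toward a match */
  bool vetoed  = hit && entry->exclude;   /* removes it from the match */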

Userspace can query to see if this feature exists by looking for the
capability, KVM_CAP_PMU_EVENT_MASKED_EVENTS.

This feature is enabled by setting the flags field in the pmu event
filter to KVM_PMU_EVENT_FLAG_MASKED_EVENTS.

Events can be encoded by using KVM_PMU_ENCODE_MASKED_ENTRY().

It is an error to have a bit set outside the valid bits for a masked
event, and calls to KVM_SET_PMU_EVENT_FILTER will return -EINVAL in
such cases, including the high bits of the event select (35:32) if
called on Intel.

With these updates the filter matching code has been updated to match on
a common event.  Masked events were flexible enough to handle both event
types, so they were used as the common event.  This changes how guest
events get filtered because regardless of the type of event used in the
uAPI, they will be converted to masked events.  Because of this there
could be a slight performance hit because instead of matching the filter
event with a lookup on event select + unit mask, it does a lookup on event
select then walks the unit masks to find the match.  This shouldn't be a
big problem because I would expect the set of common event selects to be
small, and if they aren't the set can likely be reduced by using masked
events to generalize the unit mask.  Using one type of event when
filtering guest events allows for a common code path to be used.

Signed-off-by: Aaron Lewis <aaronlewis@google.com>
Link: https://lore.kernel.org/r/20221220161236.555143-5-aaronlewis@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: x86/pmu: prepare the pmu event filter for masked events
Aaron Lewis [Tue, 20 Dec 2022 16:12:32 +0000 (16:12 +0000)]
KVM: x86/pmu: prepare the pmu event filter for masked events

Refactor check_pmu_event_filter() in preparation for masked events.

No functional changes intended.

Signed-off-by: Aaron Lewis <aaronlewis@google.com>
Link: https://lore.kernel.org/r/20221220161236.555143-4-aaronlewis@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: x86/pmu: Remove impossible events from the pmu event filter
Aaron Lewis [Tue, 20 Dec 2022 16:12:31 +0000 (16:12 +0000)]
KVM: x86/pmu: Remove impossible events from the pmu event filter

If it's not possible for an event in the pmu event filter to match a
pmu event being programmed by the guest, it's pointless to have it in
the list.  Opt for a shorter list by removing those events.

Because this is established uAPI, the pmu event filter can't outright
reject these events as garbage and return an error.  Instead, play
nice and remove them from the list.

Also, opportunistically rewrite the comment when the filter is set to
clarify that it guards against *all* TOCTOU attacks on the verified
data.
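
A compaction sketch, with the validity mask name assumed:

  /* Keep only events whose bits fall within event select + unit mask. */
  for (i = 0, j = 0; i < filter->nevents; i++) {
          if (filter->events[i] & ~valid_event_mask)
                  continue;       /* can never match; drop it */
          filter->events[j++] = filter->events[i];
  }
  filter->nevents = j;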

Signed-off-by: Aaron Lewis <aaronlewis@google.com>
Link: https://lore.kernel.org/r/20221220161236.555143-3-aaronlewis@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: x86/pmu: Correct the mask used in a pmu event filter lookup
Aaron Lewis [Tue, 20 Dec 2022 16:12:30 +0000 (16:12 +0000)]
KVM: x86/pmu: Correct the mask used in a pmu event filter lookup

When checking if a pmu event the guest is attempting to program should
be filtered, only consider the event select + unit mask in that
decision. Use an architecture specific mask to mask out all other bits,
including bits 35:32 on Intel.  Those bits are not part of the event
select and should not be considered in that decision.
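
i.e., roughly (a sketch; the per-vendor event-select mask is assumed to
live in the PMU ops):

  u64 key = eventsel & (pmu_ops->EVENTSEL_EVENT |
                        ARCH_PERFMON_EVENTSEL_UMASK);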

Fixes: 66bb8a065f5a ("KVM: x86: PMU Event Filter")
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
Link: https://lore.kernel.org/r/20221220161236.555143-2-aaronlewis@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: x86/mmu: Use kstrtobool() instead of strtobool()
Christophe JAILLET [Sat, 14 Jan 2023 09:39:11 +0000 (10:39 +0100)]
KVM: x86/mmu: Use kstrtobool() instead of strtobool()

strtobool() is the same as kstrtobool(); however, the latter is more
widely used within the kernel.

In order to remove strtobool() and slightly simplify kstrtox.h, switch
to the other function name.

While at it, include the corresponding header file (<linux/kstrtox.h>).

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Link: https://lore.kernel.org/r/670882aa04dbdd171b46d3b20ffab87158454616.1673689135.git.christophe.jaillet@wanadoo.fr
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: x86/mmu: Cleanup range-based flushing for given page
Hou Wenlong [Mon, 10 Oct 2022 12:19:17 +0000 (20:19 +0800)]
KVM: x86/mmu: Cleanup range-based flushing for given page

Use the new kvm_flush_remote_tlbs_gfn() helper to clean up the call
sites of range-based flushing for a given page, which makes the code
clearer.

Signed-off-by: Hou Wenlong <houwenlong.hwl@antgroup.com>
Link: https://lore.kernel.org/r/593ee1a876ece0e819191c0b23f56b940d6686db.1665214747.git.houwenlong.hwl@antgroup.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: x86/mmu: Fix wrong gfn range of tlb flushing in validate_direct_spte()
Hou Wenlong [Mon, 10 Oct 2022 12:19:16 +0000 (20:19 +0800)]
KVM: x86/mmu: Fix wrong gfn range of tlb flushing in validate_direct_spte()

The spte pointing to the children SP is dropped, so the whole gfn range
covered by the children SP should be flushed. Although Hyper-V may
treat a 1-page flush the same way if the address points to a huge page,
it would still be better to use the correct size of the huge page.

Fixes: c3134ce240eed ("KVM: Replace old tlb flush function with new one to flush a specified range.")
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Hou Wenlong <houwenlong.hwl@antgroup.com>
Link: https://lore.kernel.org/r/5f297c566f7d7ff2ea6da3c66d050f69ce1b8ede.1665214747.git.houwenlong.hwl@antgroup.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: x86/mmu: Fix wrong start gfn of tlb flushing with range
Hou Wenlong [Mon, 10 Oct 2022 12:19:15 +0000 (20:19 +0800)]
KVM: x86/mmu: Fix wrong start gfn of tlb flushing with range

When a spte is dropped, the start gfn of the tlb flush should be the
gfn of the spte, not the base gfn of the SP which contains the spte.
Also introduce a helper function to do range-based flushing when a spte
is dropped, which helps prevent future buggy use of
kvm_flush_remote_tlbs_with_address() in such cases.
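
A sketch of such a helper, built on existing MMU accessors:

  static void kvm_flush_remote_tlbs_sptep(struct kvm *kvm, u64 *sptep)
  {
          struct kvm_mmu_page *sp = sptep_to_sp(sptep);
          gfn_t gfn = kvm_mmu_page_get_gfn(sp, spte_index(sptep));

          kvm_flush_remote_tlbs_with_address(kvm, gfn,
                          KVM_PAGES_PER_HPAGE(sp->role.level));
  }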

Fixes: c3134ce240eed ("KVM: Replace old tlb flush function with new one to flush a specified range.")
Suggested-by: David Matlack <dmatlack@google.com>
Signed-off-by: Hou Wenlong <houwenlong.hwl@antgroup.com>
Link: https://lore.kernel.org/r/72ac2169a261976f00c1703e88cda676dfb960f5.1665214747.git.houwenlong.hwl@antgroup.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: x86/mmu: Reduce gfn range of tlb flushing in tdp_mmu_map_handle_target_level()
Hou Wenlong [Mon, 10 Oct 2022 12:19:14 +0000 (20:19 +0800)]
KVM: x86/mmu: Reduce gfn range of tlb flushing in tdp_mmu_map_handle_target_level()

Since the children SP is zapped, the gfn range of the tlb flush should
be the range covered by the children SP, not the parent SP. Replace
sp->gfn, which is the base gfn of the parent SP, with iter->gfn, and
use the correct size of the gfn range for the children SP to reduce the
tlb flushing range.

Fixes: bb95dfb9e2df ("KVM: x86/mmu: Defer TLB flush to caller when freeing TDP MMU shadow pages")
Signed-off-by: Hou Wenlong <houwenlong.hwl@antgroup.com>
Reviewed-by: David Matlack <dmatlack@google.com>
Link: https://lore.kernel.org/r/528ab9c784a486e9ce05f61462ad9260796a8732.1665214747.git.houwenlong.hwl@antgroup.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: x86/mmu: Fix wrong gfn range of tlb flushing in kvm_set_pte_rmapp()
Hou Wenlong [Mon, 10 Oct 2022 12:19:13 +0000 (20:19 +0800)]
KVM: x86/mmu: Fix wrong gfn range of tlb flushing in kvm_set_pte_rmapp()

When the spte of a huge page is dropped in kvm_set_pte_rmapp(), the
whole gfn range covered by the spte should be flushed. However,
rmap_walk_init_level() doesn't align down the gfn for the new level
like the tdp iterator does, so the gfn used in kvm_set_pte_rmapp() is
not the base gfn of the huge page, and the size of the gfn range is
wrong as well. Use the base gfn and the size of the huge page when
flushing tlbs for huge pages. Also introduce a helper function to flush
the given page (huge or not) of guest memory, which helps prevent
future buggy use of kvm_flush_remote_tlbs_with_address() in such cases.
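
The helper boils down to something like the following sketch, using the
gfn rounding helper introduced in the patch below:

  static inline void kvm_flush_remote_tlbs_gfn(struct kvm *kvm,
                                               gfn_t gfn, int level)
  {
          kvm_flush_remote_tlbs_with_address(kvm,
                          gfn_round_for_level(gfn, level),
                          KVM_PAGES_PER_HPAGE(level));
  }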

Fixes: c3134ce240eed ("KVM: Replace old tlb flush function with new one to flush a specified range.")
Signed-off-by: Hou Wenlong <houwenlong.hwl@antgroup.com>
Link: https://lore.kernel.org/r/0ce24d7078fa5f1f8d64b0c59826c50f32f8065e.1665214747.git.houwenlong.hwl@antgroup.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: x86/mmu: Move round_gfn_for_level() helper into mmu_internal.h
Hou Wenlong [Mon, 10 Oct 2022 12:19:12 +0000 (20:19 +0800)]
KVM: x86/mmu: Move round_gfn_for_level() helper into mmu_internal.h

Rounding down a GFN to a huge page size is a common pattern throughout
KVM, so move the round_gfn_for_level() helper from tdp_iter.c to
mmu_internal.h for common usage. Also rename it to gfn_round_for_level()
to use the gfn_* prefix, and clean up the other call sites.
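
The helper itself is a one-liner (sketch):

  static inline gfn_t gfn_round_for_level(gfn_t gfn, int level)
  {
          return gfn & -KVM_PAGES_PER_HPAGE(level);
  }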

Signed-off-by: Hou Wenlong <houwenlong.hwl@antgroup.com>
Link: https://lore.kernel.org/r/415c64782f27444898db650e21cf28eeb6441dfa.1665214747.git.houwenlong.hwl@antgroup.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: x86/mmu: fix an incorrect comment in kvm_mmu_new_pgd()
Wei Liu [Mon, 28 Nov 2022 21:47:09 +0000 (21:47 +0000)]
KVM: x86/mmu: fix an incorrect comment in kvm_mmu_new_pgd()

There is no function named kvm_mmu_ensure_valid_pgd().

Fix the comment and remove the pair of braces to conform to Linux kernel
coding style.

Signed-off-by: Wei Liu <wei.liu@kernel.org>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221128214709.224710-1-wei.liu@kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: kvm: x86/mmu: Don't clear write flooding for direct SP
Lai Jiangshan [Thu, 5 Jan 2023 10:03:10 +0000 (18:03 +0800)]
kvm: x86/mmu: Don't clear write flooding for direct SP

Although there is no harm in doing so, there is also no point in
clearing write flooding for a direct SP.

Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
Link: https://lore.kernel.org/r/20230105100310.6700-1-jiangshanlai@gmail.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: kvm: x86/mmu: Rename SPTE_TDP_AD_ENABLED_MASK to SPTE_TDP_AD_ENABLED
Lai Jiangshan [Thu, 5 Jan 2023 10:02:03 +0000 (18:02 +0800)]
kvm: x86/mmu: Rename SPTE_TDP_AD_ENABLED_MASK to SPTE_TDP_AD_ENABLED

SPTE_TDP_AD_ENABLED_MASK, SPTE_TDP_AD_DISABLED_MASK, and
SPTE_TDP_AD_WRPROT_ONLY_MASK are actual values, not masks.

Remove "MASK" from their names.

Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
Link: https://lore.kernel.org/r/20230105100204.6521-1-jiangshanlai@gmail.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: x86/reboot: Disable SVM, not just VMX, when stopping CPUs
Sean Christopherson [Wed, 30 Nov 2022 23:36:50 +0000 (23:36 +0000)]
x86/reboot: Disable SVM, not just VMX, when stopping CPUs

Disable SVM and more importantly force GIF=1 when halting a CPU or
rebooting the machine.  Similar to VMX, SVM allows software to block
INITs via CLGI, and thus can be problematic for a crash/reboot.  The
window for failure is smaller with SVM as INIT is only blocked while
GIF=0, i.e. between CLGI and STGI, but the window does exist.

Fixes: fba4f472b33a ("x86/reboot: Turn off KVM when halting a CPU")
Cc: stable@vger.kernel.org
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20221130233650.1404148-5-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: x86/reboot: Disable virtualization in an emergency if SVM is supported
Sean Christopherson [Wed, 30 Nov 2022 23:36:49 +0000 (23:36 +0000)]
x86/reboot: Disable virtualization in an emergency if SVM is supported

Disable SVM on all CPUs via NMI shootdown during an emergency reboot.
Like VMX, SVM can block INIT, e.g. if the emergency reboot is triggered
between CLGI and STGI, and thus can prevent bringing up other CPUs via
INIT-SIPI-SIPI.

Cc: stable@vger.kernel.org
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20221130233650.1404148-4-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: x86/virt: Force GIF=1 prior to disabling SVM (for reboot flows)
Sean Christopherson [Wed, 30 Nov 2022 23:36:48 +0000 (23:36 +0000)]
x86/virt: Force GIF=1 prior to disabling SVM (for reboot flows)

Set GIF=1 prior to disabling SVM to ensure that INIT is recognized if the
kernel is disabling SVM in an emergency, e.g. if the kernel is about to
jump into a crash kernel or may reboot without doing a full CPU RESET.
If GIF is left cleared, the new kernel (or firmware) will be unable to
awaken APs.  Eat faults on STGI (due to EFER.SVME=0) as it's possible
that SVM could be disabled via NMI shootdown between reading EFER.SVME
and executing STGI.
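
Sketched, with the exception-table plumbing abbreviated:

  if (!rdmsrl_safe(MSR_EFER, &efer) && (efer & EFER_SVME)) {
          /* Force GIF=1; eat the #UD if SVME was already cleared. */
          asm_volatile_goto("1: stgi\n\t"
                            _ASM_EXTABLE(1b, %l[fault])
                            ::: "memory" : fault);
  fault:
          wrmsrl(MSR_EFER, efer & ~EFER_SVME);
  }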

Link: https://lore.kernel.org/all/cbcb6f35-e5d7-c1c9-4db9-fe5cc4de579a@amd.com
Cc: stable@vger.kernel.org
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20221130233650.1404148-3-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: x86/crash: Disable virt in core NMI crash handler to avoid double shootdown
Sean Christopherson [Wed, 30 Nov 2022 23:36:47 +0000 (23:36 +0000)]
x86/crash: Disable virt in core NMI crash handler to avoid double shootdown

Disable virtualization in crash_nmi_callback() and rework the
emergency_vmx_disable_all() path to do an NMI shootdown if and only if a
shootdown has not already occurred.  NMI crash shootdown fundamentally
can't support multiple invocations as responding CPUs are deliberately
put into halt state without unblocking NMIs.  But, the emergency reboot
path doesn't have any work of its own, it simply cares about disabling
virtualization, i.e. so long as a shootdown occurred, emergency reboot
doesn't care who initiated the shootdown, or when.

If "crash_kexec_post_notifiers" is specified on the kernel command line,
panic() will invoke crash_smp_send_stop() and result in a second call to
nmi_shootdown_cpus() during native_machine_emergency_restart().

Invoke the callback _before_ disabling virtualization, as the current
VMCS needs to be cleared before doing VMXOFF.  Note, this results in a
subtle change in ordering between disabling virtualization and stopping
Intel PT on the responding CPUs.  While VMX and Intel PT do interact,
VMXOFF and writes to MSR_IA32_RTIT_CTL do not induce faults between one
another, which is all that matters when panicking.

Harden nmi_shootdown_cpus() against multiple invocations to try and
capture any such kernel bugs via a WARN instead of hanging the system
during a crash/dump, e.g. prior to the recent hardening of
register_nmi_handler(), re-registering the NMI handler would trigger a
double list_add() and hang the system if CONFIG_BUG_ON_DATA_CORRUPTION=y.

 list_add double add: new=ffffffff82220800, prev=ffffffff8221cfe8, next=ffffffff82220800.
 WARNING: CPU: 2 PID: 1319 at lib/list_debug.c:29 __list_add_valid+0x67/0x70
 Call Trace:
  __register_nmi_handler+0xcf/0x130
  nmi_shootdown_cpus+0x39/0x90
  native_machine_emergency_restart+0x1c9/0x1d0
  panic+0x237/0x29b

Extract the disabling logic to a common helper to deduplicate code, and
to prepare for doing the shootdown in the emergency reboot path if SVM
is supported.

Note, prior to commit ed72736183c4 ("x86/reboot: Force all cpus to exit
VMX root if VMX is supported"), nmi_shootdown_cpus() was subtly protected
against a second invocation by a cpu_vmx_enabled() check as the kdump
handler would disable VMX if it ran first.

Fixes: ed72736183c4 ("x86/reboot: Force all cpus to exit VMX root if VMX is supported")
Cc: stable@vger.kernel.org
Reported-by: Guilherme G. Piccoli <gpiccoli@igalia.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Link: https://lore.kernel.org/all/20220427224924.592546-2-gpiccoli@igalia.com
Tested-by: Guilherme G. Piccoli <gpiccoli@igalia.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20221130233650.1404148-2-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: x86/xen: update Xen CPUID Leaf 4 (tsc info) sub-leaves, if present
Paul Durrant [Fri, 6 Jan 2023 10:36:00 +0000 (10:36 +0000)]
KVM: x86/xen: update Xen CPUID Leaf 4 (tsc info) sub-leaves, if present

The scaling information in subleaf 1 should match the values set by KVM in
the 'vcpu_info' sub-structure 'time_info' (a.k.a. pvclock_vcpu_time_info)
which is shared with the guest, but is not directly available to the VMM.
The offset values are not set since a TSC offset is already applied.
The TSC frequency should also be set in sub-leaf 2.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>
Link: https://lore.kernel.org/r/20230106103600.528-3-pdurrant@amazon.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months ago: KVM: x86/cpuid: generalize kvm_update_kvm_cpuid_base() and also capture limit
Paul Durrant [Fri, 6 Jan 2023 10:35:59 +0000 (10:35 +0000)]
KVM: x86/cpuid: generalize kvm_update_kvm_cpuid_base() and also capture limit

A subsequent patch will need to acquire the CPUID leaf range for emulated
Xen so explicitly pass the signature of the hypervisor we're interested in
to the new function. Also introduce a new kvm_hypervisor_cpuid structure
so we can neatly store both the base and limit leaf indices.
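
A sketch of the result (structure per the description above; the
signature strings shown are the standard hypervisor leaf signatures):

  struct kvm_hypervisor_cpuid {
          u32 base;
          u32 limit;
  };

  /* Callers pass the signature of interest, e.g. "KVMKVMKVM\0\0\0" for
   * KVM itself or "XenVMMXenVMM" for emulated Xen. */
  static struct kvm_hypervisor_cpuid
  kvm_get_hypervisor_cpuid(struct kvm_vcpu *vcpu, const char *sig);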

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>
Link: https://lore.kernel.org/r/20230106103600.528-2-pdurrant@amazon.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months agoKVM: x86: Replace cpu_dirty_logging_count with nr_memslots_dirty_logging
David Matlack [Thu, 5 Jan 2023 21:43:03 +0000 (13:43 -0800)]
KVM: x86: Replace cpu_dirty_logging_count with nr_memslots_dirty_logging

Drop cpu_dirty_logging_count in favor of nr_memslots_dirty_logging.
Both fields count the number of memslots that have dirty-logging enabled,
with the only difference being that cpu_dirty_logging_count is only
incremented when using PML. So while nr_memslots_dirty_logging is not a
direct replacement for cpu_dirty_logging_count, it can be combined with
enable_pml to get the same information.
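
For example, a caller wanting the old semantics could do something like
the following (the helper name is hypothetical; enable_pml is VMX's
existing module param):

  static bool vmx_dirty_logging_uses_pml(struct kvm *kvm)
  {
          return enable_pml &&
                 atomic_read(&kvm->nr_memslots_dirty_logging);
  }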

Signed-off-by: David Matlack <dmatlack@google.com>
Link: https://lore.kernel.org/r/20230105214303.2919415-1-dmatlack@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months agoKVM: x86: Replace 0-length arrays with flexible arrays
Kees Cook [Wed, 18 Jan 2023 19:59:09 +0000 (11:59 -0800)]
KVM: x86: Replace 0-length arrays with flexible arrays

Zero-length arrays are deprecated[1]. Replace struct kvm_nested_state's
"data" union 0-length arrays with flexible arrays. (How are the
sizes of these arrays verified?) Detected with GCC 13, using
-fstrict-flex-arrays=3:

arch/x86/kvm/svm/nested.c: In function 'svm_get_nested_state':
arch/x86/kvm/svm/nested.c:1536:17: error: array subscript 0 is outside array bounds of 'struct kvm_svm_nested_state_data[0]' [-Werror=array-bounds=]
 1536 |                 &user_kvm_nested_state->data.svm[0];
      |                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from include/uapi/linux/kvm.h:15,
                 from include/linux/kvm_host.h:40,
                 from arch/x86/kvm/svm/nested.c:18:
arch/x86/include/uapi/asm/kvm.h:511:50: note: while referencing 'svm'
  511 |                 struct kvm_svm_nested_state_data svm[0];
      |                                                  ^~~

[1] https://www.kernel.org/doc/html/latest/process/deprecated.html#zero-length-and-one-element-arrays
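
The shape of the fix, roughly: flexible array members can't appear
directly in a union, so the uapi helper macro from <linux/stddef.h>
wraps each member:

  union {
          __DECLARE_FLEX_ARRAY(struct kvm_vmx_nested_state_data, vmx);
          __DECLARE_FLEX_ARRAY(struct kvm_svm_nested_state_data, svm);
  } data;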

Cc: Sean Christopherson <seanjc@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: "Gustavo A. R. Silva" <gustavoars@kernel.org>
Cc: x86@kernel.org
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: kvm@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20230105190548.never.323-kees@kernel.org
Link: https://lore.kernel.org/r/20230118195905.gonna.693-kees@kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months agoKVM: x86: Advertise fast REP string features inherent to the CPU
Jim Mattson [Thu, 1 Sep 2022 21:18:07 +0000 (14:18 -0700)]
KVM: x86: Advertise fast REP string features inherent to the CPU

Fast zero-length REP MOVSB, fast short REP STOSB, and fast short REP
{CMPSB,SCASB} are inherent features of the processor that cannot be
hidden by the hypervisor. When these features are present on the host,
enumerate them in KVM_GET_SUPPORTED_CPUID.
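
A sketch of the KVM side, assuming the existing F()/kvm_cpu_cap_mask()
plumbing for CPUID.(EAX=7,ECX=1):EAX:

  kvm_cpu_cap_mask(CPUID_7_1_EAX,
          /* ... existing CPUID_7_1_EAX bits elided ... */
          F(FZRM) | F(FSRS) | F(FSRC)
  );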

Signed-off-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220901211811.2883855-2-jmattson@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months agox86/cpufeatures: Add macros for Intel's new fast rep string features
Jim Mattson [Thu, 1 Sep 2022 21:18:06 +0000 (14:18 -0700)]
x86/cpufeatures: Add macros for Intel's new fast rep string features

KVM_GET_SUPPORTED_CPUID should reflect these host CPUID bits. The bits
are already cached in word 12. Give the bits X86_FEATURE names, so
that they can be easily referenced. Hide these bits from
/proc/cpuinfo, since the host kernel makes no use of them at present.
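
The definitions, per Intel's enumeration in CPUID.(EAX=7,ECX=1):EAX;
a leading "" in the comment string is the kernel's convention for
hiding a flag from /proc/cpuinfo (sketch, exact comments may differ):

  #define X86_FEATURE_FZRM   (12*32+10) /* "" Fast zero-size REP MOVSB */
  #define X86_FEATURE_FSRS   (12*32+11) /* "" Fast short REP STOSB */
  #define X86_FEATURE_FSRC   (12*32+12) /* "" Fast short REP {CMPSB,SCASB} */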

Signed-off-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220901211811.2883855-1-jmattson@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months agokvm_host.h: fix spelling typo in function declaration
Wang Liang [Tue, 20 Sep 2022 06:02:10 +0000 (14:02 +0800)]
kvm_host.h: fix spelling typo in function declaration

Make parameters in function declaration consistent with
those in the function definition for better cscope-ability.

Signed-off-by: Wang Liang <wangliangzz@inspur.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220920060210.4842-1-wangliangzz@126.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months agoKVM: account allocation in generic version of kvm_arch_alloc_vm()
Alexey Dobriyan [Thu, 17 Nov 2022 20:34:19 +0000 (23:34 +0300)]
KVM: account allocation in generic version of kvm_arch_alloc_vm()

Account the allocation of VMs in the generic version of
kvm_arch_alloc_vm(), the VM is tied to the current task/process.

Note, x86 already accounts its allocation.
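
The change itself is tiny; the generic fallback becomes (sketch, modulo
surrounding context):

  static inline struct kvm *kvm_arch_alloc_vm(void)
  {
          /* GFP_KERNEL_ACCOUNT charges the allocation to the task's memcg. */
          return kzalloc(sizeof(struct kvm), GFP_KERNEL_ACCOUNT);
  }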

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/Y3aay2u2KQgiR0un@p183
[sean: reworded changelog]
Signed-off-by: Sean Christopherson <seanjc@google.com>
18 months agoKVM: PPC: Fix refactoring goof in kvmppc_e500mc_init()
Sean Christopherson [Thu, 19 Jan 2023 18:21:58 +0000 (18:21 +0000)]
KVM: PPC: Fix refactoring goof in kvmppc_e500mc_init()

Fix a build error due to a mixup during a recent refactoring.  The error
was reported during code review, but the fixed up patch didn't make it
into the final commit.

Fixes: 474856bad921 ("KVM: PPC: Move processor compatibility check to module init")
Link: https://lore.kernel.org/all/87cz93snqc.fsf@mpe.ellerman.id.au
Cc: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20230119182158.4026656-1-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
18 months agoMerge branch 'kvm-lapic-fix-and-cleanup' into HEAD
Paolo Bonzini [Tue, 27 Dec 2022 12:56:16 +0000 (07:56 -0500)]
Merge branch 'kvm-lapic-fix-and-cleanup' into HEAD

The first half or so of the patches fix semi-urgent, real-world-relevant
APICv and AVIC bugs.

The second half fixes a variety of AVIC and optimized APIC map bugs
where KVM doesn't play nice with various edge cases that are
architecturally legal(ish), but are unlikely to occur in most real-world
scenarios.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
18 months agoMerge branch 'kvm-v6.2-rc4-fixes' into HEAD
Paolo Bonzini [Fri, 13 Jan 2023 16:27:55 +0000 (11:27 -0500)]
Merge branch 'kvm-v6.2-rc4-fixes' into HEAD

ARM:

* Fix the PMCR_EL0 reset value after the PMU rework

* Correctly handle S2 fault triggered by a S1 page table walk
  by not always classifying it as a write, as this breaks on
  R/O memslots

* Document why we cannot exit with KVM_EXIT_MMIO when taking
  a write fault from a S1 PTW on a R/O memslot

* Put the Apple M2 on the naughty list for not being able to
  correctly implement the vgic SEIS feature, just like the M1
  before it

* Reviewer updates: Alex is stepping down, replaced by Zenghui

x86:

* Fix various rare locking issues in Xen emulation and teach lockdep
  to detect them

* Documentation improvements

* Do not return host topology information from KVM_GET_SUPPORTED_CPUID

18 months agoMerge branch 'kvm-hw-enable-refactor' into HEAD
Paolo Bonzini [Tue, 24 Jan 2023 10:57:17 +0000 (05:57 -0500)]
Merge branch 'kvm-hw-enable-refactor' into HEAD

The main theme of this series is to kill off kvm_arch_init(),
kvm_arch_hardware_(un)setup(), and kvm_arch_check_processor_compat(), which
all originated in x86 code from way back when, and needlessly complicate
both common KVM code and architecture code.  E.g. many architectures don't
mark functions/data as __init/__ro_after_init purely because kvm_init()
isn't marked __init to support x86's separate vendor modules.

The idea/hope is that with those hooks gone (moved to arch code), it will
be easier for x86 (and other architectures) to modify their module init
sequences as needed without having to fight common KVM code.  E.g. I'm
hoping that ARM can build on this to simplify its hardware enabling logic,
especially the pKVM side of things.

There are bug fixes throughout this series.  They are more scattered than
I would usually prefer, but getting the sequencing correct was a gigantic
pain for many of the x86 fixes due to needing to fix common code in order
for the x86 fix to have any meaning.  And while the bugs are often fatal,
they aren't all that interesting for most users as they either require a
malicious admin or broken hardware, i.e. aren't likely to be encountered
by the vast majority of KVM users.  So unless someone _really_ wants a
particular fix isolated for backporting, I'm not planning on shuffling
patches.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
18 months agoKVM: x86: Add helpers to recalc physical vs. logical optimized APIC maps
Sean Christopherson [Fri, 6 Jan 2023 01:13:06 +0000 (01:13 +0000)]
KVM: x86: Add helpers to recalc physical vs. logical optimized APIC maps

Move the guts of kvm_recalculate_apic_map()'s main loop to two separate
helpers to handle recalculating the physical and logical pieces of the
optimized map.  Having 100+ lines of code in the for-loop makes it hard
to understand what is being calculated where.

No functional change intended.

Suggested-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20230106011306.85230-34-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
18 months agoKVM: x86: Allow APICv APIC ID inhibit to be cleared
Greg Edwards [Fri, 6 Jan 2023 01:13:05 +0000 (01:13 +0000)]
KVM: x86: Allow APICv APIC ID inhibit to be cleared

Legacy kernels prior to commit 4399c03c6780 ("x86/apic: Remove
verify_local_APIC()") write the APIC ID of the boot CPU twice to verify
a functioning local APIC.  This results in APIC acceleration inhibited
on these kernels for reason APICV_INHIBIT_REASON_APIC_ID_MODIFIED.

Allow the APICV_INHIBIT_REASON_APIC_ID_MODIFIED inhibit reason to be
cleared if/when all APICs in xAPIC mode set their APIC ID back to the
expected vcpu_id value.

Fold the functionality previously in kvm_lapic_xapic_id_updated() into
kvm_recalculate_apic_map(), as this allows examining all APICs in one
pass.

Fixes: 3743c2f02517 ("KVM: x86: inhibit APICv/AVIC on changes to APIC ID or APIC base")
Signed-off-by: Greg Edwards <gedwards@ddn.com>
Link: https://lore.kernel.org/r/20221117183247.94314-1-gedwards@ddn.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20230106011306.85230-33-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
18 months agoKVM: x86: Track required APICv inhibits with variable, not callback
Sean Christopherson [Fri, 6 Jan 2023 01:13:04 +0000 (01:13 +0000)]
KVM: x86: Track required APICv inhibits with variable, not callback

Track the per-vendor required APICv inhibits with a variable instead of
calling into vendor code every time KVM wants to query the set of
required inhibits.  The required inhibits are a property of the vendor's
virtualization architecture, i.e. are 100% static.

Using a variable allows the compiler to inline the check, e.g. generate
a single-uop TEST+Jcc, and thus eliminates any desire to avoid checking
inhibits for performance reasons.

No functional change intended.
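
Sketch of the idea (the query helper shown is illustrative):

  /* In struct kvm_x86_ops; const because the set is 100% static. */
  const u64 required_apicv_inhibits;

  /* Query sites reduce to a mask test the compiler can inline: */
  static bool apicv_activated(struct kvm *kvm)
  {
          return !(READ_ONCE(kvm->arch.apicv_inhibit_reasons) &
                   kvm_x86_ops.required_apicv_inhibits);
  }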

Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20230106011306.85230-32-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
18 months agoRevert "KVM: SVM: Do not throw warning when calling avic_vcpu_load on a running vcpu"
Sean Christopherson [Fri, 6 Jan 2023 01:13:03 +0000 (01:13 +0000)]
Revert "KVM: SVM: Do not throw warning when calling avic_vcpu_load on a running vcpu"

Turns out that some warnings exist for good reasons.  Restore the warning
in avic_vcpu_load() that guards against calling avic_vcpu_load() on a
running vCPU now that KVM avoids doing so when switching between x2APIC
and xAPIC.  The entire point of the WARN is to highlight that KVM should
not be reloading an AVIC.

Opportunistically convert the WARN_ON() to WARN_ON_ONCE() to avoid
spamming the kernel if it does fire.

This reverts commit c0caeee65af3944b7b8abbf566e7cc1fae15c775.

Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20230106011306.85230-31-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
18 months agoKVM: SVM: Ignore writes to Remote Read Data on AVIC write traps
Sean Christopherson [Fri, 6 Jan 2023 01:13:02 +0000 (01:13 +0000)]
KVM: SVM: Ignore writes to Remote Read Data on AVIC write traps

Drop writes to APIC_RRR, a.k.a. Remote Read Data Register, on AVIC
unaccelerated write traps.  The register is read-only and isn't emulated
by KVM.  Sending the register through kvm_apic_write_nodecode() will
result in screaming when x2APIC is enabled due to the unexpected failure
to retrieve the MSR (KVM expects that only "legal" accesses will trap).

Fixes: 4d1d7942e36a ("KVM: SVM: Introduce logic to (de)activate x2AVIC mode")
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-Id: <20230106011306.85230-30-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
18 months agoKVM: SVM: Handle multiple logical targets in AVIC kick fastpath
Sean Christopherson [Fri, 6 Jan 2023 01:13:01 +0000 (01:13 +0000)]
KVM: SVM: Handle multiple logical targets in AVIC kick fastpath

Iterate over all target logical IDs in the AVIC kick fastpath instead of
bailing if there is more than one target.  Now that KVM inhibits AVIC if
vCPUs aren't mapped 1:1 with logical IDs, each bit in the destination is
guaranteed to match to at most one vCPU, i.e. iterating over the bitmap
is guaranteed to kick each valid target exactly once.
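
Sketch of the loop's shape (names approximate the fastpath's locals;
the table lookup is illustrative):

  unsigned long bitmap = dest & 0xffff;   /* logical ID bitmap */
  unsigned int i;

  /* Each set bit maps to at most one vCPU now that aliasing inhibits
   * AVIC, so per-bit kicks deliver exactly once per valid target. */
  for_each_set_bit(i, &bitmap, 16) {
          struct kvm_vcpu *vcpu = logical_cluster[i];   /* illustrative */

          if (vcpu)
                  avic_kick_vcpu(vcpu, icrl);   /* helper added in this series */
  }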

Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20230106011306.85230-29-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
18 months agoKVM: SVM: Require logical ID to be power-of-2 for AVIC entry
Sean Christopherson [Fri, 6 Jan 2023 01:13:00 +0000 (01:13 +0000)]
KVM: SVM: Require logical ID to be power-of-2 for AVIC entry

Do not modify AVIC's logical ID table if the logical ID portion of the
LDR is not a power-of-2, i.e. if the LDR has multiple bits set.  Taking
only the first bit means that KVM will fail to match MDAs that intersect
with "higher" bits in the "ID".

The "ID" acts as a bitmap, but is referred to as an ID because there's an
implicit, unenforced "requirement" that software only set one bit.  This
edge case is arguably out-of-spec behavior, but KVM cleanly handles it
in all other cases, e.g. the optimized logical map (and AVIC!) is also
disabled in this scenario.

Refactor the code to consolidate the checks, and so that the code looks
more like avic_kick_target_vcpus_fast().
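
The check itself is a one-liner (sketch; the flat-mode mask shown is
illustrative, and is_power_of_2(0) is false, so an empty ID also bails):

  u32 logical_id = ldr & 0xff;

  if (!is_power_of_2(logical_id))
          return NULL;    /* leave the logical ID table untouched */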

Fixes: 18f40c53e10f ("svm: Add VMEXIT handlers for AVIC")
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20230106011306.85230-28-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
18 months agoKVM: SVM: Update svm->ldr_reg cache even if LDR is "bad"
Sean Christopherson [Fri, 6 Jan 2023 01:12:59 +0000 (01:12 +0000)]
KVM: SVM: Update svm->ldr_reg cache even if LDR is "bad"

Update SVM's cache of the LDR even if the new value is "bad".  Leaving
stale information in the cache can result in KVM missing updates and/or
invalidating the wrong entry, e.g. if avic_invalidate_logical_id_entry()
is triggered after a different vCPU has "claimed" the old LDR.

Fixes: 18f40c53e10f ("svm: Add VMEXIT handlers for AVIC")
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20230106011306.85230-27-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
18 months agoKVM: SVM: Always update local APIC on writes to logical dest register
Sean Christopherson [Fri, 6 Jan 2023 01:12:58 +0000 (01:12 +0000)]
KVM: SVM: Always update local APIC on writes to logical dest register

Update the vCPU's local (virtual) APIC on LDR writes even if the write
"fails".  The APIC needs to recalc the optimized logical map even if the
LDR is invalid or zero, e.g. if the guest clears its LDR, the optimized
map will be left as is and the vCPU will receive interrupts using its
old LDR.

Fixes: 18f40c53e10f ("svm: Add VMEXIT handlers for AVIC")
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20230106011306.85230-26-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
18 months agoKVM: SVM: Inhibit AVIC if vCPUs are aliased in logical mode
Sean Christopherson [Fri, 6 Jan 2023 01:12:57 +0000 (01:12 +0000)]
KVM: SVM: Inhibit AVIC if vCPUs are aliased in logical mode

Inhibit SVM's AVIC if multiple vCPUs are aliased to the same logical ID.
Architecturally, all CPUs whose logical ID matches the MDA are supposed
to receive the interrupt; overwriting existing entries in AVIC's
logical=>physical map can result in missed IPIs.

Fixes: 18f40c53e10f ("svm: Add VMEXIT handlers for AVIC")
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20230106011306.85230-25-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
18 months agoKVM: x86: Inhibit APICv/AVIC if the optimized physical map is disabled
Sean Christopherson [Fri, 6 Jan 2023 01:12:56 +0000 (01:12 +0000)]
KVM: x86: Inhibit APICv/AVIC if the optimized physical map is disabled

Inhibit APICv/AVIC if the optimized physical map is disabled so that KVM
provides consistent APIC behavior if xAPIC IDs are aliased due to
vcpu_id being truncated and the x2APIC hotplug hack isn't enabled.  If
the hotplug hack is disabled, events that are emulated by KVM will follow
architectural behavior (all matching vCPUs receive events, even if the
"match" is due to truncation), whereas APICv and AVIC will deliver events
only to the first matching vCPU, i.e. the vCPU that matches without
truncation.

Note, the "extra" inhibit is needed because KVM deliberately ignores
mismatches due to truncation when applying the APIC_ID_MODIFIED inhibit
so that large VMs (>255 vCPUs) can run with APICv/AVIC.

Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20230106011306.85230-24-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
18 months agoKVM: x86: Honor architectural behavior for aliased 8-bit APIC IDs
Sean Christopherson [Fri, 6 Jan 2023 01:12:55 +0000 (01:12 +0000)]
KVM: x86: Honor architectural behavior for aliased 8-bit APIC IDs

Apply KVM's hotplug hack if and only if userspace has enabled 32-bit IDs
for x2APIC.  If 32-bit IDs are not enabled, disable the optimized map to
honor x86 architectural behavior if multiple vCPUs share a physical APIC
ID.  As called out in the changelog that added the hack, all CPUs whose
(possibly truncated) APIC ID matches the target are supposed to receive
the IPI.

  KVM intentionally differs from real hardware, because real hardware
  (Knights Landing) does just "x2apic_id & 0xff" to decide whether to
  accept the interrupt in xAPIC mode and it can deliver one interrupt to
  more than one physical destination, e.g. 0x123 to 0x123 and 0x23.

Applying the hack even when x2APIC is not fully enabled means KVM doesn't
correctly handle scenarios where the guest has aliased xAPIC IDs across
multiple vCPUs, as only the vCPU with the lowest vCPU ID will receive any
interrupts.  It's extremely unlikely any real-world guest aliases APIC
IDs, or even modifies APIC IDs, but KVM's behavior is arbitrary, e.g. the
lowest vCPU ID "wins" regardless of which vCPU is "aliasing" and which
vCPU is "normal".

Furthermore, the hack is _not_ guaranteed to work!  The hack works if and
only if the optimized APIC map is successfully allocated.  If the map
allocation fails (unlikely), KVM will fall back to its unoptimized
behavior, which _does_ honor the architectural behavior.

Pivot on 32-bit x2APIC IDs being enabled as that is required to take
advantage of the hotplug hack (see kvm_apic_state_fixup()), i.e. won't
break existing setups unless they are way, way off in the weeds.

Add an entry in KVM's errata to document the hack.  Alternatively, KVM
could provide an actual x2APIC quirk and document the hack that way, but
there's unlikely to ever be a use case for disabling the quirk.  Go the
errata route to avoid having to validate a quirk no one cares about.

Fixes: 5bd5db385b3e ("KVM: x86: allow hotplug of VCPU with APIC ID over 0xff")
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20230106011306.85230-23-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
18 months agoKVM: x86: Disable APIC logical map if vCPUs are aliased in logical mode
Sean Christopherson [Fri, 6 Jan 2023 01:12:54 +0000 (01:12 +0000)]
KVM: x86: Disable APIC logical map if vCPUs are aliased in logical mode

Disable the optimized APIC logical map if multiple vCPUs are aliased to
the same logical ID.  Architecturally, all CPUs whose logical ID matches
the MDA are supposed to receive the interrupt; overwriting existing map
entries can result in missed IPIs.

Fixes: 1e08ec4a130e ("KVM: optimize apic interrupt delivery")
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-Id: <20230106011306.85230-22-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
18 months agoKVM: x86: Disable APIC logical map if logical ID covers multiple MDAs
Sean Christopherson [Fri, 6 Jan 2023 01:12:53 +0000 (01:12 +0000)]
KVM: x86: Disable APIC logical map if logical ID covers multiple MDAs

Disable the optimized APIC logical map if a logical ID covers multiple
MDAs, i.e. if a vCPU has multiple bits set in its ID.  In logical mode,
events match if "ID & MDA != 0", i.e. creating an entry for only the
first bit can cause interrupts to be missed.

Note, creating an entry for every bit is also wrong as KVM would generate
IPIs for every matching bit.  It would be possible to teach KVM to play
nice with this edge case, but it is very much an edge case and probably
not used in any real-world OS, i.e. it's not worth optimizing.
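
The architectural match rule in question, as code (the helper name is
hypothetical):

  static bool apic_logical_id_matches(u32 logical_id, u32 mda)
  {
          /* Any intersection matches, so N set bits cover N MDAs. */
          return (logical_id & mda) != 0;
  }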

Fixes: 1e08ec4a130e ("KVM: optimize apic interrupt delivery")
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-Id: <20230106011306.85230-21-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
18 months agoKVM: x86: Skip redundant x2APIC logical mode optimized cluster setup
Sean Christopherson [Fri, 6 Jan 2023 01:12:52 +0000 (01:12 +0000)]
KVM: x86: Skip redundant x2APIC logical mode optimized cluster setup

Skip the optimized cluster[] setup for x2APIC logical mode, as KVM reuses
the optimized map's phys_map[] and doesn't actually need to insert the
target apic into the cluster[].  The LDR is derived from the x2APIC ID,
and both are read-only in KVM, thus the vCPU's cluster[ldr] is guaranteed
to be the same entry as the vCPU's phys_map[x2apic_id] entry.
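
The derivation in question, as a sketch (the function name is
illustrative; bits 31:16 hold the cluster, bits 15:0 the position):

  static u32 x2apic_ldr_from_id(u32 x2apic_id)
  {
          return ((x2apic_id >> 4) << 16) | BIT(x2apic_id & 0xf);
  }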

Skipping the unnecessary setup will allow a future fix for aliased xAPIC
logical IDs to simply require that cluster[ldr] is non-NULL, i.e. won't
have to special case x2APIC.

Alternatively, the future check could allow "cluster[ldr] == apic", but
that ends up being terribly confusing because cluster[ldr] is only set
at the very end, i.e. it's only possible due to x2APIC's shenanigans.

Another alternative would be to send x2APIC down a separate path _after_
the calculation and then assert that all of the above, but the resulting
code is rather messy, and it's arguably unnecessary since asserting that
the actual LDR matches the expected LDR means that simply testing that
interrupts are delivered correctly provides the same guarantees.

Reported-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20230106011306.85230-20-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
18 months agoKVM: x86: Explicitly track all possibilities for APIC map's logical modes
Sean Christopherson [Fri, 6 Jan 2023 01:12:51 +0000 (01:12 +0000)]
KVM: x86: Explicitly track all possibilities for APIC map's logical modes

Track all possibilities for the optimized APIC map's logical modes
instead of overloading the pseudo-bitmap and treating any "unknown" value
as "invalid".

As documented by the now-stale comment above the mode values, the values
did have meaning when the optimized map was originally added.  That
dependent logic was removed by commit e45115b62f9a ("KVM: x86: use
physical LAPIC array for logical x2APIC"), but the obfuscated behavior
and its comment were left behind.

Opportunistically rename "mode" to "logical_mode", partly to make it
clear that the "disabled" case applies only to the logical map, but also
to prove that there is no lurking code that expects "mode" to be a bitmap.

Functionally, this is a glorified nop.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-Id: <20230106011306.85230-19-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
18 months agoKVM: x86: Explicitly skip optimized logical map setup if vCPU's LDR==0
Sean Christopherson [Fri, 6 Jan 2023 01:12:50 +0000 (01:12 +0000)]
KVM: x86: Explicitly skip optimized logical map setup if vCPU's LDR==0

Explicitly skip the optimized map setup if the vCPU's LDR is '0', i.e. if
the vCPU will never respond to logical mode interrupts.  KVM already
skips setup in this case, but relies on kvm_apic_map_get_logical_dest()
to generate mask==0.  KVM still needs the mask==0 check as a non-zero LDR
can yield mask==0 depending on the mode, but explicitly handling the LDR
will make it simpler to clean up the logical mode tracking in the future.

No functional change intended.

Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20230106011306.85230-18-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
18 months agoKVM: SVM: Add helper to perform final AVIC "kick" of single vCPU
Sean Christopherson [Fri, 6 Jan 2023 01:12:49 +0000 (01:12 +0000)]
KVM: SVM: Add helper to perform final AVIC "kick" of single vCPU

Add a helper to perform the final kick; two instances of the ICR decoding
are one too many.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-Id: <20230106011306.85230-17-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
18 months agoKVM: SVM: Document that vCPU ID == APIC ID in AVIC kick fastpath
Sean Christopherson [Fri, 6 Jan 2023 01:12:48 +0000 (01:12 +0000)]
KVM: SVM: Document that vCPU ID == APIC ID in AVIC kick fastpath

Document that AVIC is inhibited if any vCPU's APIC ID diverges from its
vCPU ID, i.e. that there's no need to check for a destination match in
the AVIC kick fast path.

Opportunistically tweak comments to remove "guest bug", as that suggests
KVM is punting on error handling, which is not the case.  Targeting a
non-existent vCPU or no vCPUs _may_ be a guest software bug, but whether
or not it's a guest bug is irrelevant.  Such behavior is architecturally
legal and thus needs to be faithfully emulated by KVM (and it is).

Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-Id: <20230106011306.85230-16-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
18 months agoRevert "KVM: SVM: Use target APIC ID to complete x2AVIC IRQs when possible"
Sean Christopherson [Fri, 6 Jan 2023 01:12:47 +0000 (01:12 +0000)]
Revert "KVM: SVM: Use target APIC ID to complete x2AVIC IRQs when possible"

Due to a likely mismerge of patches, KVM ended up with a superfluous
commit to "enable" AVIC's fast path for x2AVIC mode.  Even worse, the
superfluous commit has several bugs and creates a nasty local shadow
variable.

Rather than fix the bugs piece-by-piece[*] to achieve the same end
result, revert the patch wholesale.

Opportunistically add a comment documenting the x2AVIC dependencies.

This reverts commit 8c9e639da435874fb845c4d296ce55664071ea7a.

[*] https://lore.kernel.org/all/YxEP7ZBRIuFWhnYJ@google.com

Fixes: 8c9e639da435 ("KVM: SVM: Use target APIC ID to complete x2AVIC IRQs when possible")
Suggested-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20230106011306.85230-15-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
18 months agoKVM: SVM: Fix x2APIC Logical ID calculation for avic_kick_target_vcpus_fast
Suravee Suthikulpanit [Fri, 6 Jan 2023 01:12:46 +0000 (01:12 +0000)]
KVM: SVM: Fix x2APIC Logical ID calculation for avic_kick_target_vcpus_fast

For an x2APIC ID in cluster mode, the logical ID is bits [15:0].
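
Sketch of the corrected decode (locals approximate the fastpath's):

  if (apic_x2apic_mode(source)) {
          /* Bits 31:16 select the cluster; bits 15:0 are the ID bitmap. */
          cluster = (dest >> 16) << 4;
          bitmap  = dest & 0xffff;    /* the fix: keep the full 16 bits */
  }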

Fixes: 603ccef42ce9 ("KVM: x86: SVM: fix avic_kick_target_vcpus_fast")
Cc: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20230106011306.85230-14-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
18 months agoKVM: SVM: Compute dest based on sender's x2APIC status for AVIC kick
Sean Christopherson [Fri, 6 Jan 2023 01:12:45 +0000 (01:12 +0000)]
KVM: SVM: Compute dest based on sender's x2APIC status for AVIC kick

Compute the destination from ICRH using the sender's x2APIC status, not
each (potential) target's x2APIC status.
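
Sketch of the decode, keyed off the sender rather than per-target state:

  u32 dest;

  if (apic_x2apic_mode(source))
          dest = icrh;            /* x2APIC: full 32-bit destination */
  else
          dest = icrh >> 24;      /* xAPIC: dest field is bits 31:24 */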

Fixes: c514d3a348ac ("KVM: SVM: Update avic_kick_target_vcpus to support 32-bit APIC ID")
Cc: Li RongQing <lirongqing@baidu.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Li RongQing <lirongqing@baidu.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-Id: <20230106011306.85230-13-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
18 months agoKVM: SVM: Replace "avic_mode" enum with "x2avic_enabled" boolean
Sean Christopherson [Fri, 6 Jan 2023 01:12:44 +0000 (01:12 +0000)]
KVM: SVM: Replace "avic_mode" enum with "x2avic_enabled" boolean

Replace the "avic_mode" enum with a single bool to track whether or not
x2AVIC is enabled.  KVM already has "apicv_enabled" that tracks if any
flavor of AVIC is enabled, i.e. AVIC_MODE_NONE and AVIC_MODE_X1 are
redundant and unnecessary noise.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-Id: <20230106011306.85230-12-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
18 months agoKVM: x86: Inhibit APIC memslot if x2APIC and AVIC are enabled
Sean Christopherson [Fri, 6 Jan 2023 01:12:43 +0000 (01:12 +0000)]
KVM: x86: Inhibit APIC memslot if x2APIC and AVIC are enabled

Free the APIC access page memslot if any vCPU enables x2APIC and SVM's
AVIC is enabled to prevent accesses to the virtual APIC on vCPUs with
x2APIC enabled.  On AMD, if its "hybrid" mode is enabled (AVIC is enabled
when x2APIC is enabled even without x2AVIC support), keeping the APIC
access page memslot results in the guest being able to access the virtual
APIC page as x2APIC is fully emulated by KVM.  I.e. hardware isn't aware
that the guest is operating in x2APIC mode.

Exempt nested SVM's update of APICv state from the new logic as x2APIC
can't be toggled on VM-Exit.  In practice, invoking the x2APIC logic
should be harmless precisely because it should be a glorified nop, but
play it safe to avoid latent bugs, e.g. with dropping the vCPU's SRCU
lock.

Intel doesn't suffer from the same issue as APICv has fully independent
VMCS controls for xAPIC vs. x2APIC virtualization.  Technically, KVM
should provide bus error semantics and not memory semantics for the APIC
page when x2APIC is enabled, but KVM already provides memory semantics in
other scenarios, e.g. if APICv/AVIC is enabled and the APIC is hardware
disabled (via APIC_BASE MSR).

Note, checking apic_access_memslot_enabled without taking locks relies on
it being set during vCPU creation (before kvm_vcpu_reset()).  vCPUs can
race to set the inhibit and delete the memslot, i.e. can get false
positives, but can't get false negatives as apic_access_memslot_enabled
can't be toggled "on" once any vCPU reaches KVM_RUN.

Opportunistically drop the "can" while updating avic_activate_vmcb()'s
comment, i.e. to state that KVM _does_ support the hybrid mode.  Move
the "Note:" down a line to conform to preferred kernel/KVM multi-line
comment style.

Opportunistically update the apicv_update_lock comment, as it isn't
actually used to protect apic_access_memslot_enabled (which is protected
by slots_lock).

Fixes: 0e311d33bfbe ("KVM: SVM: Introduce hybrid-AVIC mode")
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-Id: <20230106011306.85230-11-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
18 months agoKVM: x86: Move APIC access page helper to common x86 code
Sean Christopherson [Fri, 6 Jan 2023 01:12:42 +0000 (01:12 +0000)]
KVM: x86: Move APIC access page helper to common x86 code

Move the APIC access page allocation helper function to common x86 code,
the allocation routine is virtually identical between APICv (VMX) and
AVIC (SVM).  Keep APICv's gfn_to_page() + put_page() sequence, which
verifies that a backing page can be allocated, i.e. that the system isn't
under heavy memory pressure.  Forcing the backing page to be populated
isn't strictly necessary, but skipping the effective prefetch only delays
the inevitable.
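
Sketch of the common helper's shape, using KVM's internal-memslot APIs
(locking and some error handling trimmed; treat details as approximate):

  int kvm_alloc_apic_access_page(struct kvm *kvm)
  {
          void __user *hva;
          struct page *page;

          hva = __x86_set_memory_region(kvm, APIC_ACCESS_PAGE_PRIVATE_MEMSLOT,
                                        APIC_DEFAULT_PHYS_BASE, PAGE_SIZE);
          if (IS_ERR(hva))
                  return PTR_ERR(hva);

          /* Force in a backing page to verify one can be allocated. */
          page = gfn_to_page(kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
          if (is_error_page(page))
                  return -EFAULT;

          /* Only the allocation side effect matters; drop the reference. */
          put_page(page);
          return 0;
  }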

Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20230106011306.85230-10-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>