Paolo Bonzini [Mon, 3 Oct 2022 19:33:32 +0000 (15:33 -0400)]
Merge tag 'kvmarm-6.1' of git://git./linux/kernel/git/kvmarm/kvmarm into HEAD
KVM/arm64 updates for v6.1
- Fixes for single-stepping in the presence of an async
exception as well as the preservation of PSTATE.SS
- Better handling of AArch32 ID registers on AArch64-only
systems
- Fixes for the dirty-ring API, allowing it to work on
architectures with relaxed memory ordering
- Advertise the new kvmarm mailing list
- Various minor cleanups and spelling fixes
Marc Zyngier [Sat, 1 Oct 2022 09:19:36 +0000 (10:19 +0100)]
Merge branch kvm-arm64/misc-6.1 into kvmarm-master/next
* kvm-arm64/misc-6.1:
: .
: Misc KVM/arm64 fixes and improvements for v6.1
:
: - Simplify the affinity check when moving a GICv3 collection
:
: - Tone down the shouting when kvm-arm.mode=protected is passed
: to a guest
:
: - Fix various comments
:
: - Advertise the new kvmarm@lists.linux.dev and deprecate the
: old Columbia list
: .
KVM: arm64: Advertise new kvmarm mailing list
KVM: arm64: Fix comment typo in nvhe/switch.c
KVM: selftests: Update top-of-file comment in psci_test
KVM: arm64: Ignore kvm-arm.mode if !is_hyp_mode_available()
KVM: arm64: vgic: Remove duplicate check in update_affinity_collection()
Signed-off-by: Marc Zyngier <maz@kernel.org>
Marc Zyngier [Sat, 1 Oct 2022 09:19:29 +0000 (10:19 +0100)]
Merge branch kvm-arm64/dirty-log-ordered into kvmarm-master/next
* kvm-arm64/dirty-log-ordered:
: .
: Retrofit some ordering into the existing dirty-ring API by:
:
: - relying on acquire/release semantics which are the default on x86,
: but need to be explicit on arm64
:
: - adding a new capability that indicates which flavor is supported, either
: with explicit ordering (arm64) or both implicit and explicit (x86),
: as suggested by Paolo at KVM Forum
:
: - documenting the requirements for this new capability on weakly ordered
: architectures
:
: - updating the selftests to do the right thing
: .
KVM: selftests: dirty-log: Use KVM_CAP_DIRTY_LOG_RING_ACQ_REL if available
KVM: selftests: dirty-log: Upgrade flag accesses to acquire/release semantics
KVM: Document weakly ordered architecture requirements for dirty ring
KVM: x86: Select CONFIG_HAVE_KVM_DIRTY_RING_ACQ_REL
KVM: Add KVM_CAP_DIRTY_LOG_RING_ACQ_REL capability and config option
KVM: Use acquire/release semantics when accessing dirty ring GFN state
Signed-off-by: Marc Zyngier <maz@kernel.org>
Marc Zyngier [Sat, 1 Oct 2022 09:12:45 +0000 (10:12 +0100)]
KVM: arm64: Advertise new kvmarm mailing list
As announced on the kvmarm list, we're moving the mailing list over
to kvmarm@lists.linux.dev:
<quote>
As you probably all know, the kvmarm mailing list has been hosted on
Columbia's machines for as long as the project has existed (over 13
years). After all this time, the university has decided to retire the
list infrastructure and asked us to find new hosting.
A new mailing list has been created on lists.linux.dev[1], and I'm
kindly asking everyone interested in following the KVM/arm64
developments to start subscribing to it (and start posting your
patches there). I hope that people will move over to it quickly enough
that we can soon give Columbia the green light to turn their systems
off.
Note that the new list will only get archived automatically once we
fully switch over, but I'll make sure we fill any gap and not lose any
message. In the meantime, please Cc both lists.
[...]
[1] https://subspace.kernel.org/lists.linux.dev.html
</quote>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20221001091245.3900668-1-maz@kernel.org
Peng Hao [Fri, 23 Sep 2022 15:03:03 +0000 (23:03 +0800)]
kvm: vmx: keep constant definition format consistent
Keep all constants using lowercase "x".
Signed-off-by: Peng Hao <flyingpeng@tencent.com>
Message-Id: <CAPm50aKnctFL_7fZ-eqrz-QGnjW2+DTyDDrhxi7UZVO3HjD8UA@mail.gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Peng Hao [Fri, 23 Sep 2022 15:05:36 +0000 (23:05 +0800)]
kvm: mmu: fix typos in struct kvm_arch
There is no 'kvmp_mmu_pages'; it should be 'kvm_mmu_page'. Note that
struct kvm_mmu_pages and struct kvm_mmu_page are different structures,
and kvm_mmu_page is the one meant here; kvm_mmu_pages is defined in
arch/x86/kvm/mmu/mmu.c.
Suggested-by: David Matlack <dmatlack@google.com>
Signed-off-by: Peng Hao <flyingpeng@tencent.com>
Reviewed-by: David Matlack <dmatlack@google.com>
Message-Id: <CAPm50aL=0smbohhjAcK=ciUwcQJ=uAQP1xNQi52YsE7U8NFpEw@mail.gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Thu, 29 Sep 2022 17:25:13 +0000 (13:25 -0400)]
Merge tag 'kvm-x86-6.1-2' of https://github.com/sean-jc/linux into HEAD
KVM x86 updates for 6.1, batch #2:
- Misc PMU fixes and cleanups.
- Fixes for Hyper-V hypercall selftest
David Matlack [Thu, 29 Sep 2022 18:12:07 +0000 (11:12 -0700)]
KVM: selftests: Fix nx_huge_pages_test on TDP-disabled hosts
Map the test's huge page region with 2MiB virtual mappings when TDP is
disabled so that KVM can shadow the region with huge pages. This fixes
nx_huge_pages_test on hosts where TDP hardware support is disabled.
Purposely do not skip this test on TDP-disabled hosts. While we don't
care about NX Huge Pages on TDP-disabled hosts from a security
perspective, KVM does support it, and so we should test it.
For TDP-enabled hosts, continue mapping the region with 4KiB pages to
ensure that KVM can map it with huge pages irrespective of the guest
mappings.
Fixes: 8448ec5993be ("KVM: selftests: Add NX huge pages test")
Signed-off-by: David Matlack <dmatlack@google.com>
Message-Id: <20220929181207.2281449-4-dmatlack@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
David Matlack [Thu, 29 Sep 2022 18:12:06 +0000 (11:12 -0700)]
KVM: selftests: Add helpers to read kvm_{intel,amd} boolean module parameters
Add helper functions for reading the value of kvm_intel and kvm_amd
boolean module parameters. Use the kvm_intel variant in
vm_is_unrestricted_guest() to simplify the check for
kvm_intel.unrestricted_guest.
No functional change intended.
Signed-off-by: David Matlack <dmatlack@google.com>
Message-Id: <20220929181207.2281449-3-dmatlack@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
David Matlack [Thu, 29 Sep 2022 18:12:05 +0000 (11:12 -0700)]
KVM: selftests: Tell the compiler that code after TEST_FAIL() is unreachable
Add __builtin_unreachable() to TEST_FAIL() so that the compiler knows
that any code after a TEST_FAIL() is unreachable.
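The gist, as a sketch (report_failure() is a stand-in for the real
assertion plumbing, which is not marked noreturn):

  /* Nothing runs after a failed assertion; telling the compiler so
   * also silences "missing return" style warnings in callers. */
  #define TEST_FAIL(fmt, ...)                     \
  do {                                            \
          report_failure(fmt, ##__VA_ARGS__);     \
          __builtin_unreachable();                \
  } while (0)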
Signed-off-by: David Matlack <dmatlack@google.com>
Message-Id: <20220929181207.2281449-2-dmatlack@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Wed, 28 Sep 2022 23:36:52 +0000 (23:36 +0000)]
Revert "KVM: selftests: Fix nested SVM tests when built with clang"
Revert back to using memset() in generic_svm_setup() now that KVM
selftests override memset() and friends specifically to prevent the
compiler from generating fancy code and/or linking to the libc
implementation.
This reverts commit ed290e1c20da19fa100a3e0f421aa31b65984960.
Suggested-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220928233652.783504-8-seanjc@google.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Wed, 28 Sep 2022 23:36:51 +0000 (23:36 +0000)]
KVM: selftests: Dedup subtests of fix_hypercall_test
Combine fix_hypercall_test's two subtests into a common routine; the only
difference between the two is whether or not the quirk is disabled.
Passing a boolean is a little gross, but using an enum to make it super
obvious that the callers are enabling/disabling the quirk seems like
overkill.
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Message-Id: <20220928233652.783504-7-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Wed, 28 Sep 2022 23:36:50 +0000 (23:36 +0000)]
KVM: selftests: Explicitly verify KVM doesn't patch hypercall if quirk==off
Explicitly verify that KVM doesn't patch in the native hypercall if the
FIX_HYPERCALL_INSN quirk is disabled. The test currently verifies that
a #UD occurred, but doesn't actually verify that no patching occurred.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220928233652.783504-6-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Wed, 28 Sep 2022 23:36:49 +0000 (23:36 +0000)]
KVM: selftests: Hardcode VMCALL/VMMCALL opcodes in "fix hypercall" test
Hardcode the VMCALL/VMMCALL opcodes in dedicated arrays instead of
extracting the opcodes from inline asm, and patch in the "other" opcode
so as to preserve the original opcode, i.e. the opcode that the test
executes in the guest.
Preserving the original opcode (by not patching the source) will make
it easier to implement a check that KVM doesn't modify the opcode (the
test currently only verifies that a #UD occurred).
Use INT3 (0xcc) as the placeholder so that the guest will likely die a
horrible death if the test's patching goes awry.
As a bonus, patching from within the test dedups a decent chunk of code.
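The opcodes themselves are architectural; as a sketch (the array names
are illustrative):

  /* VMCALL (Intel) = 0f 01 c1, VMMCALL (AMD) = 0f 01 d9. */
  static const uint8_t vmcall_insn[]  = { 0x0f, 0x01, 0xc1 };
  static const uint8_t vmmcall_insn[] = { 0x0f, 0x01, 0xd9 };

  /* INT3 placeholder bytes, patched at runtime with the "other"
   * opcode so that the source opcode is preserved. */
  static uint8_t other_insn[] = { 0xcc, 0xcc, 0xcc };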
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220928233652.783504-5-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Wed, 28 Sep 2022 23:36:48 +0000 (23:36 +0000)]
KVM: selftests: Remove unnecessary register shuffling in fix_hypercall_test
Use input constraints to load RAX and RBX when testing that KVM correctly
does/doesn't patch the "wrong" hypercall. There's no need to manually
load RAX and RBX, and no reason to clobber them either (KVM is not
supposed to modify anything other than RAX).
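The shape of the change, as a sketch (not the actual diff):

  /* Let the compiler place the inputs via constraints; only RAX is
   * an output, matching what KVM is allowed to modify. */
  uint64_t ret;

  asm volatile("vmcall"
               : "=a" (ret)
               : "0" (nr), "b" (a0));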
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Message-Id: <20220928233652.783504-4-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Wed, 28 Sep 2022 23:36:47 +0000 (23:36 +0000)]
KVM: selftests: Compare insn opcodes directly in fix_hypercall_test
Directly compare the expected versus observed hypercall instructions when
verifying that KVM patched in the native hypercall (FIX_HYPERCALL_INSN
quirk enabled). gcc rightly complains that doing a 4-byte memcpy() with
an "unsigned char" as the source generates an out-of-bounds accesses.
Alternatively, "exp" and "obs" could be declared as 3-byte arrays, but
there's no known reason to copy locally instead of comparing directly.
In function ‘assert_hypercall_insn’,
inlined from ‘guest_main’ at x86_64/fix_hypercall_test.c:91:2:
x86_64/fix_hypercall_test.c:63:9: error: array subscript ‘unsigned int[0]’
is partly outside array bounds of ‘unsigned char[1]’ [-Werror=array-bounds]
63 | memcpy(&exp, exp_insn, sizeof(exp));
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
x86_64/fix_hypercall_test.c: In function ‘guest_main’:
x86_64/fix_hypercall_test.c:42:22: note: object ‘vmx_hypercall_insn’ of size 1
42 | extern unsigned char vmx_hypercall_insn;
| ^~~~~~~~~~~~~~~~~~
x86_64/fix_hypercall_test.c:25:22: note: object ‘svm_hypercall_insn’ of size 1
25 | extern unsigned char svm_hypercall_insn;
| ^~~~~~~~~~~~~~~~~~
In function ‘assert_hypercall_insn’,
inlined from ‘guest_main’ at x86_64/fix_hypercall_test.c:91:2:
x86_64/fix_hypercall_test.c:64:9: error: array subscript ‘unsigned int[0]’
is partly outside array bounds of ‘unsigned char[1]’ [-Werror=array-bounds]
64 | memcpy(&obs, obs_insn, sizeof(obs));
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
x86_64/fix_hypercall_test.c: In function ‘guest_main’:
x86_64/fix_hypercall_test.c:25:22: note: object ‘svm_hypercall_insn’ of size 1
25 | extern unsigned char svm_hypercall_insn;
| ^~~~~~~~~~~~~~~~~~
x86_64/fix_hypercall_test.c:42:22: note: object ‘vmx_hypercall_insn’ of size 1
42 | extern unsigned char vmx_hypercall_insn;
| ^~~~~~~~~~~~~~~~~~
cc1: all warnings being treated as errors
make: *** [../lib.mk:135: tools/testing/selftests/kvm/x86_64/fix_hypercall_test] Error 1
Fixes: 6c2fa8b20d0c ("selftests: KVM: Test KVM_X86_QUIRK_FIX_HYPERCALL_INSN")
Cc: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Message-Id: <20220928233652.783504-3-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Wed, 28 Sep 2022 23:36:46 +0000 (23:36 +0000)]
KVM: selftests: Implement memcmp(), memcpy(), and memset() for guest use
Implement memcmp(), memcpy(), and memset() to override the compiler's
built-in versions in order to guarantee that the compiler won't generate
out-of-line calls to external functions via the PLT. This allows the
helpers to be safely used in guest code, as KVM selftests don't support
dynamic loading of guest code.
Steal the implementations from the kernel's generic versions, sans the
optimizations in memcmp() for unaligned accesses.
Put the utilities in a separate compilation unit and build with
-ffreestanding to fudge around a gcc "feature" where it will optimize
memset(), memcpy(), etc... by generating a recursive call. I.e. the
compiler optimizes itself into infinite recursion. Alternatively, the
individual functions could be tagged with
optimize("no-tree-loop-distribute-patterns"), but using "optimize" for
anything but debug is discouraged, and Linus NAK'd the use of the flag
in the kernel proper[*].
[*] https://lore.kernel.org/lkml/CAHk-=wik-oXnUpfZ6Hw37uLykc-_P0Apyn2XuX-odh-3Nzop8w@mail.gmail.com
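For reference, the kernel's generic byte-wise memset() is essentially:

  void *memset(void *s, int c, size_t count)
  {
          char *xs = s;

          while (count--)
                  *xs++ = c;
          return s;
  }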
Cc: Andrew Jones <andrew.jones@linux.dev>
Cc: Anup Patel <anup@brainfault.org>
Cc: Atish Patra <atishp@atishpatra.org>
Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
Cc: Janosch Frank <frankja@linux.ibm.com>
Cc: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220928233652.783504-2-seanjc@google.com>
Reviewed-by: Andrew Jones <andrew.jones@linux.dev>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Jim Mattson [Thu, 22 Sep 2022 23:18:54 +0000 (16:18 -0700)]
KVM: x86: Hide IA32_PLATFORM_DCA_CAP[31:0] from the guest
The only thing reported by CPUID.9 is the value of
IA32_PLATFORM_DCA_CAP[31:0] in EAX. This MSR doesn't even exist in the
guest, since CPUID.1:ECX.DCA[bit 18] is clear in the guest.
Clear CPUID.9 in KVM_GET_SUPPORTED_CPUID.
Fixes: 24c82e576b78 ("KVM: Sanitize cpuid")
Signed-off-by: Jim Mattson <jmattson@google.com>
Message-Id: <20220922231854.249383-1-jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
David Matlack [Thu, 22 Sep 2022 23:17:23 +0000 (16:17 -0700)]
KVM: selftests: Gracefully handle empty stack traces
Bail out of test_dump_stack() if the stack trace is empty rather than
invoking addr2line with zero addresses. The problem with the latter is
that addr2line will block waiting for addresses to be passed in via
stdin, e.g. if running a selftest from an interactive terminal.
Opportunistically fix up the comment that mentions skipping 3 frames
since only 2 are skipped in the code.
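The fix is essentially an early return before spawning addr2line; as
a sketch (variable names are illustrative):

  void *stack[64];
  int nr_frames = backtrace(stack, 64);   /* from <execinfo.h> */

  /* With zero addresses on its command line, addr2line would block
   * waiting for input on stdin. */
  if (nr_frames == 0)
          return;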
Cc: Vipin Sharma <vipinsh@google.com>
Cc: Sean Christopherson <seanjc@google.com>
Signed-off-by: David Matlack <dmatlack@google.com>
Message-Id: <20220922231724.3560211-1-dmatlack@google.com>
[Small tweak to keep backtrace() call close to if(). - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Emanuele Giuseppe Esposito [Mon, 26 Sep 2022 08:29:23 +0000 (04:29 -0400)]
KVM: selftests: replace assertion with warning in access_tracking_perf_test
Page_idle uses {ptep/pmdp}_clear_young_notify, which in turn calls
the mmu notifier callback ->clear_young(), which purposefully
does not flush the TLB.
When running the test in a nested guest, point 1. of the test's
doc header is violated, because the KVM TLB is unbounded in size
and, since no flush is forced, KVM does not update the sptes'
accessed/idle bits, resulting in a guest assertion failure.
More precisely, only the first ACCESS_WRITE in run_test() actually
makes visible changes, because sptes are created and the accessed
bit is set to 1 (i.e. the idle bit is 0). The first mark_memory_idle()
then passes since the accessed bit is still set, and marks all pages
as idle (not accessed). When the next write is performed, the update
is not flushed, so the idle bit is still 1 and the next
mark_memory_idle() fails.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220926082923.299554-1-eesposit@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Marc Zyngier [Mon, 26 Sep 2022 14:51:20 +0000 (15:51 +0100)]
KVM: selftests: dirty-log: Use KVM_CAP_DIRTY_LOG_RING_ACQ_REL if available
Pick KVM_CAP_DIRTY_LOG_RING_ACQ_REL if exposed by the kernel.
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Link: https://lore.kernel.org/r/20220926145120.27974-7-maz@kernel.org
Marc Zyngier [Mon, 26 Sep 2022 14:51:19 +0000 (15:51 +0100)]
KVM: selftests: dirty-log: Upgrade flag accesses to acquire/release semantics
In order to preserve ordering, make sure that the flag accesses
in the dirty log are done using acquire/release accessors.
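In userspace the upgrade amounts to something like this sketch (using
compiler builtins; "gfn" stands in for a struct kvm_dirty_gfn entry):

  /* Acquire-load the flags so the read is ordered before subsequent
   * reads of the entry's payload. */
  uint32_t flags = __atomic_load_n(&gfn->flags, __ATOMIC_ACQUIRE);

  /* Release-store the reset flag so it is ordered after all prior
   * accesses to the entry. */
  __atomic_store_n(&gfn->flags, KVM_DIRTY_GFN_F_RESET, __ATOMIC_RELEASE);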
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Link: https://lore.kernel.org/r/20220926145120.27974-6-maz@kernel.org
Marc Zyngier [Mon, 26 Sep 2022 14:51:18 +0000 (15:51 +0100)]
KVM: Document weakly ordered architecture requirements for dirty ring
Now that the kernel can expose to userspace that its dirty ring
management relies on explicit ordering, document these new requirements
for VMMs to do the right thing.
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Link: https://lore.kernel.org/r/20220926145120.27974-5-maz@kernel.org
Marc Zyngier [Mon, 26 Sep 2022 14:51:17 +0000 (15:51 +0100)]
KVM: x86: Select CONFIG_HAVE_KVM_DIRTY_RING_ACQ_REL
Since x86 is TSO (give or take), allow it to advertise the new
ACQ_REL version of the dirty ring capability. No other change is
required for it.
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Link: https://lore.kernel.org/r/20220926145120.27974-4-maz@kernel.org
Marc Zyngier [Mon, 26 Sep 2022 14:51:16 +0000 (15:51 +0100)]
KVM: Add KVM_CAP_DIRTY_LOG_RING_ACQ_REL capability and config option
In order to differentiate between architectures that require no extra
synchronisation when accessing the dirty ring and those that do,
add a new capability (KVM_CAP_DIRTY_LOG_RING_ACQ_REL) that identifies
the latter sort. TSO architectures can obviously advertise both, while
relaxed architectures must only advertise the ACQ_REL version.
This requires some configuration symbol rejigging, with HAVE_KVM_DIRTY_RING
being only indirectly selected by two top-level config symbols:
- HAVE_KVM_DIRTY_RING_TSO for strongly ordered architectures (x86)
- HAVE_KVM_DIRTY_RING_ACQ_REL for weakly ordered architectures (arm64)
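From a VMM's point of view, probing looks roughly like this (a sketch;
error handling omitted):

  /* Prefer the explicitly ordered flavor and fall back to the
   * TSO-only one; x86 advertises both, arm64 only ACQ_REL. */
  int cap = KVM_CAP_DIRTY_LOG_RING_ACQ_REL;

  if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, cap) <= 0)
          cap = KVM_CAP_DIRTY_LOG_RING;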
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Link: https://lore.kernel.org/r/20220926145120.27974-3-maz@kernel.org
Marc Zyngier [Mon, 26 Sep 2022 14:51:15 +0000 (15:51 +0100)]
KVM: Use acquire/release semantics when accessing dirty ring GFN state
The current implementation of the dirty ring has an implicit requirement
that stores to the dirty ring from userspace must:
- be ordered with one another
- be visible to another CPU executing a ring reset
While these implicit requirements work well for x86 (and any other
TSO-like architecture), they do not work for more relaxed architectures
such as arm64, where stores to different addresses can be freely
reordered and loads from these addresses may not observe writes from
another CPU unless the required barriers (or acquire/release semantics)
are used.
In order to start fixing this, upgrade the ring reset accesses:
- the kvm_dirty_gfn_harvested() helper now uses acquire semantics
so it is ordered after all previous writes, including those from
userspace
- the kvm_dirty_gfn_set_invalid() helper now uses release semantics
so that the next_slot and next_offset reads don't drift past
the entry invalidation
This is only a partial fix, as the userspace side also needs upgrading.
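The pattern on the kernel side amounts to:

  static bool kvm_dirty_gfn_harvested(struct kvm_dirty_gfn *gfn)
  {
          /* Ordered after all previous writes, including userspace's. */
          return smp_load_acquire(&gfn->flags) & KVM_DIRTY_GFN_F_RESET;
  }

  static void kvm_dirty_gfn_set_invalid(struct kvm_dirty_gfn *gfn)
  {
          /* Reads of next_slot/next_offset cannot drift past this. */
          smp_store_release(&gfn->flags, 0);
  }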
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Link: https://lore.kernel.org/r/20220926145120.27974-2-maz@kernel.org
Wei-Lin Chang [Thu, 29 Sep 2022 04:28:39 +0000 (12:28 +0800)]
KVM: arm64: Fix comment typo in nvhe/switch.c
Fix the comment above __hyp_vgic_restore_state() to say VHE instead of
VEH, and change the underscore to a dash to match the comment
above __hyp_vgic_save_state().
Signed-off-by: Wei-Lin Chang <r09922117@csie.ntu.edu.tw>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220929042839.24277-1-r09922117@csie.ntu.edu.tw
Like Xu [Wed, 31 Aug 2022 08:53:28 +0000 (16:53 +0800)]
KVM: x86/svm/pmu: Rewrite get_gp_pmc_amd() for more counters scalability
If the number of AMD gp counters continues to grow, the code will
become very clumsy, and the switch-case design of the inline
get_gp_pmc_amd() will also bloat the kernel text size.
Instead, teach the code to manage two groups of MSRs, each
representing a different version of the AMD PMU counter MSRs.
The MSR addresses within each group are contiguous, with no holes,
and there is no intersection between the two sets of addresses,
but the two groups are deliberately laid out differently:
[Group A : All counter MSRs are tightly bound to all event select MSRs ]
MSR_K7_EVNTSEL0 0xc0010000
MSR_K7_EVNTSELi 0xc0010000 + i
...
MSR_K7_EVNTSEL3 0xc0010003
MSR_K7_PERFCTR0 0xc0010004
MSR_K7_PERFCTRi 0xc0010004 + i
...
MSR_K7_PERFCTR3 0xc0010007
[Group B : The counter MSRs are interleaved with the event select MSRs ]
MSR_F15H_PERF_CTL0 0xc0010200
MSR_F15H_PERF_CTR0 (0xc0010200 + 1)
...
MSR_F15H_PERF_CTLi (0xc0010200 + 2 * i)
MSR_F15H_PERF_CTRi (0xc0010200 + 2 * i + 1)
...
MSR_F15H_PERF_CTL5 (0xc0010200 + 2 * 5)
MSR_F15H_PERF_CTR5 (0xc0010200 + 2 * 5 + 1)
Rewrite get_gp_pmc_amd() accordingly: first determine which group of
registers is accessed, then determine whether it matches the requested
type, apply that group's scaling ratio, and finally compute pmc_idx
to pass into amd_pmc_idx_to_pmc().
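The resulting index math is simple; as a sketch (variable names are
illustrative):

  /* Group B interleaves CTL/CTR pairs, so the counter index is the
   * MSR offset divided by 2; Group A MSRs are contiguous, so the
   * offset is the index directly. */
  if (msr >= MSR_F15H_PERF_CTL0 && msr <= MSR_F15H_PERF_CTR5)
          idx = (msr - MSR_F15H_PERF_CTL0) / 2;
  else
          idx = msr - MSR_K7_EVNTSEL0;    /* or MSR_K7_PERFCTR0 */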
Signed-off-by: Like Xu <likexu@tencent.com>
Link: https://lore.kernel.org/r/20220831085328.45489-8-likexu@tencent.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Like Xu [Wed, 31 Aug 2022 08:53:27 +0000 (16:53 +0800)]
KVM: x86/svm/pmu: Direct access pmu->gp_counter[] to implement amd_*_to_pmc()
Access PMU counters on AMD by directly indexing the array of general
purpose counters instead of translating the PMC index to an MSR index.
AMD only supports gp counters, so there's no need to translate a PMC
index to an MSR index and back to a PMC index.
Opportunistically apply array_index_nospec() to reduce the attack
surface for speculative execution and remove the dead code.
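The access pattern is the standard nospec one, roughly:

  /* Clamp the guest-influenced index under speculation. */
  if (idx >= pmu->nr_arch_gp_counters)
          return NULL;

  return &pmu->gp_counters[array_index_nospec(idx,
                                              pmu->nr_arch_gp_counters)];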
Signed-off-by: Like Xu <likexu@tencent.com>
Link: https://lore.kernel.org/r/20220831085328.45489-7-likexu@tencent.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Like Xu [Wed, 31 Aug 2022 08:53:24 +0000 (16:53 +0800)]
KVM: x86/pmu: Avoid using PEBS perf_events for normal counters
The check logic in pmc_resume_counter() that determines whether
a perf_event is reusable is partial and flawed, especially when it
comes to a pseudocode sequence (contrived, but valid) like:
- enabling a counter and its PEBS bit
- enable global_ctrl
- run workload
- disable only the PEBS bit, leaving the global_ctrl bit enabled
In this corner case, a perf_event created for PEBS can be reused by
a normal counter before it has been released and recreated, and when this
normal counter overflows, it triggers a PEBS interrupt (precise_ip != 0).
To address this issue, reprogram all affected counters when PEBS_ENABLE
changes, and reuse a counter if and only if PEBS exactly matches precise.
Fixes: 79f3e3b58386 ("KVM: x86/pmu: Reprogram PEBS event to emulate guest PEBS counter")
Signed-off-by: Like Xu <likexu@tencent.com>
Link: https://lore.kernel.org/r/20220831085328.45489-4-likexu@tencent.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Like Xu [Thu, 22 Sep 2022 20:40:38 +0000 (13:40 -0700)]
KVM: x86/pmu: Refactor PERF_GLOBAL_CTRL update helper for reuse by PEBS
Extract the "global ctrl" specific bits out of global_ctrl_changed() so
that the helper only deals with reprogramming general purpose counters,
and rename the helper accordingly. PEBS needs the same logic, i.e. it
needs to reprogram the associated counters when PEBS_ENABLE bits are
toggled, and will use the helper in a future fix.
No functional change intended.
Signed-off-by: Like Xu <likexu@tencent.com>
Link: https://lore.kernel.org/r/20220831085328.45489-4-likexu@tencent.com
[sean: split to separate patch, write changelog]
Signed-off-by: Sean Christopherson <seanjc@google.com>
Like Xu [Wed, 31 Aug 2022 08:53:23 +0000 (16:53 +0800)]
KVM: x86/pmu: Don't generate PEBS records for emulated instructions
KVM accumulates an enabled counter for at least the INSTRUCTIONS or
BRANCH_INSTRUCTION hw event on any KVM-emulated instruction, and
generates an emulated overflow interrupt on counter overflow. In theory
this should also happen when a PEBS counter overflows, but that part of
the underlying support (e.g. software injection of records in the irq
context, or a lazy approach) is currently missing.
In this case, KVM skips the injection of the BUFFER_OVF PMI (effectively
dropping one PEBS record) and lets the overflowed counter move on. The
loss of a single sample does not introduce a loss of accuracy, but it is
easily noticeable for certain specific instructions.
This issue is expected to be addressed along with the issue
of PEBS cross-mapped counters with a slow-path proposal.
Fixes: 79f3e3b58386 ("KVM: x86/pmu: Reprogram PEBS event to emulate guest PEBS counter")
Signed-off-by: Like Xu <likexu@tencent.com>
Link: https://lore.kernel.org/r/20220831085328.45489-3-likexu@tencent.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Like Xu [Wed, 31 Aug 2022 08:53:22 +0000 (16:53 +0800)]
KVM: x86/pmu: Avoid setting BIT_ULL(-1) to pmu->host_cross_mapped_mask
In the extreme case of host counter multiplexing and contention, the
perf_event requested by the guest's PEBS counter is not allocated to any
actual physical counter, in which case hw.idx is bookkept as -1,
resulting in an out-of-bounds access to host_cross_mapped_mask.
Fixes: 854250329c02 ("KVM: x86/pmu: Disable guest PEBS temporarily in two rare situations")
Signed-off-by: Like Xu <likexu@tencent.com>
Link: https://lore.kernel.org/r/20220831085328.45489-2-likexu@tencent.com
[sean: expand comment to explain how a negative idx can be encountered]
Signed-off-by: Sean Christopherson <seanjc@google.com>
Vitaly Kuznetsov [Thu, 22 Sep 2022 08:39:41 +0000 (10:39 +0200)]
KVM: selftests: Don't set reserved bits for invalid Hyper-V hypercall number
Bits 27 through 31 in Hyper-V hypercall 'control' are reserved (see
HV_HYPERCALL_RSVD0_MASK) but '0xdeadbeef' includes them. This causes
KVM to return HV_STATUS_INVALID_HYPERCALL_INPUT instead of the expected
HV_STATUS_INVALID_HYPERCALL_CODE.
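One way to keep a bogus call code without touching the reserved bits,
as a sketch (the exact constant chosen by the fix may differ):

  /* Clear control[31:27], i.e. HV_HYPERCALL_RSVD0_MASK, so that KVM
   * fails the call with INVALID_HYPERCALL_CODE, not INVALID_INPUT. */
  uint64_t control = 0xdeadbeefull & ~0xf8000000ull;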
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Link: https://lore.kernel.org/all/87fsgjol20.fsf@redhat.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Vipin Sharma [Thu, 22 Sep 2022 06:24:51 +0000 (23:24 -0700)]
KVM: selftests: Load RAX with -EFAULT before Hyper-V hypercall
Load RAX with -EFAULT prior to making a Hyper-V hypercall so that tests
can't get false negatives due to the compiler coincidentally loading the
"right" value into RAX, i.e. to ensure that _KVM_ and not the compiler
is correctly clearing RAX on a successful hypercall.
Note, initializing *hv_status (in C code) to -EFAULT is not sufficient
to avoid false negatives, as the compiler can still "clobber" RAX and
thus load garbage into *hv_status if the hypercall faults (or if KVM
doesn't set RAX).
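As a sketch, following the Hyper-V convention of RCX=control,
RDX=input GPA, R8=output GPA (variable names are illustrative):

  uint64_t hv_status = (uint64_t)-EFAULT; /* deliberately poison RAX */
  register uint64_t r8 asm("r8") = output_address;

  asm volatile("vmcall"
               : "+a" (hv_status)
               : "c" (control), "d" (input_address), "r" (r8)
               : "memory");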
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Vipin Sharma <vipinsh@google.com>
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Link: https://lore.kernel.org/r/20220922062451.2927010-1-vipinsh@google.com
[sean: move to separate patch, massage changelog]
Signed-off-by: Sean Christopherson <seanjc@google.com>
Vipin Sharma [Thu, 22 Sep 2022 06:24:51 +0000 (23:24 -0700)]
KVM: selftests: Check result in hyperv_features for successful hypercalls
Commit cc5851c6be86 ("KVM: selftests: Use exception fixup for #UD/#GP
Hyper-V MSR/hcall tests") introduced a wrong guest assert in guest_hcall().
It does not check the results of successful hypercalls, and only checks
the result when a fault happens:
GUEST_ASSERT_2(!hcall->ud_expected || res == hcall->expect,
hcall->expect, res);
Correct the assertion by only checking results of the successful
hypercalls.
This issue was observed when the test started failing after being built
with Clang. The above guest assert fails because "res" is not equal
to "hcall->expect" when "hcall->ud_expected" is true: with Clang, "res"
gets a garbage value from the RAX register. With GCC, RAX happens to be
0 because GCC uses RAX for @output_address in the asm statement and
resets it to 0 before using it as an output operand in the same asm
statement. Clang does not use RAX for @output_address.
Fixes: cc5851c6be86 ("KVM: selftests: Use exception fixup for #UD/#GP Hyper-V MSR/hcall tests")
Signed-off-by: Vipin Sharma <vipinsh@google.com>
Suggested-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Link: https://lore.kernel.org/r/20220922062451.2927010-1-vipinsh@google.com
[sean: wrap changelog at ~75 chars, move -EFAULT change to separate patch]
Signed-off-by: Sean Christopherson <seanjc@google.com>
Oliver Upton [Fri, 19 Aug 2022 16:21:00 +0000 (16:21 +0000)]
KVM: selftests: Update top-of-file comment in psci_test
Fix the comment to accurately describe the test and recently added
SYSTEM_SUSPEND test case.
What was once psci_cpu_on_test was renamed and extended to squeeze in a
test case for PSCI SYSTEM_SUSPEND. Nonetheless, the author of those
changes (whoever they may be...) failed to update the file comment to
reflect what had changed.
Reported-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220819162100.213854-1-oliver.upton@linux.dev
David Matlack [Mon, 26 Sep 2022 17:14:57 +0000 (10:14 -0700)]
KVM: selftests: Skip tests that require EPT when it is not available
Skip selftests that require EPT support in the VM when it is not
available, e.g. when running on a machine where kvm_intel.ept=N, since
KVM does not offer EPT support to guests if EPT is not enabled on
the host.
This commit causes vmx_dirty_log_test to be skipped instead of failing
on hosts where kvm_intel.ept=N.
Signed-off-by: David Matlack <dmatlack@google.com>
Message-Id: <20220926171457.532542-1-dmatlack@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Wed, 21 Sep 2022 00:32:01 +0000 (00:32 +0000)]
KVM: remove KVM_REQ_UNHALT
KVM_REQ_UNHALT is now unnecessary because it is replaced by the return
value of kvm_vcpu_block/kvm_vcpu_halt. Remove it.
No functional change intended.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Message-Id: <20220921003201.1441511-13-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Wed, 21 Sep 2022 00:32:00 +0000 (00:32 +0000)]
KVM: mips, x86: do not rely on KVM_REQ_UNHALT
KVM_REQ_UNHALT is a weird request that simply reports the value of
kvm_arch_vcpu_runnable() on exit from kvm_vcpu_halt(). Only
MIPS and x86 are looking at it, the others just clear it. Check
the state of the vCPU directly so that the request is handled
as a nop on all architectures.
No functional change intended, except for corner cases where an
event arrives immediately after a signal becomes pending or after
another similar host-side event.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220921003201.1441511-12-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Wed, 21 Sep 2022 00:31:59 +0000 (00:31 +0000)]
KVM: x86: never write to memory from kvm_vcpu_check_block()
kvm_vcpu_check_block() is called while not in TASK_RUNNING, and therefore
it cannot sleep. Writing to guest memory is therefore forbidden, but it
can happen on AMD processors if kvm_check_nested_events() causes a vmexit.
Fortunately, all events that are caught by kvm_check_nested_events() are
also recognized by kvm_vcpu_has_events() through vendor callbacks such as
kvm_x86_interrupt_allowed() or kvm_x86_ops.nested_ops->has_events(), so
remove the call and postpone the actual processing to vcpu_block().
Opportunistically honor the return of kvm_check_nested_events(). KVM
punted on the check in kvm_vcpu_running() because the only error path is
if vmx_complete_nested_posted_interrupt() fails, in which case KVM exits
to userspace with "internal error", i.e. the VM is likely dead anyway,
so it wasn't worth overloading the return of kvm_vcpu_running().
Add the check mostly so that KVM is consistent with itself; the return of
the call via kvm_apic_accept_events()=>kvm_check_nested_events() that
immediately follows _is_ checked.
Reported-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
[sean: check and handle return of kvm_check_nested_events()]
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220921003201.1441511-11-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Wed, 21 Sep 2022 00:31:58 +0000 (00:31 +0000)]
KVM: x86: Don't snapshot pending INIT/SIPI prior to checking nested events
Don't snapshot pending INIT/SIPI events prior to checking nested events;
architecturally there's nothing wrong with KVM processing (dropping) a
SIPI that is received immediately after synthesizing a VM-Exit. Taking
and consuming the snapshot makes the flow way more subtle than it needs
to be, e.g. nVMX consumes/clears events that trigger VM-Exit (INIT/SIPI),
and so at first glance it appears that KVM is double-dipping on pending
INITs and SIPIs. But that's not the case, because INIT is blocked
unconditionally in VMX root mode and the CPU cannot be in wait-for-SIPI
after VM-Exit, i.e. the paths that truly consume the snapshot are
unreachable if apic->pending_events is modified by
kvm_check_nested_events().
nSVM is a similar story as GIF is cleared by the CPU on VM-Exit; INIT is
blocked regardless of whether or not it was pending prior to VM-Exit.
Drop the snapshot logic so that a future fix doesn't create weirdness
when kvm_vcpu_running()'s call to kvm_check_nested_events() is moved to
vcpu_block(). In that case, kvm_check_nested_events() will be called
immediately before kvm_apic_accept_events(), which raises the obvious
question of why that change doesn't break the snapshot logic.
Note, there is a subtle functional change. Previously, KVM would clear
pending SIPIs if and only if a SIPI was pending prior to VM-Exit, whereas
now KVM clears pending SIPIs unconditionally if INIT+SIPI are blocked. The
latter is architecturally allowed, as SIPI is ignored if the CPU is not
in wait-for-SIPI mode (arguably, KVM should be even more aggressive in
dropping SIPIs). It is software's responsibility to ensure the SIPI is
delivered, i.e. software shouldn't be firing INIT-SIPI at a CPU until
it knows with 100% certainty that the target CPU isn't in VMX root mode.
Furthermore, the existing code is extra weird as SIPIs that arrive after
VM-Exit _are_ dropped if there also happened to be a pending SIPI before
VM-Exit.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220921003201.1441511-10-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Wed, 21 Sep 2022 00:31:57 +0000 (00:31 +0000)]
KVM: nVMX: Make event request on VMXOFF iff INIT/SIPI is pending
Explicitly check for a pending INIT/SIPI event when emulating VMXOFF
instead of blindly making an event request. There's obviously no need
to evaluate events if none are pending.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220921003201.1441511-9-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Wed, 21 Sep 2022 00:31:56 +0000 (00:31 +0000)]
KVM: nVMX: Make an event request if INIT or SIPI is pending on VM-Enter
Evaluate interrupts, i.e. set KVM_REQ_EVENT, if INIT or SIPI is pending
when emulating nested VM-Enter. INIT is blocked while the CPU is in VMX
root mode, but not in VMX non-root, i.e. becomes unblocked on VM-Enter.
This bug has been masked by KVM calling ->check_nested_events() in the
core run loop, but that hack will be fixed in the near future.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220921003201.1441511-8-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Wed, 21 Sep 2022 00:31:55 +0000 (00:31 +0000)]
KVM: SVM: Make an event request if INIT or SIPI is pending when GIF is set
Set KVM_REQ_EVENT if INIT or SIPI is pending when the guest enables GIF.
INIT in particular is blocked when GIF=0 and needs to be processed when
GIF is toggled to '1'. This bug has been masked by (a) KVM calling
->check_nested_events() in the core run loop and (b) hypervisors toggling
GIF from 0=>1 only when entering guest mode (L1 entering L2).
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220921003201.1441511-7-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Wed, 21 Sep 2022 00:31:54 +0000 (00:31 +0000)]
KVM: x86: lapic does not have to process INIT if it is blocked
Do not return true from kvm_vcpu_has_events() if the vCPU isn't going to
immediately process a pending INIT/SIPI. INIT/SIPI shouldn't be treated
as wake events if they are blocked.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
[sean: rebase onto refactored INIT/SIPI helpers, massage changelog]
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220921003201.1441511-6-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Wed, 21 Sep 2022 00:31:53 +0000 (00:31 +0000)]
KVM: x86: Rename kvm_apic_has_events() to make it INIT/SIPI specific
Rename kvm_apic_has_events() to kvm_apic_has_pending_init_or_sipi() so
that it's more obvious that "events" really just means "INIT or SIPI".
Opportunistically clean up a weirdly worded comment that referenced
kvm_apic_has_events() instead of kvm_apic_accept_events().
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220921003201.1441511-5-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Wed, 21 Sep 2022 00:31:52 +0000 (00:31 +0000)]
KVM: x86: Rename and expose helper to detect if INIT/SIPI are allowed
Rename and invert kvm_vcpu_latch_init() to kvm_apic_init_sipi_allowed()
so as to match the behavior of {interrupt,nmi,smi}_allowed(), and expose
the helper so that it can be used by kvm_vcpu_has_events() to determine
whether or not an INIT or SIPI is pending _and_ can be taken immediately.
Opportunistically replace usage of the "latch" terminology with "blocked"
and/or "allowed", again to align with KVM's terminology used for all other
event types.
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220921003201.1441511-4-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Wed, 21 Sep 2022 00:31:51 +0000 (00:31 +0000)]
KVM: nVMX: Make an event request when pending an MTF nested VM-Exit
Set KVM_REQ_EVENT when MTF becomes pending to ensure that KVM will run
through inject_pending_event() and thus vmx_check_nested_events() prior
to re-entering the guest.
MTF currently works by virtue of KVM's hack that calls
kvm_check_nested_events() from kvm_vcpu_running(), but that hack will
be removed in the near future. Until that call is removed, the patch
introduces no real functional change.
Fixes: 5ef8acbdd687 ("KVM: nVMX: Emulate MTF when performing instruction emulation")
Cc: stable@vger.kernel.org
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220921003201.1441511-3-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Wed, 21 Sep 2022 00:31:50 +0000 (00:31 +0000)]
KVM: x86: make vendor code check for all nested events
Interrupts, NMIs, etc. sent while in guest mode are already handled
properly by the *_interrupt_allowed callbacks, but other events that
are specific to guest mode can cause a vCPU to be runnable.
In the case of VMX there are two: the preemption timer and the
monitor trap. The VMX preemption timer is already special cased via
the hv_timer_pending callback, but the purpose of the callback can be
easily extended to MTF or in fact any other event that can occur only
in guest mode.
Rename the callback and add an MTF check; kvm_arch_vcpu_runnable()
now can return true if an MTF is pending, without relying on
kvm_vcpu_running()'s call to kvm_check_nested_events(). Until that call
is removed, however, the patch introduces no functional change.
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220921003201.1441511-2-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Oliver Upton [Fri, 19 Aug 2022 19:01:58 +0000 (19:01 +0000)]
mailmap: Update Oliver's email address
While I'm still at Google, I've since switched to a linux.dev account
for working upstream.
Add an alias to the new address.
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Message-Id: <20220819190158.234290-1-oliver.upton@linux.dev>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Tue, 30 Aug 2022 23:16:14 +0000 (23:16 +0000)]
KVM: x86: Allow force_emulation_prefix to be written without a reload
Allow force_emulation_prefix to be written by privileged userspace
without reloading KVM. The param does not have any persistent effects
and is trivial to snapshot.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220830231614.3580124-28-seanjc@google.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Tue, 30 Aug 2022 23:16:13 +0000 (23:16 +0000)]
KVM: selftests: Add an x86-only test to verify nested exception queueing
Add a test to verify that KVM_{G,S}ET_EVENTS play nice with pending vs.
injected exceptions when an exception is being queued for L2, and that
KVM correctly handles L1's exception intercept wants.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Link: https://lore.kernel.org/r/20220830231614.3580124-27-seanjc@google.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Tue, 30 Aug 2022 23:16:12 +0000 (23:16 +0000)]
KVM: selftests: Use uapi header to get VMX and SVM exit reasons/codes
Include the vmx.h and svm.h uapi headers that KVM so kindly provides
instead of manually defining all the same exit reasons/codes.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Link: https://lore.kernel.org/r/20220830231614.3580124-26-seanjc@google.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Tue, 30 Aug 2022 23:16:11 +0000 (23:16 +0000)]
KVM: x86: Rename inject_pending_events() to kvm_check_and_inject_events()
Rename inject_pending_events() to kvm_check_and_inject_events() in order
to capture the fact that it handles more than just pending events, and to
(mostly) align with kvm_check_nested_events(), which omits the "inject"
for brevity.
Add a comment above kvm_check_and_inject_events() to provide a high-level
synopsis, and to document a virtualization hole (KVM erratum if you will)
that exists due to KVM not strictly tracking instruction boundaries with
respect to coincident instruction restarts and asynchronous events.
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Link: https://lore.kernel.org/r/20220830231614.3580124-25-seanjc@google.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Tue, 30 Aug 2022 23:16:10 +0000 (23:16 +0000)]
KVM: VMX: Update MTF and ICEBP comments to document KVM's subtle behavior
Document the oddities of ICEBP interception (trap-like #DB is intercepted
as a fault-like exception), and how using VMX's inner "skip" helper
deliberately bypasses the pending MTF and single-step #DB logic.
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Link: https://lore.kernel.org/r/20220830231614.3580124-24-seanjc@google.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Tue, 30 Aug 2022 23:16:09 +0000 (23:16 +0000)]
KVM: x86: Treat pending TRIPLE_FAULT requests as pending exceptions
Treat pending TRIPLE_FAULTS as pending exceptions. A triple fault is an
exception for all intents and purposes, it's just not tracked as such
because there's no vector associated with the exception. E.g. if userspace
were to set vcpu->request_interrupt_window while running L2 and L2 hit a
triple fault, a triple fault nested VM-Exit should be synthesized to L1
before exiting to userspace with KVM_EXIT_IRQ_WINDOW_OPEN.
Link: https://lore.kernel.org/all/YoVHAIGcFgJit1qp@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Link: https://lore.kernel.org/r/20220830231614.3580124-23-seanjc@google.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Tue, 30 Aug 2022 23:16:08 +0000 (23:16 +0000)]
KVM: x86: Morph pending exceptions to pending VM-Exits at queue time
Morph pending exceptions to pending VM-Exits (due to interception) when
the exception is queued instead of waiting until nested events are
checked at VM-Entry. This fixes a longstanding bug where KVM fails to
handle an exception that occurs during delivery of a previous exception,
KVM (L0) and L1 both want to intercept the exception (e.g. #PF for shadow
paging), and KVM determines that the exception is in the guest's domain,
i.e. queues the new exception for L2. Deferring the interception check
causes KVM to escalate various combinations of injected+pending exceptions
to double fault (#DF) without consulting L1's interception desires, and
ends up injecting a spurious #DF into L2.
KVM has fudged around the issue for #PF by special casing emulated #PF
injection for shadow paging, but the underlying issue is not unique to
shadow paging in L0, e.g. if KVM is intercepting #PF because the guest
has a smaller maxphyaddr and L1 (but not L0) is using shadow paging.
Other exceptions are affected as well, e.g. if KVM is intercepting #GP
for one of SVM's workarounds or for the VMware backdoor emulation stuff.
The other cases have gone unnoticed because the #DF is spurious if and
only if L1 resolves the exception, e.g. KVM's goofs go unnoticed if L1
would have injected #DF anyways.
The hack-a-fix has also led to ugly code, e.g. bailing from the emulator
if #PF injection forced a nested VM-Exit and the emulator finds itself
back in L1. Allowing for direct-to-VM-Exit queueing also neatly solves
the async #PF in L2 mess; no need to set a magic flag and token, simply
queue a #PF nested VM-Exit.
Deal with event migration by flagging that a pending exception was queued
by userspace and check for interception at the next KVM_RUN, e.g. so that
KVM does the right thing regardless of the order in which userspace
restores nested state vs. event state.
When "getting" events from userspace, simply drop any pending excpetion
that is destined to be intercepted if there is also an injected exception
to be migrated. Ideally, KVM would migrate both events, but that would
require new ABI, and practically speaking losing the event is unlikely to
be noticed, let alone fatal. The injected exception is captured, RIP
still points at the original faulting instruction, etc... So either the
injection on the target will trigger the same intercepted exception, or
the source of the intercepted exception was transient and/or
non-deterministic, thus dropping it is ok-ish.
Fixes: a04aead144fd ("KVM: nSVM: fix running nested guests when npt=0")
Fixes: feaf0c7dc473 ("KVM: nVMX: Do not generate #DF if #PF happens during exception delivery into L2")
Cc: Jim Mattson <jmattson@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Link: https://lore.kernel.org/r/20220830231614.3580124-22-seanjc@google.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Tue, 30 Aug 2022 23:16:07 +0000 (23:16 +0000)]
KVM: nVMX: Document priority of all known events on Intel CPUs
Add a gigantic comment above vmx_check_nested_events() to document the
priorities of all known events on Intel CPUs. Intel's SDM doesn't
include VMX-specific events in its "Priority Among Concurrent Events",
which makes it painfully difficult to suss out the correct priority
between things like Monitor Trap Flag VM-Exits and pending #DBs.
Kudos to Jim Mattson for doing the hard work of collecting and
interpreting the priorities from various locations throughout the SDM
(because putting them all in one place in the SDM would be too easy).
Cc: Jim Mattson <jmattson@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Link: https://lore.kernel.org/r/20220830231614.3580124-21-seanjc@google.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Tue, 30 Aug 2022 23:16:06 +0000 (23:16 +0000)]
KVM: nVMX: Add a helper to identify low-priority #DB traps
Add a helper to identify "low"-priority #DB traps, i.e. trap-like #DBs
that aren't TSS T flag #DBs, and tweak the related code to operate on any
queued exception. A future commit will separate exceptions that are
intercepted by L1, i.e. cause nested VM-Exit, from those that do NOT
trigger nested VM-Exit. I.e. there will be multiple exception structs
and multiple invocations of the helpers.
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Link: https://lore.kernel.org/r/20220830231614.3580124-20-seanjc@google.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Tue, 30 Aug 2022 23:16:05 +0000 (23:16 +0000)]
KVM: x86: Evaluate ability to inject SMI/NMI/IRQ after potential VM-Exit
Determine whether or not new events can be injected after checking nested
events. If a VM-Exit occurred during nested event handling, any previous
event that needed re-injection is gone from KVM's perspective; the event
is captured in the vmc*12 VM-Exit information, but doesn't exist in terms
of what needs to be done for entry to L1.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Link: https://lore.kernel.org/r/20220830231614.3580124-19-seanjc@google.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Tue, 30 Aug 2022 23:16:04 +0000 (23:16 +0000)]
KVM: x86: Hoist nested event checks above event injection logic
Perform nested event checks before re-injecting exceptions/events into
L2. If a pending exception causes VM-Exit to L1, re-injecting events
into vmcs02 is premature and wasted effort. Take care to ensure events
that need to be re-injected are still re-injected if checking for nested
events "fails", i.e. if KVM needs to force an immediate entry+exit to
complete the to-be-re-injected event.
Keep the "can_inject" logic the same for now; it too can be pushed below
the nested checks, but is a slightly riskier change (see past bugs about
events not being properly purged on nested VM-Exit).
Add and/or modify comments to better document the various interactions.
Of note is the comment regarding "blocking" previously injected NMIs and
IRQs if an exception is pending. The old comment isn't wrong strictly
speaking, but it failed to capture the reason why the logic even exists.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Link: https://lore.kernel.org/r/20220830231614.3580124-18-seanjc@google.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Tue, 30 Aug 2022 23:16:03 +0000 (23:16 +0000)]
KVM: x86: Use kvm_queue_exception_e() to queue #DF
Queue #DF by recursing on kvm_multiple_exception() by way of
kvm_queue_exception_e() instead of open coding the behavior. This will
allow KVM to Just Work when a future commit moves exception interception
checks (for L2 => L1) into kvm_multiple_exception().
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Link: https://lore.kernel.org/r/20220830231614.3580124-17-seanjc@google.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Tue, 30 Aug 2022 23:16:02 +0000 (23:16 +0000)]
KVM: x86: Formalize blocking of nested pending exceptions
Capture nested_run_pending as block_pending_exceptions so that the logic
of why exceptions are blocked only needs to be documented once instead of
at every place that employs the logic.
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Link: https://lore.kernel.org/r/20220830231614.3580124-16-seanjc@google.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Tue, 30 Aug 2022 23:16:01 +0000 (23:16 +0000)]
KVM: x86: Make kvm_queued_exception a properly named, visible struct
Move the definition of "struct kvm_queued_exception" out of kvm_vcpu_arch
in anticipation of adding a second instance in kvm_vcpu_arch to handle
exceptions that occur when vectoring an injected exception and are
morphed to VM-Exit instead of leading to #DF.
Opportunistically take advantage of the churn to rename "nr" to "vector".
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Link: https://lore.kernel.org/r/20220830231614.3580124-15-seanjc@google.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Tue, 30 Aug 2022 23:16:00 +0000 (23:16 +0000)]
KVM: x86: Rename kvm_x86_ops.queue_exception to inject_exception
Rename the kvm_x86_ops hook for exception injection to better reflect
reality, and to align with pretty much every other related function name
in KVM.
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Link: https://lore.kernel.org/r/20220830231614.3580124-14-seanjc@google.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Tue, 30 Aug 2022 23:15:59 +0000 (23:15 +0000)]
KVM: VMX: Inject #PF on ENCLS as "emulated" #PF
Treat #PFs that occur during emulation of ENCLS as, wait for it, emulated
page faults. Practically speaking, this is a glorified nop as the
exception is never of the nested flavor, and it's extremely unlikely the
guest is relying on the side effect of an implicit INVLPG on the faulting
address.
Fixes: 70210c044b4e ("KVM: VMX: Add SGX ENCLS[ECREATE] handler to enforce CPUID restrictions")
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Link: https://lore.kernel.org/r/20220830231614.3580124-13-seanjc@google.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Tue, 30 Aug 2022 23:15:58 +0000 (23:15 +0000)]
KVM: nVMX: Unconditionally clear mtf_pending on nested VM-Exit
Clear mtf_pending on nested VM-Exit instead of handling the clear on a
case-by-case basis in vmx_check_nested_events(). The pending MTF should
never survive nested VM-Exit, as it is a property of KVM's run of the
current L2, i.e. should never affect the next L2 run by L1. In practice,
this is likely a nop as getting to L1 with nested_run_pending is
impossible, and KVM doesn't correctly handle morphing a pending exception
that occurs on a prior injected exception (the need to re-inject an
exception being the other case where MTF isn't cleared). However, KVM will
hopefully soon correctly deal with a pending exception on top of an
injected exception.
Add a TODO to document that KVM has a priority inversion bug between
SMIs and MTF (and trap-like #DBs), and that KVM also doesn't properly
save/restore MTF across SMI/RSM.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Link: https://lore.kernel.org/r/20220830231614.3580124-12-seanjc@google.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
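Conceptually, the per-case clearing collapses into the common nested
VM-Exit path, along these lines (a sketch, not the literal diff):

    void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
                           u32 exit_intr_info,
                           unsigned long exit_qualification)
    {
            struct vcpu_vmx *vmx = to_vmx(vcpu);

            /*
             * MTF is a property of KVM's run of the current L2; it must
             * never survive into L1 or the next L2 run.
             */
            vmx->nested.mtf_pending = false;

            /* ... rest of the VM-Exit emulation ... */
    }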
Sean Christopherson [Tue, 30 Aug 2022 23:15:57 +0000 (23:15 +0000)]
KVM: nVMX: Ignore SIPI that arrives in L2 when vCPU is not in WFS
Fall through to handling other pending exception/events for L2 if SIPI
is pending while the CPU is not in Wait-for-SIPI. KVM correctly ignores
the event, but incorrectly returns immediately, e.g. a SIPI coincident
with another event could lead to KVM incorrectly routing the event to L1
instead of L2.
Fixes: bf0cd88ce363 ("KVM: x86: emulate wait-for-SIPI and SIPI-VMExit")
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Link: https://lore.kernel.org/r/20220830231614.3580124-11-seanjc@google.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
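The fix is essentially handle-and-continue instead of handle-and-return,
roughly (control flow approximated, details elided):

    if (lapic_in_kernel(vcpu) &&
        test_bit(KVM_APIC_SIPI, &apic->pending_events)) {
            if (block_nested_events)
                    return -EBUSY;
            if (vcpu->arch.mp_state == KVM_MP_STATE_INIT_RECEIVED) {
                    clear_bit(KVM_APIC_SIPI, &apic->pending_events);
                    /* ... synthesize the SIPI VM-Exit to L1 ... */
            }
            /* Not in WFS: ignore the SIPI but keep checking events. */
    }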
Sean Christopherson [Tue, 30 Aug 2022 23:15:56 +0000 (23:15 +0000)]
KVM: x86: Use DR7_GD macro instead of open coding check in emulator
Use DR7_GD in the emulator instead of open coding the check, and drop a
comically wrong comment.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Link: https://lore.kernel.org/r/20220830231614.3580124-10-seanjc@google.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
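DR7.GD is bit 13, so the change amounts to the following (accessor
signature approximated):

    ulong dr7;

    ctxt->ops->get_dr(ctxt, 7, &dr7);
    if (dr7 & DR7_GD)       /* was open coded, e.g. dr7 & (1 << 13) */
            ;               /* ... raise #DB with DR6.BD=1 ... */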
Sean Christopherson [Tue, 30 Aug 2022 23:15:55 +0000 (23:15 +0000)]
KVM: x86: Treat #DBs from the emulator as fault-like (code and DR7.GD=1)
Add a dedicated "exception type" for #DBs, as #DBs can be fault-like or
trap-like depending on the sub-type of #DB, and effectively defer the
decision of what to do with the #DB to the caller.
For the emulator's two calls to exception_type(), treat the #DB as
fault-like, as the emulator handles only code breakpoint and general
detect #DBs, both of which are fault-like.
For event injection, which uses exception_type() to determine whether to
set EFLAGS.RF=1 on the stack, keep the current behavior of not setting
RF=1 for #DBs. Intel and AMD explicitly state RF isn't set on code #DBs,
so exempting them by failing the "== EXCPT_FAULT" check is correct. The only
other fault-like #DB is General Detect, and despite Intel and AMD both
strongly implying (through omission) that General Detect #DBs should set
RF=1, hardware (multiple generations of both Intel and AMD) in fact does
not. Through insider knowledge, extreme foresight, sheer dumb luck, or
some combination thereof, KVM correctly handled RF for General Detect #DBs.
Fixes: 38827dbd3fb8 ("KVM: x86: Do not update EFLAGS on faulting emulation")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Link: https://lore.kernel.org/r/20220830231614.3580124-9-seanjc@google.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
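A sketch of the dedicated type (names per the changelog, body
approximate):

    static int exception_type(int vector)
    {
            /* #DB can be fault-like or trap-like; let the caller decide. */
            if (vector == DB_VECTOR)
                    return EXCPT_DB;

            /* ... EXCPT_TRAP / EXCPT_ABORT / EXCPT_FAULT as before ... */
            return EXCPT_FAULT;
    }

The emulator then treats EXCPT_DB as fault-like, while event injection
keeps RF=0 for #DBs by setting RF only in the "== EXCPT_FAULT" case.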
Sean Christopherson [Tue, 30 Aug 2022 23:15:54 +0000 (23:15 +0000)]
KVM: nVMX: Prioritize TSS T-flag #DBs over Monitor Trap Flag
Service TSS T-flag #DBs prior to pending MTFs, as such #DBs are higher
priority than MTF. KVM itself doesn't emulate TSS #DBs, and any such
exceptions injected from L1 will be handled by hardware (or morphed to
a fault-like exception if injection fails), but theoretically userspace
could pend a TSS T-flag #DB in conjunction with a pending MTF.
Note, there's no known use case this fixes, it's purely to be technically
correct with respect to Intel's SDM.
Cc: Oliver Upton <oupton@google.com>
Cc: Peter Shier <pshier@google.com>
Fixes: 5ef8acbdd687 ("KVM: nVMX: Emulate MTF when performing instruction emulation")
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Link: https://lore.kernel.org/r/20220830231614.3580124-8-seanjc@google.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Tue, 30 Aug 2022 23:15:53 +0000 (23:15 +0000)]
KVM: nVMX: Treat General Detect #DB (DR7.GD=1) as fault-like
Exclude General Detect #DBs, which have fault-like behavior but also have
a non-zero payload (DR6.BD=1), from nVMX's handling of pending debug
traps. Opportunistically rewrite the comment to better document what is
being checked, i.e. "has a non-zero payload" vs. "has a payload", and to
call out the many caveats surrounding #DBs that KVM dodges one way or
another.
Cc: Oliver Upton <oupton@google.com>
Cc: Peter Shier <pshier@google.com>
Fixes: 684c0422da71 ("KVM: nVMX: Handle pending #DB when injecting INIT VM-exit")
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Link: https://lore.kernel.org/r/20220830231614.3580124-7-seanjc@google.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Tue, 30 Aug 2022 23:15:52 +0000 (23:15 +0000)]
KVM: x86: Suppress code #DBs on Intel if MOV/POP SS blocking is active
Suppress code breakpoints if MOV/POP SS blocking is active and the guest
CPU is Intel, i.e. if the guest thinks it's running on an Intel CPU.
Intel CPUs inhibit code #DBs when MOV/POP SS blocking is active, whereas
AMD (and its descendents) do not.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220830231614.3580124-6-seanjc@google.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
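A sketch of the resulting predicate (helper name assumed, not
necessarily the literal diff):

    static bool kvm_is_code_breakpoint_inhibited(struct kvm_vcpu *vcpu)
    {
            if (kvm_get_rflags(vcpu) & X86_EFLAGS_RF)
                    return true;

            /* Intel, unlike AMD, inhibits code #DBs in the MOV/POP SS shadow. */
            return guest_cpuid_is_intel(vcpu) &&
                   (static_call(kvm_x86_get_interrupt_shadow)(vcpu) &
                    KVM_X86_SHADOW_INT_MOV_SS);
    }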
Sean Christopherson [Tue, 30 Aug 2022 23:15:51 +0000 (23:15 +0000)]
KVM: x86: Allow clearing RFLAGS.RF on forced emulation to test code #DBs
Extend force_emulation_prefix to an 'int' and use bit 1 as a flag to
indicate that KVM should clear RFLAGS.RF before emulating, e.g. to allow
tests to force emulation of code breakpoints in conjunction with MOV/POP
SS blocking, which is impossible without KVM intervention as VMX
unconditionally sets RFLAGS.RF on intercepted #UD.
Make the behavior controllable so that tests can also test RFLAGS.RF=1
(again in conjunction with code #DBs).
Note, clearing RFLAGS.RF won't create an infinite #DB loop as the guest's
IRET from the #DB handler will return to the instruction and not the
prefix, i.e. the restart won't force emulation.
Opportunistically convert the permissions to the preferred octal format.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220830231614.3580124-5-seanjc@google.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
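In sketch form (flag name assumed for illustration):

    /* Bit 0 enables FEP itself, bit 1 additionally clears RFLAGS.RF. */
    #define KVM_FEP_CLEAR_RFLAGS_RF BIT(1)

    if (force_emulation_prefix & KVM_FEP_CLEAR_RFLAGS_RF)
            kvm_set_rflags(vcpu, kvm_get_rflags(vcpu) & ~X86_EFLAGS_RF);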
Sean Christopherson [Tue, 30 Aug 2022 23:15:50 +0000 (23:15 +0000)]
KVM: x86: Don't check for code breakpoints when emulating on exception
Don't check for code breakpoints during instruction emulation if the
emulation was triggered by exception interception. Code breakpoints are
the highest priority fault-like exception, and KVM only emulates on
exceptions that are fault-like. Thus, if hardware signaled a different
exception, then the vCPU is already past the stage of checking for
hardware breakpoints.
This is likely a glorified nop in terms of functionality; it is mostly
for clarification, though technically an optimization. Intel's SDM explicitly
states vmcs.GUEST_RFLAGS.RF on exception interception is the same as the
value that would have been saved on the stack had the exception not been
intercepted, i.e. will be '1' due to all fault-like exceptions setting RF
to '1'. AMD says "guest state saved ... is the processor state as of the
moment the intercept triggers", but that begs the question, "when does
the intercept trigger?".
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Link: https://lore.kernel.org/r/20220830231614.3580124-4-seanjc@google.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Tue, 30 Aug 2022 23:15:49 +0000 (23:15 +0000)]
KVM: VMX: Drop bits 31:16 when shoving exception error code into VMCS
Deliberately truncate the exception error code when shoving it into the
VMCS (VM-Entry field for vmcs01 and vmcs02, VM-Exit field for vmcs12).
Intel CPUs are incapable of handling 32-bit error codes and will never
generate an error code with bits 31:16, but userspace can provide an
arbitrary error code via KVM_SET_VCPU_EVENTS. Failure to drop the bits
on exception injection results in failed VM-Entry, as VMX disallows
setting bits 31:16. Setting the bits on VM-Exit would at best confuse
L1, and at worse induce a nested VM-Entry failure, e.g. if L1 decided to
reinject the exception back into L2.
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Link: https://lore.kernel.org/r/20220830231614.3580124-3-seanjc@google.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
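The truncation itself is a one-liner per field, e.g. (sketch):

    /* Intel hardware never generates error codes with bits 31:16 set. */
    vmcs_write32(VM_ENTRY_EXCEPTION_ERROR_CODE,
                 (u16)vcpu->arch.exception.error_code);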
Sean Christopherson [Tue, 30 Aug 2022 23:15:48 +0000 (23:15 +0000)]
KVM: nVMX: Unconditionally purge queued/injected events on nested "exit"
Drop pending exceptions and events queued for re-injection when leaving
nested guest mode, even if the "exit" is due to VM-Fail, SMI, or forced
by host userspace. Failure to purge events could result in an event
belonging to L2 being injected into L1.
This _should_ never happen for VM-Fail as all events should be blocked by
nested_run_pending, but it's possible if KVM, not the L1 hypervisor, is
the source of VM-Fail when running vmcs02.
SMI is a nop (barring unknown bugs) as recognition of SMI and thus entry
to SMM is blocked by pending exceptions and re-injected events.
Forced exit is definitely buggy, but has likely gone unnoticed because
userspace probably follows the forced exit with KVM_SET_VCPU_EVENTS (or
some other ioctl() that purges the queue).
Fixes: 4f350c6dbcb9 ("kvm: nVMX: Handle deferred early VMLAUNCH/VMRESUME failure properly")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Link: https://lore.kernel.org/r/20220830231614.3580124-2-seanjc@google.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
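Conceptually the purge is (placement in the nested exit path assumed;
the helpers themselves exist in KVM's x86 code):

    /* Drop anything still queued for L2 when leaving nested guest mode. */
    kvm_clear_exception_queue(vcpu);
    kvm_clear_interrupt_queue(vcpu);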
Hou Wenlong [Fri, 2 Sep 2022 02:47:01 +0000 (10:47 +0800)]
KVM: x86: Add missing trace points for RDMSR/WRMSR in emulator path
Since the RDMSR/WRMSR emulation uses a separate emulator interface,
the trace points for RDMSR/WRMSR can be added in the emulator path,
just as in the normal path.
Signed-off-by: Hou Wenlong <houwenlong.hwl@antgroup.com>
Link: https://lore.kernel.org/r/39181a9f777a72d61a4d0bb9f6984ccbd1de2ea3.1661930557.git.houwenlong.hwl@antgroup.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
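E.g. the emulator's MSR callbacks can invoke the same tracepoints the
non-emulator path already uses (sketch):

    /* On the read side: */
    trace_kvm_msr_read(msr_index, *pdata);

    /* And on the write side: */
    trace_kvm_msr_write(msr_index, data);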
Hou Wenlong [Fri, 2 Sep 2022 02:47:00 +0000 (10:47 +0800)]
KVM: x86: Return emulator error if RDMSR/WRMSR emulation failed
The return value of emulator_{get|set}_msr_with_filter() is confusing,
since MSR access errors and emulator errors are mixed. Although
KVM_MSR_RET_* doesn't conflict with X86EMUL_IO_NEEDED at present, it is
better to convert MSR access errors to emulator errors if an error value
is needed.
So move the "r < 0" handling for WRMSR emulation into the set helper
function, so that only X86EMUL_* values are returned by the helper
functions. Also add an "r < 0" check in the get helper function; KVM
doesn't return -errno today, but assuming that will always hold true is
unnecessarily risky.
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Hou Wenlong <houwenlong.hwl@antgroup.com>
Link: https://lore.kernel.org/r/09b2847fc3bcb8937fb11738f0ccf7be7f61d9dd.1661930557.git.houwenlong.hwl@antgroup.com
[sean: wrap changelog less aggressively]
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Jilin Yuan [Wed, 31 Aug 2022 12:52:17 +0000 (20:52 +0800)]
KVM: x86/mmu: fix repeated words in comments
Delete the redundant word 'to'.
Signed-off-by: Jilin Yuan <yuanjilin@cdjrlc.com>
Link: https://lore.kernel.org/r/20220831125217.12313-1-yuanjilin@cdjrlc.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Vitaly Kuznetsov [Tue, 30 Aug 2022 13:37:37 +0000 (15:37 +0200)]
KVM: nVMX: Use cached host MSR_IA32_VMX_MISC value for setting up nested MSR
vmcs_config has cached host MSR_IA32_VMX_MISC value, use it for setting
up nested MSR_IA32_VMX_MISC in nested_vmx_setup_ctls_msrs() and avoid the
redundant rdmsr().
No (real) functional change intended.
Reviewed-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220830133737.1539624-34-vkuznets@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Vitaly Kuznetsov [Tue, 30 Aug 2022 13:37:36 +0000 (15:37 +0200)]
KVM: VMX: Cache MSR_IA32_VMX_MISC in vmcs_config
Like other host VMX control MSRs, MSR_IA32_VMX_MISC can be cached in
vmcs_config to avoid the need to re-read it later, e.g. from
cpu_has_vmx_intel_pt() or cpu_has_vmx_shadow_vmcs().
No (real) functional change intended.
Reviewed-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220830133737.1539624-33-vkuznets@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
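In sketch form (the cached field in vmcs_config is assumed):

    /* setup_vmcs_config(): read the MSR once and cache it. */
    rdmsrl(MSR_IA32_VMX_MISC, vmcs_conf->misc);

    /* Later users consult the cache instead of re-reading the MSR. */
    static inline bool cpu_has_vmx_intel_pt(void)
    {
            return vmcs_config.misc & MSR_IA32_VMX_MISC_INTEL_PT;
    }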
Vitaly Kuznetsov [Tue, 30 Aug 2022 13:37:35 +0000 (15:37 +0200)]
KVM: nVMX: Use sanitized allowed-1 bits for VMX control MSRs
Using raw host MSR values for setting up nested VMX control MSRs is
incorrect as some features need to be disabled, e.g. when KVM runs as
a nested hypervisor on Hyper-V and uses Enlightened VMCS or when a
workaround for IA32_PERF_GLOBAL_CTRL is applied. For non-nested VMX, this
is done in setup_vmcs_config() and the result is stored in vmcs_config.
Use it for setting up allowed-1 bits in nested VMX MSRs too.
Suggested-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220830133737.1539624-32-vkuznets@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Vitaly Kuznetsov [Tue, 30 Aug 2022 13:37:34 +0000 (15:37 +0200)]
KVM: nVMX: Always set required-1 bits of pinbased_ctls to PIN_BASED_ALWAYSON_WITHOUT_TRUE_MSR
Similar to exit_ctls_low, entry_ctls_low, and procbased_ctls_low,
pinbased_ctls_low should be set to PIN_BASED_ALWAYSON_WITHOUT_TRUE_MSR
and not to the host's MSR_IA32_VMX_PINBASED_CTLS value OR'ed ('|=')
with PIN_BASED_ALWAYSON_WITHOUT_TRUE_MSR.
Commit eabeaaccfca0 ("KVM: nVMX: Clean up and fix pin-based
execution controls"), which introduced the '|=', doesn't mention why
this is needed; the change seems rather accidental.
Note: normally, required-1 portion of MSR_IA32_VMX_PINBASED_CTLS should
be equal to PIN_BASED_ALWAYSON_WITHOUT_TRUE_MSR so no behavioral change
is expected, however, it is (in theory) possible to observe something
different there when e.g. KVM is running as a nested hypervisor. Hope
this doesn't happen in practice.
Reported-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220830133737.1539624-31-vkuznets@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
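The before/after boils down to (sketch, pointer name assumed):

    /* Before: host bits leak into the required-1 set. */
    msrs->pinbased_ctls_low |= PIN_BASED_ALWAYSON_WITHOUT_TRUE_MSR;

    /* After: required-1 is exactly the architectural always-on set. */
    msrs->pinbased_ctls_low = PIN_BASED_ALWAYSON_WITHOUT_TRUE_MSR;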
Vitaly Kuznetsov [Tue, 30 Aug 2022 13:37:33 +0000 (15:37 +0200)]
KVM: VMX: Move LOAD_IA32_PERF_GLOBAL_CTRL errata handling out of setup_vmcs_config()
As a preparation to reusing the result of setup_vmcs_config() for setting
up nested VMX control MSRs, move LOAD_IA32_PERF_GLOBAL_CTRL errata handling
to vmx_vmexit_ctrl()/vmx_vmentry_ctrl() and print the warning from
hardware_setup(). While it seems reasonable to not expose
LOAD_IA32_PERF_GLOBAL_CTRL controls to L1 hypervisor on buggy CPUs,
such change would inevitably break live migration from older KVMs
where the controls are exposed. Keep the status quo for now, L1 hypervisor
itself is supposed to take care of the errata.
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220830133737.1539624-30-vkuznets@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Jim Mattson [Tue, 30 Aug 2022 13:37:32 +0000 (15:37 +0200)]
KVM: x86: VMX: Replace some Intel model numbers with mnemonics
Intel processor code names are more familiar to many readers than
their decimal model numbers.
Signed-off-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220830133737.1539624-29-vkuznets@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Tue, 30 Aug 2022 13:37:31 +0000 (15:37 +0200)]
KVM: VMX: Adjust CR3/INVLPG interception for EPT=y at runtime, not setup
Clear the CR3 and INVLPG interception controls at runtime based on
whether or not EPT is being _used_, as opposed to clearing the bits at
setup if EPT is _supported_ in hardware, and then restoring them when EPT
is not used. Not mucking with the base config will allow using the base
config as the starting point for emulating the VMX capability MSRs.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220830133737.1539624-28-vkuznets@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
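A sketch of the runtime toggling (macro name hypothetical, exact call
site elided):

    #define CR3_EXITING_BITS (CPU_BASED_CR3_LOAD_EXITING |  \
                              CPU_BASED_CR3_STORE_EXITING | \
                              CPU_BASED_INVLPG_EXITING)

    if (enable_ept)
            exec_controls_clearbit(vmx, CR3_EXITING_BITS);
    else
            exec_controls_setbit(vmx, CR3_EXITING_BITS);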
Vitaly Kuznetsov [Tue, 30 Aug 2022 13:37:30 +0000 (15:37 +0200)]
KVM: VMX: Add missing CPU based VM execution controls to vmcs_config
As a preparation to reusing the result of setup_vmcs_config() in
nested VMX MSR setup, add the CPU based VM execution controls which KVM
doesn't use but supports for nVMX to KVM_OPT_VMX_CPU_BASED_VM_EXEC_CONTROL
and filter them out in vmx_exec_control().
No functional change intended.
Reviewed-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220830133737.1539624-27-vkuznets@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Vitaly Kuznetsov [Tue, 30 Aug 2022 13:37:29 +0000 (15:37 +0200)]
KVM: VMX: Add missing VMEXIT controls to vmcs_config
As a preparation to reusing the result of setup_vmcs_config() in
nested VMX MSR setup, add the VMEXIT controls which KVM doesn't
use but supports for nVMX to KVM_OPT_VMX_VM_EXIT_CONTROLS and
filter them out in vmx_vmexit_ctrl().
No functional change intended.
Reviewed-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220830133737.1539624-26-vkuznets@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Vitaly Kuznetsov [Tue, 30 Aug 2022 13:37:28 +0000 (15:37 +0200)]
KVM: VMX: Move CPU_BASED_CR8_{LOAD,STORE}_EXITING filtering out of setup_vmcs_config()
As a preparation to reusing the result of setup_vmcs_config() in
nested VMX MSR setup, move CPU_BASED_CR8_{LOAD,STORE}_EXITING filtering
to vmx_exec_control().
No functional change intended.
Reviewed-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220830133737.1539624-25-vkuznets@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Vitaly Kuznetsov [Tue, 30 Aug 2022 13:37:27 +0000 (15:37 +0200)]
KVM: VMX: Extend VMX controls macro shenanigans
When VMX controls macros are used to set or clear a control bit, make
sure that this bit was checked in setup_vmcs_config() and thus is properly
reflected in vmcs_config.
Opportunistically drop pointless "< 0" check for adjust_vmx_controls()'s
return value.
No functional change intended.
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220830133737.1539624-24-vkuznets@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Tue, 30 Aug 2022 13:37:26 +0000 (15:37 +0200)]
KVM: VMX: Don't toggle VM_ENTRY_IA32E_MODE for 32-bit kernels/KVM
Don't toggle VM_ENTRY_IA32E_MODE in 32-bit kernels/KVM and instead bug
the VM if KVM attempts to run the guest with EFER.LMA=1. KVM doesn't
support running 64-bit guests with 32-bit hosts.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220830133737.1539624-23-vkuznets@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
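In sketch form (placement in the EFER update path assumed):

    #ifdef CONFIG_X86_64
            if (efer & EFER_LMA)
                    vm_entry_controls_setbit(vmx, VM_ENTRY_IA32E_MODE);
            else
                    vm_entry_controls_clearbit(vmx, VM_ENTRY_IA32E_MODE);
    #else
            /* 64-bit guests on 32-bit hosts are unsupported: a KVM bug. */
            if (KVM_BUG_ON(efer & EFER_LMA, vcpu->kvm))
                    return;
    #endif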
Vitaly Kuznetsov [Tue, 30 Aug 2022 13:37:25 +0000 (15:37 +0200)]
KVM: VMX: Tweak the special handling of SECONDARY_EXEC_ENCLS_EXITING in setup_vmcs_config()
SECONDARY_EXEC_ENCLS_EXITING is the only control which is conditionally
added to the 'optional' checklist in setup_vmcs_config() but the special
case can be avoided by always checking for its presence first and filtering
out the result later.
Note: the situation when SECONDARY_EXEC_ENCLS_EXITING is present but
cpu_has_sgx() is false is possible when SGX is "soft-disabled", e.g. if
software writes MCE control MSRs or there's an uncorrectable #MC.
Reviewed-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220830133737.1539624-22-vkuznets@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Vitaly Kuznetsov [Tue, 30 Aug 2022 13:37:24 +0000 (15:37 +0200)]
KVM: VMX: Check CPU_BASED_{INTR,NMI}_WINDOW_EXITING in setup_vmcs_config()
CPU_BASED_{INTR,NMI}_WINDOW_EXITING controls are toggled dynamically by
vmx_enable_{irq,nmi}_window, handle_interrupt_window(), handle_nmi_window()
but setup_vmcs_config() doesn't check their existence. Add the check and
filter the controls out in vmx_exec_control().
Note: KVM explicitly supports CPUs without VIRTUAL_NMIS and all these CPUs
are supposedly lacking NMI_WINDOW_EXITING too. Adjust cpu_has_virtual_nmis()
accordingly.
No functional change intended.
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220830133737.1539624-21-vkuznets@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Vitaly Kuznetsov [Tue, 30 Aug 2022 13:37:23 +0000 (15:37 +0200)]
KVM: VMX: Check VM_ENTRY_IA32E_MODE in setup_vmcs_config()
VM_ENTRY_IA32E_MODE control is toggled dynamically by vmx_set_efer()
and setup_vmcs_config() doesn't check its existence. On the contrary,
nested_vmx_setup_ctls_msrs() does set it on x86_64. Add the missing
check and filter the bit out in vmx_vmentry_ctrl().
No (real) functional change intended as all existing CPUs supporting
long mode and VMX are supposed to have it.
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220830133737.1539624-20-vkuznets@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Tue, 30 Aug 2022 13:37:22 +0000 (15:37 +0200)]
KVM: nVMX: Always emulate PERF_GLOBAL_CTRL VM-Entry/VM-Exit controls
Advertise VM_{ENTRY,EXIT}_LOAD_IA32_PERF_GLOBAL_CTRL as being supported
for nested VMs irrespective of hardware support. KVM fully emulates
the controls, i.e. manually emulates MSR writes on entry/exit, and never
propagates the guest settings directly to vmcs02.
In addition to allowing L1 VMMs to use the controls on older hardware,
unconditionally advertising the controls will also allow KVM to use its
vmcs01 configuration as the basis for the nested VMX configuration
without causing a regression (due to the errata which causes KVM to "hide"
the control from vmcs01 but not vmcs12).
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220830133737.1539624-19-vkuznets@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
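The advertisement itself is unconditional bit-setting (field names per
KVM's nested VMX MSR struct, sketch):

    msrs->entry_ctls_high |= VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL;
    msrs->exit_ctls_high  |= VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL;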
Sean Christopherson [Tue, 30 Aug 2022 13:37:21 +0000 (15:37 +0200)]
KVM: nVMX: Don't propagate vmcs12's PERF_GLOBAL_CTRL settings to vmcs02
Don't propagate vmcs12's VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL to vmcs02.
KVM doesn't disallow L1 from using VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL
even when KVM itself doesn't use the control, e.g. due to the various
CPU errata where the MSR can be corrupted on VM-Exit.
Preserve KVM's (vmcs01) setting to hopefully avoid having to toggle the
bit in vmcs02 at a later point. E.g. if KVM is loading PERF_GLOBAL_CTRL
when running L1, then odds are good KVM will also load the MSR when
running L2.
Fixes: 8bf00a529967 ("KVM: VMX: add support for switching of PERF_GLOBAL_CTRL")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Link: https://lore.kernel.org/r/20220830133737.1539624-18-vkuznets@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Vitaly Kuznetsov [Tue, 30 Aug 2022 13:37:20 +0000 (15:37 +0200)]
KVM: VMX: Get rid of eVMCS specific VMX controls sanitization
With the updated eVMCSv1 definition, there are no known 'problematic'
controls which are exposed in VMX control MSRs but are not present in
eVMCSv1: all known Hyper-V versions either don't expose the new fields
by not setting bits in the VMX feature controls or support the new
eVMCS revision.
Get rid of VMX control MSRs filtering for KVM on Hyper-V.
Note: VMX control MSRs filtering for Hyper-V on KVM
(nested_evmcs_filter_control_msr()) stays as even the updated eVMCSv1
definition doesn't have all the features implemented by KVM and some
fields are still missing. Moreover, nested_evmcs_filter_control_msr()
has to support the original eVMCSv1 version if the VMM so wishes.
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220830133737.1539624-17-vkuznets@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Vitaly Kuznetsov [Tue, 30 Aug 2022 13:37:19 +0000 (15:37 +0200)]
KVM: nVMX: Support PERF_GLOBAL_CTRL with enlightened VMCS
Enlightened VMCS v1 got updated and now includes the required fields
for loading PERF_GLOBAL_CTRL upon VMENTER/VMEXIT. For KVM on
Hyper-V enablement, KVM can just observe VMX control MSRs and use the
features (with or without eVMCS) when possible.
Hyper-V on KVM is messier as Windows 11 guests fail to boot if the
controls are advertised and a new PV feature flag, CPUID.0x4000000A.EBX
BIT(0), is not set. Honor the Hyper-V CPUID feature flag to play nice
with Windows guests.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Link: https://lore.kernel.org/r/20220830133737.1539624-16-vkuznets@redhat.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>