platform/kernel/linux-starfive.git
kvm: x86: annotate RCU pointers
Muhammad Usama Anjum [Fri, 5 Mar 2021 19:11:23 +0000 (00:11 +0500)]
kvm: x86: annotate RCU pointers

This patch adds the annotation to fix the following sparse errors:
arch/x86/kvm//x86.c:8147:15: error: incompatible types in comparison expression (different address spaces):
arch/x86/kvm//x86.c:8147:15:    struct kvm_apic_map [noderef] __rcu *
arch/x86/kvm//x86.c:8147:15:    struct kvm_apic_map *
arch/x86/kvm//x86.c:10628:16: error: incompatible types in comparison expression (different address spaces):
arch/x86/kvm//x86.c:10628:16:    struct kvm_apic_map [noderef] __rcu *
arch/x86/kvm//x86.c:10628:16:    struct kvm_apic_map *
arch/x86/kvm//x86.c:10629:15: error: incompatible types in comparison expression (different address spaces):
arch/x86/kvm//x86.c:10629:15:    struct kvm_pmu_event_filter [noderef] __rcu *
arch/x86/kvm//x86.c:10629:15:    struct kvm_pmu_event_filter *
arch/x86/kvm//lapic.c:267:15: error: incompatible types in comparison expression (different address spaces):
arch/x86/kvm//lapic.c:267:15:    struct kvm_apic_map [noderef] __rcu *
arch/x86/kvm//lapic.c:267:15:    struct kvm_apic_map *
arch/x86/kvm//lapic.c:269:9: error: incompatible types in comparison expression (different address spaces):
arch/x86/kvm//lapic.c:269:9:    struct kvm_apic_map [noderef] __rcu *
arch/x86/kvm//lapic.c:269:9:    struct kvm_apic_map *
arch/x86/kvm//lapic.c:637:15: error: incompatible types in comparison expression (different address spaces):
arch/x86/kvm//lapic.c:637:15:    struct kvm_apic_map [noderef] __rcu *
arch/x86/kvm//lapic.c:637:15:    struct kvm_apic_map *
arch/x86/kvm//lapic.c:994:15: error: incompatible types in comparison expression (different address spaces):
arch/x86/kvm//lapic.c:994:15:    struct kvm_apic_map [noderef] __rcu *
arch/x86/kvm//lapic.c:994:15:    struct kvm_apic_map *
arch/x86/kvm//lapic.c:1036:15: error: incompatible types in comparison expression (different address spaces):
arch/x86/kvm//lapic.c:1036:15:    struct kvm_apic_map [noderef] __rcu *
arch/x86/kvm//lapic.c:1036:15:    struct kvm_apic_map *
arch/x86/kvm//lapic.c:1173:15: error: incompatible types in comparison expression (different address spaces):
arch/x86/kvm//lapic.c:1173:15:    struct kvm_apic_map [noderef] __rcu *
arch/x86/kvm//lapic.c:1173:15:    struct kvm_apic_map *
arch/x86/kvm//pmu.c:190:18: error: incompatible types in comparison expression (different address spaces):
arch/x86/kvm//pmu.c:190:18:    struct kvm_pmu_event_filter [noderef] __rcu *
arch/x86/kvm//pmu.c:190:18:    struct kvm_pmu_event_filter *
arch/x86/kvm//pmu.c:251:18: error: incompatible types in comparison expression (different address spaces):
arch/x86/kvm//pmu.c:251:18:    struct kvm_pmu_event_filter [noderef] __rcu *
arch/x86/kvm//pmu.c:251:18:    struct kvm_pmu_event_filter *
arch/x86/kvm//pmu.c:522:18: error: incompatible types in comparison expression (different address spaces):
arch/x86/kvm//pmu.c:522:18:    struct kvm_pmu_event_filter [noderef] __rcu *
arch/x86/kvm//pmu.c:522:18:    struct kvm_pmu_event_filter *
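
As a rough sketch of what the annotation buys (hypothetical container,
not the actual KVM structures), sparse stops complaining once the field
carries __rcu and is only touched through the RCU accessors:

  #include <linux/rcupdate.h>

  struct kvm_apic_map;

  struct arch_example {
          struct kvm_apic_map __rcu *apic_map;  /* __rcu marks the pointer */
  };

  /* Readers: rcu_dereference() strips the address space for sparse and
   * adds the needed dependency ordering; caller holds rcu_read_lock(). */
  static struct kvm_apic_map *map_get(struct arch_example *a)
  {
          return rcu_dereference(a->apic_map);
  }

  /* Writers: rcu_assign_pointer() publishes the new map. */
  static void map_set(struct arch_example *a, struct kvm_apic_map *new)
  {
          rcu_assign_pointer(a->apic_map, new);
  }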

Signed-off-by: Muhammad Usama Anjum <musamaanjum@gmail.com>
Message-Id: <20210305191123.GA497469@LEGION>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: arm64: Fix exclusive limit for IPA size
Marc Zyngier [Thu, 11 Mar 2021 10:00:16 +0000 (10:00 +0000)]
KVM: arm64: Fix exclusive limit for IPA size

When registering a memslot, we check the size and location of that
memslot against the IPA size to ensure that we can provide guest
access to the whole of the memory.

Unfortunately, this check rejects memslots that end up at the exact
limit of the addressing capability for a given IPA size. For example,
it refuses the creation of a 2GB memslot at 0x80000000 with a 32bit
IPA space.

Fix it by relaxing the check to accept a memslot reaching the
limit of the IPA space.

Fixes: c3058d5da222 ("arm/arm64: KVM: Ensure memslots are within KVM_PHYS_SIZE")
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Cc: stable@vger.kernel.org
Reviewed-by: Andrew Jones <drjones@redhat.com>
Link: https://lore.kernel.org/r/20210311100016.3830038-3-maz@kernel.org
KVM: arm64: Reject VM creation when the default IPA size is unsupported
Marc Zyngier [Thu, 11 Mar 2021 10:00:15 +0000 (10:00 +0000)]
KVM: arm64: Reject VM creation when the default IPA size is unsupported

KVM/arm64 has forever used a 40bit default IPA space, partially
due to its 32bit heritage (where the only choice is 40bit).

However, there are implementations in the wild that have a *cough*
much smaller *cough* IPA space, which leads to a misprogramming of
VTCR_EL2, and a guest that is stuck on its first memory access
if userspace dares to ask for the default IPA setting (which most
VMMs do).

Instead, bluntly reject the creation of such a VM, as we can't
satisfy the requirements from userspace (with a one-off warning).
Also clarify the boot warning, and document that the VM creation
will fail when an unsupported IPA size is provided.

Although this is an ABI change, it doesn't really change much
for userspace:

- the guest couldn't run before this change, but no error was
  returned. At least userspace knows what is happening.

- a memory slot that was accepted because it did fit the default
  IPA space now doesn't even get a chance to be registered.

The one thing left to do is to convince userspace to
actually use the IPA space setting instead of relying on the
antiquated default.

Fixes: 233a7cb23531 ("kvm: arm64: Allow tuning the physical address size for VM")
Signed-off-by: Marc Zyngier <maz@kernel.org>
Cc: stable@vger.kernel.org
Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Link: https://lore.kernel.org/r/20210311100016.3830038-2-maz@kernel.org
KVM: arm64: Ensure I-cache isolation between vcpus of a same VM
Marc Zyngier [Wed, 3 Mar 2021 16:45:05 +0000 (16:45 +0000)]
KVM: arm64: Ensure I-cache isolation between vcpus of a same VM

It recently became apparent that the ARMv8 architecture has interesting
rules regarding attributes being used when fetching instructions
if the MMU is off at Stage-1.

In this situation, the CPU is allowed to fetch from the PoC and
allocate into the I-cache (unless the memory is mapped with
the XN attribute at Stage-2).

If we transpose this to vcpus sharing a single physical CPU,
it is possible for a vcpu running with its MMU off to influence
another vcpu running with its MMU on, as the latter is expected to
fetch from the PoU (and self-patching code doesn't flush below that
level).

In order to solve this, reuse the vcpu-private TLB invalidation
code to apply the same policy to the I-cache, nuking it every time
the vcpu runs on a physical CPU that ran another vcpu of the same
VM in the past.

This involves renaming __kvm_tlb_flush_local_vmid() to
__kvm_flush_cpu_context(), and inserting a local i-cache invalidation
there.
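
A sketch of the resulting helper (condensed; the real code has nVHE
and VHE variants, and the exact barrier choice here is an assumption):

  void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu)
  {
          struct tlb_inv_context cxt;

          /* Switch to the VMID of this VM's stage-2 MMU */
          __tlb_switch_to_guest(mmu, &cxt);

          __tlbi(vmalle1);            /* local TLB invalidation, as before */
          asm volatile("ic iallu");   /* new: nuke the local I-cache too */
          dsb(nsh);
          isb();

          __tlb_switch_to_host(&cxt);
  }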

Cc: stable@vger.kernel.org
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210303164505.68492-1-maz@kernel.org
KVM: arm64: Don't use cbz/adr with external symbols
Sami Tolvanen [Fri, 5 Mar 2021 20:21:24 +0000 (12:21 -0800)]
KVM: arm64: Don't use cbz/adr with external symbols

allmodconfig + CONFIG_LTO_CLANG_THIN=y fails to build due to the following
linker errors:

  ld.lld: error: irqbypass.c:(function __guest_enter: .text+0x21CC):
  relocation R_AARCH64_CONDBR19 out of range: 2031220 is not in
  [-1048576, 1048575]; references hyp_panic
  >>> defined in vmlinux.o

  ld.lld: error: irqbypass.c:(function __guest_enter: .text+0x21E0):
  relocation R_AARCH64_ADR_PREL_LO21 out of range: 2031200 is not in
  [-1048576, 1048575]; references hyp_panic
  >>> defined in vmlinux.o

This is because with LTO, the compiler ends up placing hyp_panic()
more than 1MB away from __guest_enter(). Use an unconditional branch
and adr_l instead to fix the issue.
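
Illustratively (not the exact hunk), the rewrite swaps the
range-limited forms for ones with enough reach:

  // before: cbz/adr only reach +/-1MB (R_AARCH64_CONDBR19 and
  // R_AARCH64_ADR_PREL_LO21), which LTO code placement can exceed
  cbz     x0, hyp_panic
  adr     x1, hyp_panic

  // after: invert the test around an unconditional branch (+/-128MB)
  // and use the adr_l macro (adrp+add, +/-4GB) for the address
  cbnz    x0, 1f
  b       hyp_panic
  1:
  adr_l   x1, hyp_panic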

Link: https://github.com/ClangBuiltLinux/linux/issues/1317
Reported-by: Nathan Chancellor <nathan@kernel.org>
Suggested-by: Marc Zyngier <maz@kernel.org>
Suggested-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Will Deacon <will@kernel.org>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210305202124.3768527-1-samitolvanen@google.com
KVM: arm64: Fix range alignment when walking page tables
Jia He [Fri, 5 Mar 2021 18:52:54 +0000 (18:52 +0000)]
KVM: arm64: Fix range alignment when walking page tables

When walking the page tables at a given level, and if the start
address for the range isn't aligned for that level, we propagate
the misalignment on each iteration at that level.

This results in the walker ignoring a number of entries (depending
on the original misalignment) on each subsequent iteration.

Properly aligning the address before the next iteration addresses
this issue.
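
The fix is a one-liner in spirit: realign before stepping (a sketch,
assuming the walker's kvm_granule_size() helper and its data->addr
cursor):

  /* Step to the next entry at this level; align down first so a
   * misaligned start address isn't propagated to later iterations. */
  data->addr = ALIGN_DOWN(data->addr, kvm_granule_size(level));
  data->addr += kvm_granule_size(level);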

Cc: stable@vger.kernel.org
Reported-by: Howard Zhang <Howard.Zhang@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Jia He <justin.he@arm.com>
Fixes: b1e57de62cfb ("KVM: arm64: Add stand-alone page-table walker infrastructure")
[maz: rewrite commit message]
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210303024225.2591-1-justin.he@arm.com
Message-Id: <20210305185254.3730990-9-maz@kernel.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: arm64: Workaround firmware wrongly advertising GICv2-on-v3 compatibility
Marc Zyngier [Fri, 5 Mar 2021 18:52:53 +0000 (18:52 +0000)]
KVM: arm64: Workaround firmware wrongly advertising GICv2-on-v3 compatibility

It looks like we have broken firmware out there that wrongly advertises
a GICv2 compatibility interface, despite the CPUs not being able to deal
with it.

To work around this, check that the CPU initialising KVM is actually able
to switch to MMIO instead of system registers, and use that as a
precondition to enable GICv2 compatibility in KVM.

Note that the detection happens on a single CPU. If the firmware is
lying *and* the CPUs are asymmetric, all hope is lost anyway.
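
The probe amounts to trying to clear ICC_SRE_EL1.SRE and seeing
whether the write sticks; a hedged sketch (accessor names from the
hyp vgic-v3 code, sequencing simplified, interrupts assumed masked):

  u64 sre = read_gicreg(ICC_SRE_EL1);

  write_gicreg(sre & ~ICC_SRE_EL1_SRE, ICC_SRE_EL1);
  isb();

  if (read_gicreg(ICC_SRE_EL1) & ICC_SRE_EL1_SRE) {
          /* SRE stuck at 1: no usable MMIO view, so don't advertise
           * GICv2 compatibility whatever the firmware claims. */
  } else {
          write_gicreg(sre, ICC_SRE_EL1);   /* restore sysreg view */
          isb();
  }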

Reported-by: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>
Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Message-Id: <20210305185254.3730990-8-maz@kernel.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: arm64: Rename __vgic_v3_get_ich_vtr_el2() to __vgic_v3_get_gic_config()
Marc Zyngier [Fri, 5 Mar 2021 18:52:52 +0000 (18:52 +0000)]
KVM: arm64: Rename __vgic_v3_get_ich_vtr_el2() to __vgic_v3_get_gic_config()

As we are about to report a bit more information to the rest of
the kernel, rename __vgic_v3_get_ich_vtr_el2() to the more
explicit __vgic_v3_get_gic_config().

No functional change.

Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Message-Id: <20210305185254.3730990-7-maz@kernel.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: arm64: Don't access PMSELR_EL0/PMUSERENR_EL0 when no PMU is available
Marc Zyngier [Fri, 5 Mar 2021 18:52:51 +0000 (18:52 +0000)]
KVM: arm64: Don't access PMSELR_EL0/PMUSERENR_EL0 when no PMU is available

When running under a nesting hypervisor, it isn't guaranteed that
the virtual HW will include a PMU. In that case, let's not try
to access the PMU registers in the world switch, as that'd be
deadly.
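
A sketch of the guard in the world-switch trap setup (the sysreg
writes are the pre-existing ones; the check is the new part and
assumes the kvm_arm_support_pmu_v3() helper from the next patch):

  /* Only touch the PMU registers if the (virtual) HW implements one. */
  if (kvm_arm_support_pmu_v3()) {
          write_sysreg(0, pmselr_el0);
          write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
  }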

Reported-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Link: https://lore.kernel.org/r/20210209114844.3278746-3-maz@kernel.org
Message-Id: <20210305185254.3730990-6-maz@kernel.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: arm64: Turn kvm_arm_support_pmu_v3() into a static key
Marc Zyngier [Fri, 5 Mar 2021 18:52:50 +0000 (18:52 +0000)]
KVM: arm64: Turn kvm_arm_support_pmu_v3() into a static key

We currently find out about the presence of a HW PMU (or the handling
of that PMU by perf, which amounts to the same thing) in a fairly
roundabout way, by checking the number of counters available to perf.
That's good enough for now, but we will soon need to find out about
that on paths where perf is out of reach (in the world switch).

Instead, let's turn kvm_arm_support_pmu_v3() into a static key.
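
This follows the stock static-key pattern; a sketch:

  DEFINE_STATIC_KEY_FALSE(kvm_arm_pmu_available);

  static __always_inline bool kvm_arm_support_pmu_v3(void)
  {
          return static_branch_likely(&kvm_arm_pmu_available);
  }

  /* Flipped once at init, when perf reports a usable host PMU: */
  static_branch_enable(&kvm_arm_pmu_available);

The branch then costs a patched nop on the hot path, which is what
makes it usable from the world switch.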

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Link: https://lore.kernel.org/r/20210209114844.3278746-2-maz@kernel.org
Message-Id: <20210305185254.3730990-5-maz@kernel.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: arm64: Fix nVHE hyp panic host context restore
Andrew Scull [Fri, 5 Mar 2021 18:52:49 +0000 (18:52 +0000)]
KVM: arm64: Fix nVHE hyp panic host context restore

When panicking from the nVHE hyp and restoring the host context, x29 is
expected to hold a pointer to the host context. This wasn't being done,
so fix it to make sure a valid pointer to the host context is being
used.

Rather than passing a boolean indicating whether or not the host context
should be restored, instead pass the pointer to the host context. NULL
is passed to indicate that no context should be restored.

Fixes: a2e102e20fd6 ("KVM: arm64: nVHE: Handle hyp panics")
Cc: stable@vger.kernel.org
Signed-off-by: Andrew Scull <ascull@google.com>
[maz: partial rewrite to fit 5.12-rc1]
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210219122406.1337626-1-ascull@google.com
Message-Id: <20210305185254.3730990-4-maz@kernel.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: arm64: Avoid corrupting vCPU context register in guest exit
Will Deacon [Fri, 5 Mar 2021 18:52:48 +0000 (18:52 +0000)]
KVM: arm64: Avoid corrupting vCPU context register in guest exit

Commit 7db21530479f ("KVM: arm64: Restore hyp when panicking in guest
context") tracks the currently running vCPU, clearing the pointer to
NULL on exit from a guest.

Unfortunately, the use of 'set_loaded_vcpu' clobbers x1 to point at the
kvm_hyp_ctxt instead of the vCPU context, causing the subsequent RAS
code to go off into the weeds when it saves the DISR assuming that the
CPU context is embedded in a struct vCPU.

Leave x1 alone and use x3 as a temporary register instead when clearing
the vCPU on the guest exit path.

Cc: Marc Zyngier <maz@kernel.org>
Cc: Andrew Scull <ascull@google.com>
Cc: <stable@vger.kernel.org>
Fixes: 7db21530479f ("KVM: arm64: Restore hyp when panicking in guest context")
Suggested-by: Quentin Perret <qperret@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210226181211.14542-1-will@kernel.org
Message-Id: <20210305185254.3730990-3-maz@kernel.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: arm64: nvhe: Save the SPE context early
Suzuki K Poulose [Fri, 5 Mar 2021 18:52:47 +0000 (18:52 +0000)]
KVM: arm64: nvhe: Save the SPE context early

The nVHE KVM hyp drains and disables the SPE buffer, before
entering the guest, as the EL1&0 translation regime
is going to be loaded with that of the guest.

But this operation is performed way too late, because:
  - The owning translation regime of the SPE buffer
    is transferred to EL2 (MDCR_EL2_E2PB == 0).
  - The guest Stage1 is loaded.

Thus the flush could use the host EL1 virtual address, but with
the EL2 translations instead of the host EL1 ones, when writing
out any cached data.

Fix this by moving the SPE buffer handling early enough.
The restore path is doing the right thing.

Fixes: 014c4c77aad7 ("KVM: arm64: Improve debug register save/restore flow")
Cc: stable@vger.kernel.org
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Alexandru Elisei <alexandru.elisei@arm.com>
Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210302120345.3102874-1-suzuki.poulose@arm.com
Message-Id: <20210305185254.3730990-2-maz@kernel.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
kvm: x86: use NULL instead of using plain integer as pointer
Muhammad Usama Anjum [Fri, 5 Mar 2021 18:08:16 +0000 (23:08 +0500)]
kvm: x86: use NULL instead of using plain integer as pointer

Sparse warnings removed:
warning: Using plain integer as NULL pointer

Signed-off-by: Muhammad Usama Anjum <musamaanjum@gmail.com>
Message-Id: <20210305180816.GA488770@LEGION>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: SVM: Connect 'npt' module param to KVM's internal 'npt_enabled'
Sean Christopherson [Fri, 5 Mar 2021 02:16:37 +0000 (18:16 -0800)]
KVM: SVM: Connect 'npt' module param to KVM's internal 'npt_enabled'

Directly connect the 'npt' param to the 'npt_enabled' variable so that
runtime adjustments to npt_enabled are reflected in sysfs.  Move the
!PAE restriction to a runtime check to ensure NPT is forced off if the
host is using 2-level paging, and add a comment explicitly stating why
NPT requires a 64-bit kernel or a kernel with PAE enabled.

Opportunistically switch the param to octal permissions.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210305021637.3768573-1-seanjc@google.com>
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: x86: Ensure deadline timer has truly expired before posting its IRQ
Sean Christopherson [Fri, 5 Mar 2021 02:18:08 +0000 (18:18 -0800)]
KVM: x86: Ensure deadline timer has truly expired before posting its IRQ

When posting a deadline timer interrupt, open code the checks guarding
__kvm_wait_lapic_expire() in order to skip the lapic_timer_int_injected()
check in kvm_wait_lapic_expire().  The injection check will always fail
since the interrupt has not yet been injected.  Moving the call after
injection would also be wrong as that wouldn't actually delay delivery
of the IRQ if it is indeed sent via posted interrupt.
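
Condensed sketch of the open-coded guards in the timer-expiry path
(field and helper names per the x86 lapic code; treat as illustrative):

  if (kvm_use_posted_timer_interrupt(apic->vcpu)) {
          /* Open code kvm_wait_lapic_expire()'s checks, minus
           * lapic_timer_int_injected(): the IRQ isn't injected yet,
           * so that check would always fail here. */
          if (apic->lapic_timer.expired_tscdeadline &&
              apic->lapic_timer.timer_advance_ns)
                  __kvm_wait_lapic_expire(apic->vcpu);
          kvm_apic_inject_pending_timer_irqs(apic);
          return;
  }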

Fixes: 010fd37fddf6 ("KVM: LAPIC: Reduce world switch latency caused by timer_advance_ns")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210305021808.3769732-1-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: SVM: Clear the CR4 register on reset
Babu Moger [Tue, 2 Mar 2021 18:51:31 +0000 (12:51 -0600)]
KVM: SVM: Clear the CR4 register on reset

This problem was reported on a SVM guest while executing kexec.
Kexec fails to load the new kernel when the PCID feature is enabled.

When kexec starts loading the new kernel, it starts the process by
resetting the vCPU's and then bringing each vCPU online one by one.
The vCPU reset is supposed to reset all the register states before the
vCPUs are brought online. However, the CR4 register is not reset during
this process. If this register was already set up during the last boot,
all the flags can remain intact. The X86_CR4_PCIDE bit can only be
enabled in long mode. So, it must be enabled much later in SMP
initialization.  Having the X86_CR4_PCIDE bit set during SMP boot can
cause boot failures.

Fix the issue by resetting the CR4 register in init_vmcb().

Signed-off-by: Babu Moger <babu.moger@amd.com>
Message-Id: <161471109108.30811.6392805173629704166.stgit@bmoger-ubuntu>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: x86/xen: Add support for vCPU runstate information
David Woodhouse [Mon, 1 Mar 2021 12:53:09 +0000 (12:53 +0000)]
KVM: x86/xen: Add support for vCPU runstate information

This is how Xen guests do steal time accounting. The hypervisor records
the amount of time spent in each of running/runnable/blocked/offline
states.

In the Xen accounting, a vCPU is still in state RUNSTATE_running while
in Xen for a hypercall or I/O trap, etc. Only if Xen explicitly schedules
does the state become RUNSTATE_blocked. In KVM this means that even when
the vCPU exits the kvm_run loop, the state remains RUNSTATE_running.

The VMM can explicitly set the vCPU to RUNSTATE_blocked by using the
KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_CURRENT attribute, and can also use
KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_ADJUST to retrospectively add a given
amount of time to the blocked state and subtract it from the running
state.

The state_entry_time corresponds to get_kvmclock_ns() at the time the
vCPU entered the current state, and the total times of all four states
should always add up to state_entry_time.
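
For reference, the Xen ABI structure being maintained (a sketch of
vcpu_runstate_info from Xen's public headers):

  struct vcpu_runstate_info {
          int state;                  /* RUNSTATE_running/_runnable/
                                       * _blocked/_offline */
          uint64_t state_entry_time;  /* when 'state' was entered */
          uint64_t time[4];           /* ns spent in each state; the four
                                       * entries sum to state_entry_time */
  };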

Co-developed-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Message-Id: <20210301125309.874953-2-dwmw2@infradead.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: x86/xen: Fix return code when clearing vcpu_info and vcpu_time_info
David Woodhouse [Mon, 1 Mar 2021 12:53:08 +0000 (12:53 +0000)]
KVM: x86/xen: Fix return code when clearing vcpu_info and vcpu_time_info

When clearing the per-vCPU shared regions, set the return value to zero
to indicate success. This was causing spurious errors to be returned to
userspace on soft reset.

Also add a paranoid BUILD_BUG_ON() for compat structure compatibility.

Fixes: 0c165b3c01fe ("KVM: x86/xen: Allow reset of Xen attributes")
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Message-Id: <20210301125309.874953-1-dwmw2@infradead.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
selftests: kvm: Mmap the entire vcpu mmap area
Aaron Lewis [Wed, 10 Feb 2021 16:50:36 +0000 (08:50 -0800)]
selftests: kvm: Mmap the entire vcpu mmap area

The vcpu mmap area may consist of more than just the kvm_run struct.
Allocate enough space for the entire vcpu mmap area. Without this, on
x86, the PIO page, for example, will be missing.  This is problematic
when dealing with an unhandled exception from the guest as the exception
vector will be incorrectly reported as 0x0.
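
Sketch of the corrected allocation (KVM_GET_VCPU_MMAP_SIZE is the
real ioctl; surrounding selftest plumbing omitted):

  /* The mmap size covers kvm_run plus extras such as the PIO page. */
  int vcpu_mmap_sz = ioctl(kvm_fd, KVM_GET_VCPU_MMAP_SIZE, NULL);

  struct kvm_run *run = mmap(NULL, vcpu_mmap_sz, PROT_READ | PROT_WRITE,
                             MAP_SHARED, vcpu_fd, 0);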

Message-Id: <20210210165035.3712489-1-aaronlewis@google.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Co-developed-by: Steve Rutherford <srutherford@google.com>
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: Documentation: Fix index for KVM_CAP_PPC_DAWR1
Kai Huang [Fri, 26 Feb 2021 09:48:32 +0000 (22:48 +1300)]
KVM: Documentation: Fix index for KVM_CAP_PPC_DAWR1

It should be 7.23 instead of 7.22, which has already been taken by
KVM_CAP_X86_BUS_LOCK_EXIT.

Signed-off-by: Kai Huang <kai.huang@intel.com>
Message-Id: <20210226094832.380394-1-kai.huang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: x86: allow compiling out the Xen hypercall interface
Paolo Bonzini [Fri, 26 Feb 2021 09:54:45 +0000 (04:54 -0500)]
KVM: x86: allow compiling out the Xen hypercall interface

The Xen hypercall interface adds to the attack surface of the hypervisor
and will be used quite rarely.  Allow compiling it out.
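
Conceptually (a sketch; the hypothetical stub below is illustrative,
not the exact one), the interface disappears when the new option is
off:

  #ifdef CONFIG_KVM_XEN
  int kvm_xen_hypercall(struct kvm_vcpu *vcpu);
  #else
  static inline int kvm_xen_hypercall(struct kvm_vcpu *vcpu)
  {
          return 0;   /* hypothetical no-op; the real stub may differ */
  }
  #endif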

Suggested-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: xen: flush deferred static key before checking it
Paolo Bonzini [Fri, 26 Feb 2021 09:49:06 +0000 (04:49 -0500)]
KVM: xen: flush deferred static key before checking it

A missing flush would cause the static branch to trigger incorrectly.

Cc: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: x86/mmu: Set SPTE_AD_WRPROT_ONLY_MASK if and only if PML is enabled
Sean Christopherson [Thu, 25 Feb 2021 20:47:26 +0000 (12:47 -0800)]
KVM: x86/mmu: Set SPTE_AD_WRPROT_ONLY_MASK if and only if PML is enabled

Check that PML is actually enabled before setting the mask to force a
SPTE to be write-protected.  The bits used for the !AD_ENABLED case are
in the upper half of the SPTE.  With 64-bit paging and EPT, these bits
are ignored, but with 32-bit PAE paging they are reserved.  Setting them
for L2 SPTEs without checking PML breaks NPT on 32-bit KVM.

Fixes: 1f4e5fc83a42 ("KVM: x86: fix nested guest live migration with PML")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210225204749.1512652-2-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: x86: hyper-v: Fix Hyper-V context null-ptr-deref
Wanpeng Li [Fri, 26 Feb 2021 07:59:59 +0000 (15:59 +0800)]
KVM: x86: hyper-v: Fix Hyper-V context null-ptr-deref

Reported by syzkaller:

    KASAN: null-ptr-deref in range [0x0000000000000140-0x0000000000000147]
    CPU: 1 PID: 8370 Comm: syz-executor859 Not tainted 5.11.0-syzkaller #0
    RIP: 0010:synic_get arch/x86/kvm/hyperv.c:165 [inline]
    RIP: 0010:kvm_hv_set_sint_gsi arch/x86/kvm/hyperv.c:475 [inline]
    RIP: 0010:kvm_hv_irq_routing_update+0x230/0x460 arch/x86/kvm/hyperv.c:498
    Call Trace:
     kvm_set_irq_routing+0x69b/0x940 arch/x86/kvm/../../../virt/kvm/irqchip.c:223
     kvm_vm_ioctl+0x12d0/0x2800 arch/x86/kvm/../../../virt/kvm/kvm_main.c:3959
     vfs_ioctl fs/ioctl.c:48 [inline]
     __do_sys_ioctl fs/ioctl.c:753 [inline]
     __se_sys_ioctl fs/ioctl.c:739 [inline]
     __x64_sys_ioctl+0x193/0x200 fs/ioctl.c:739
     do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
     entry_SYSCALL_64_after_hwframe+0x44/0xae

Hyper-V context is lazily allocated until Hyper-V specific MSRs are accessed
or SynIC is enabled. However, the syzkaller testcase sets the irq routing
table directly w/o enabling SynIC. This results in a null-ptr-deref when
accessing the SynIC Hyper-V context. This patch fixes it.

syzkaller source: https://syzkaller.appspot.com/x/repro.c?x=163342ccd00000
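
The fix amounts to an early NULL check on the lazily-allocated
context; sketched against the synic lookup:

  struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);

  /* The Hyper-V context may not exist yet if the VMM programmed irq
   * routing without ever touching SynIC or the Hyper-V MSRs. */
  if (!hv_vcpu)
          return NULL;

  return &hv_vcpu->synic;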

Reported-by: syzbot+6987f3b2dbd9eda95f12@syzkaller.appspotmail.com
Fixes: 8f014550dfb1 ("KVM: x86: hyper-v: Make Hyper-V emulation enablement conditional")
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Message-Id: <1614326399-5762-1-git-send-email-wanpengli@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: x86: remove misplaced comment on active_mmu_pages
Dongli Zhang [Fri, 26 Feb 2021 06:19:45 +0000 (22:19 -0800)]
KVM: x86: remove misplaced comment on active_mmu_pages

The 'mmu_page_hash' is used as a hash table while 'active_mmu_pages' is
a list. Remove the misplaced comment as it's mostly stating the obvious
anyway.

Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210226061945.1222-1-dongli.zhang@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: Documentation: rectify rst markup in kvm_run->flags
Chenyi Qiang [Fri, 26 Feb 2021 07:55:41 +0000 (15:55 +0800)]
KVM: Documentation: rectify rst markup in kvm_run->flags

Commit c32b1b896d2a ("KVM: X86: Add the Document for
KVM_CAP_X86_BUS_LOCK_EXIT") added a new flag in kvm_run->flags
documentation, and caused a warning in make htmldocs:

  Documentation/virt/kvm/api.rst:5004: WARNING: Unexpected indentation
  Documentation/virt/kvm/api.rst:5004: WARNING: Inline emphasis start-string without end-string

Fix this rst markup issue.

Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
Message-Id: <20210226075541.27179-1-chenyi.qiang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Documentation: kvm: fix messy conversion from .txt to .rst
Paolo Bonzini [Thu, 25 Feb 2021 13:37:01 +0000 (08:37 -0500)]
Documentation: kvm: fix messy conversion from .txt to .rst

Building the documentation gives a warning that the KVM_PPC_RESIZE_HPT_PREPARE
label is defined twice.  The root cause is that the KVM_PPC_RESIZE_HPT_PREPARE
API is present twice, the second being a mix of the prepare and commit APIs.
Fix it.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: SVM: Fix nested VM-Exit on #GP interception handling
Sean Christopherson [Wed, 24 Feb 2021 00:56:26 +0000 (16:56 -0800)]
KVM: SVM: Fix nested VM-Exit on #GP interception handling

Fix the interpretation of nested_svm_vmexit()'s return value when
synthesizing a nested VM-Exit after intercepting an SVM instruction while
L2 was running.  The helper returns '0' on success, whereas a return
value of '0' in the exit handler path means "exit to userspace".  The
incorrect return value causes KVM to exit to userspace without filling
the run state, e.g. QEMU logs "KVM: unknown exit, hardware reason 0".

Fixes: 14c2bf81fcd2 ("KVM: SVM: Fix #GP handling for doubly-nested virtualization")
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210224005627.657028-1-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: vmx/pmu: Fix dummy check if lbr_desc->event is created
Like Xu [Tue, 23 Feb 2021 01:39:57 +0000 (09:39 +0800)]
KVM: vmx/pmu: Fix dummy check if lbr_desc->event is created

If lbr_desc->event is successfully created,
intel_pmu_create_guest_lbr_event() will return 0; otherwise it will
return -ENOENT and then jump to the LBR MSRs dummy handling.

Fixes: 1b5ac3226a1a ("KVM: vmx/pmu: Pass-through LBR msrs when the guest LBR event is ACTIVE")
Signed-off-by: Like Xu <like.xu@linux.intel.com>
Message-Id: <20210223013958.1280444-1-like.xu@linux.intel.com>
[Add "< 0" and PTR_ERR to make the code clearer. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: x86/mmu: Consider the hva in mmu_notifier retry
David Stevens [Mon, 22 Feb 2021 02:45:22 +0000 (11:45 +0900)]
KVM: x86/mmu: Consider the hva in mmu_notifier retry

Track the range being invalidated by mmu_notifier and skip page fault
retries if the fault address is not affected by the in-progress
invalidation. Handle concurrent invalidations by finding the minimal
range which includes all ranges being invalidated. Although the combined
range may include unrelated addresses and cannot be shrunk as individual
invalidation operations complete, it is unlikely the marginal gains of
proper range tracking are worth the additional complexity.

The primary benefit of this change is the reduction in the likelihood of
extreme latency when handling a page fault due to another thread having
been preempted while modifying host virtual addresses.
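
The resulting retry check, roughly (names as in the generic KVM
mmu_notifier plumbing; treat as a sketch):

  static inline int mmu_notifier_retry_hva(struct kvm *kvm,
                                           unsigned long mmu_seq,
                                           unsigned long hva)
  {
          /* In-flight invalidation covering this hva: must retry. */
          if (unlikely(kvm->mmu_notifier_count) &&
              hva >= kvm->mmu_notifier_range_start &&
              hva < kvm->mmu_notifier_range_end)
                  return 1;
          /* An invalidation completed since the seq was snapshotted. */
          if (kvm->mmu_notifier_seq != mmu_seq)
                  return 1;
          return 0;
  }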

Signed-off-by: David Stevens <stevensd@chromium.org>
Message-Id: <20210222024522.1751719-3-stevensd@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: x86/mmu: Skip mmu_notifier check when handling MMIO page fault
Sean Christopherson [Mon, 22 Feb 2021 02:45:21 +0000 (11:45 +0900)]
KVM: x86/mmu: Skip mmu_notifier check when handling MMIO page fault

Don't retry a page fault due to an mmu_notifier invalidation when
handling a page fault for a GPA that did not resolve to a memslot, i.e.
an MMIO page fault.  Invalidations from the mmu_notifier signal a change
in a host virtual address (HVA) mapping; without a memslot, there is no
HVA and thus no possibility that the invalidation is relevant to the
page fault being handled.

Note, the MMIO vs. memslot generation checks handle the case where a
pending memslot will create a memslot overlapping the faulting GPA.  The
mmu_notifier checks are orthogonal to memslot updates.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210222024522.1751719-2-stevensd@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: Documentation: rectify rst markup in KVM_GET_SUPPORTED_HV_CPUID
Lukas Bulwahn [Mon, 4 Jan 2021 09:59:38 +0000 (10:59 +0100)]
KVM: Documentation: rectify rst markup in KVM_GET_SUPPORTED_HV_CPUID

Commit c21d54f0307f ("KVM: x86: hyper-v: allow KVM_GET_SUPPORTED_HV_CPUID
as a system ioctl") added an enumeration in the KVM_GET_SUPPORTED_HV_CPUID
documentation improperly for rst, and caused new warnings in make htmldocs:

  Documentation/virt/kvm/api.rst:4536: WARNING: Unexpected indentation.
  Documentation/virt/kvm/api.rst:4538: WARNING: Block quote ends without a blank line; unexpected unindent.

Fix that issue and another historic rst markup issue from the initial
rst conversion in the KVM_GET_SUPPORTED_HV_CPUID documentation.

Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com>
Message-Id: <20210104095938.24838-1-lukas.bulwahn@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: nSVM: prepare guest save area while is_guest_mode is true
Paolo Bonzini [Thu, 18 Feb 2021 14:55:39 +0000 (09:55 -0500)]
KVM: nSVM: prepare guest save area while is_guest_mode is true

Right now, enter_svm_guest_mode is calling nested_prepare_vmcb_save and
nested_prepare_vmcb_control.  This results in is_guest_mode being false
until the end of nested_prepare_vmcb_control.

This is a problem because nested_prepare_vmcb_save can in turn cause
changes to the intercepts and these have to be applied to the "host VMCB"
(stored in svm->nested.hsave) and then merged with the VMCB12 intercepts
into svm->vmcb.

In particular, without this change we forget to set the CR0 read and CR0
write intercepts when running a real mode L2 guest with NPT disabled.
The guest is therefore able to see the CR0.PG bit that KVM sets to
enable "paged real mode".  This patch fixes the svm.flat mode_switch
test case with npt=0.  There are no other problematic calls in
nested_prepare_vmcb_save.

Setting is_guest_mode only at the end dates back to commit 06fc7772690d
("KVM: SVM: Activate nested state only when guest state is complete",
2010-04-25) [1].  However, back then KVM didn't grab a different VMCB
when updating the intercepts, it had already copied/merged L1's stuff
to L0's VMCB, and then updated L0's VMCB regardless of is_nested().
Later recalc_intercepts was introduced in commit 384c63684397
("KVM: SVM: Add function to recalculate intercept masks", 2011-01-12).
This introduced the bug, because recalc_intercepts now throws away
the intercept manipulations that svm_set_cr0 had done in the meanwhile
to svm->vmcb.

[1] https://lore.kernel.org/kvm/1266493115-28386-1-git-send-email-joerg.roedel@amd.com/
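
Schematically (a condensed sketch, not the full function), the fix
is purely about ordering:

  enter_guest_mode(&svm->vcpu);           /* is_guest_mode() true first */
  nested_prepare_vmcb_save(svm, vmcb12);  /* intercept tweaks (e.g. from
                                           * svm_set_cr0) now land where
                                           * recalc_intercepts() expects */
  nested_prepare_vmcb_control(svm);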

Reviewed-by: Sean Christopherson <seanjc@google.com>
Tested-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: x86/mmu: Remove a variety of unnecessary exports
Sean Christopherson [Sat, 13 Feb 2021 00:50:15 +0000 (16:50 -0800)]
KVM: x86/mmu: Remove a variety of unnecessary exports

Remove several exports from the MMU that are no longer necessary.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210213005015.1651772-15-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: x86: Fold "write-protect large" use case into generic write-protect
Sean Christopherson [Sat, 13 Feb 2021 00:50:14 +0000 (16:50 -0800)]
KVM: x86: Fold "write-protect large" use case into generic write-protect

Drop kvm_mmu_slot_largepage_remove_write_access() and refactor its sole
caller to use kvm_mmu_slot_remove_write_access().  Remove the now-unused
slot_handle_large_level() and slot_handle_all_level() helpers.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210213005015.1651772-14-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: x86/mmu: Don't set dirty bits when disabling dirty logging w/ PML
Sean Christopherson [Sat, 13 Feb 2021 00:50:13 +0000 (16:50 -0800)]
KVM: x86/mmu: Don't set dirty bits when disabling dirty logging w/ PML

Stop setting dirty bits for MMU pages when dirty logging is disabled for
a memslot, as PML is now completely disabled when there are no memslots
with dirty logging enabled.

This means that spurious PML entries will be created for memslots with
dirty logging disabled if at least one other memslot has dirty logging
enabled.  However, spurious PML entries are already possible since
dirty bits are set only when dirty logging is turned off, i.e. memslots
that are never dirty logged will have dirty bits cleared.

In the end, it's faster overall to eat a few spurious PML entries in the
window where dirty logging is being disabled across all memslots.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210213005015.1651772-13-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: VMX: Dynamically enable/disable PML based on memslot dirty logging
Makarand Sonare [Sat, 13 Feb 2021 00:50:12 +0000 (16:50 -0800)]
KVM: VMX: Dynamically enable/disable PML based on memslot dirty logging

Currently, if enable_pml=1 PML remains enabled for the entire lifetime
of the VM irrespective of whether dirty logging is enabled or disabled.
When dirty logging is disabled, all the pages of the VM are manually
marked dirty, so that PML is effectively non-operational.  Setting
the dirty bits is an expensive operation which can cause severe MMU
lock contention in a performance sensitive path when dirty logging is
disabled after a failed or canceled live migration.

Manually setting dirty bits also fails to prevent PML activity if some
code path clears dirty bits, which can incur unnecessary VM-Exits.

In order to avoid this extra overhead, dynamically enable/disable PML
when dirty logging gets turned on/off for the first/last memslot.

Signed-off-by: Makarand Sonare <makarandsonare@google.com>
Co-developed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210213005015.1651772-12-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: x86: Further clarify the logic and comments for toggling log dirty
Sean Christopherson [Sat, 13 Feb 2021 00:50:11 +0000 (16:50 -0800)]
KVM: x86: Further clarify the logic and comments for toggling log dirty

Add a sanity check in kvm_mmu_slot_apply_flags to assert that the
LOG_DIRTY_PAGES flag is indeed being toggled, and explicitly rely on
that holding true when zapping collapsible SPTEs.  Manipulating the
CPU dirty log (PML) and write-protection also relies on this assertion,
but that's not obvious in the current code.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210213005015.1651772-11-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: x86: Move MMU's PML logic to common code
Sean Christopherson [Sat, 13 Feb 2021 00:50:10 +0000 (16:50 -0800)]
KVM: x86: Move MMU's PML logic to common code

Drop the facade of KVM's PML logic being vendor specific and move the
bits that aren't truly VMX specific into common x86 code.  The MMU logic
for dealing with PML is tightly coupled to the feature and to VMX's
implementation, bouncing through kvm_x86_ops obfuscates the code without
providing any meaningful separation of concerns or encapsulation.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210213005015.1651772-10-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: x86/mmu: Make dirty log size hook (PML) a value, not a function
Sean Christopherson [Sat, 13 Feb 2021 00:50:09 +0000 (16:50 -0800)]
KVM: x86/mmu: Make dirty log size hook (PML) a value, not a function

Store the vendor-specific dirty log size in a variable, there's no need
to wrap it in a function since the value is constant after
hardware_setup() runs.
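
The hook becomes a plain field; a sketch (PML_ENTITY_NUM is VMX's
512-entry log size, and the struct layout here is condensed):

  struct kvm_x86_ops {
          /* ... */
          int cpu_dirty_log_size;   /* 0 => no HW-assisted dirty log */
  };

  /* vmx.c, once at hardware_setup() time: */
  if (enable_pml)
          vmx_x86_ops.cpu_dirty_log_size = PML_ENTITY_NUM;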

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210213005015.1651772-9-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: x86/mmu: Expand on the comment in kvm_vcpu_ad_need_write_protect()
Sean Christopherson [Sat, 13 Feb 2021 00:50:08 +0000 (16:50 -0800)]
KVM: x86/mmu: Expand on the comment in kvm_vcpu_ad_need_write_protect()

Expand the comment about the need to use write-protection for nested EPT
when PML is enabled to clarify that the tagging is a nop when PML is
_not_ enabled.  Without the clarification, omitting the PML check looks
wrong at first^Wfifth glance.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210213005015.1651772-8-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: nVMX: Disable PML in hardware when running L2
Sean Christopherson [Sat, 13 Feb 2021 00:50:07 +0000 (16:50 -0800)]
KVM: nVMX: Disable PML in hardware when running L2

Unconditionally disable PML in vmcs02, KVM emulates PML purely in the
MMU, e.g. vmx_flush_pml_buffer() doesn't even try to copy the L2 GPAs
from vmcs02's buffer to vmcs12.  At best, enabling PML is a nop.  At
worst, it will cause vmx_flush_pml_buffer() to record bogus GFNs in the
dirty logs.

Initialize vmcs02.GUEST_PML_INDEX such that PML writes would trigger
VM-Exit if PML was somehow enabled, skip flushing the buffer for guest
mode since the index is bogus, and freak out if a PML full exit occurs
when L2 is active.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210213005015.1651772-7-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: x86/mmu: Consult max mapping level when zapping collapsible SPTEs
Sean Christopherson [Sat, 13 Feb 2021 00:50:06 +0000 (16:50 -0800)]
KVM: x86/mmu: Consult max mapping level when zapping collapsible SPTEs

When zapping SPTEs in order to rebuild them as huge pages, use the new
helper that computes the max mapping level to detect whether or not a
SPTE should be zapped.  Doing so avoids zapping SPTEs that can't
possibly be rebuilt as huge pages, e.g. due to hardware constraints,
memslot alignment, etc...

This also avoids zapping SPTEs that are still large, e.g. if migration
was canceled before write-protected huge pages were shattered to enable
dirty logging.  Note, such pages are still write-protected at this time,
i.e. a page fault VM-Exit will still occur.  This will hopefully be
addressed in a future patch.

Sadly, TDP MMU loses its const on the memslot, but that's a pervasive
problem that's been around for quite some time.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210213005015.1651772-6-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: x86/mmu: Pass the memslot to the rmap callbacks
Sean Christopherson [Sat, 13 Feb 2021 00:50:05 +0000 (16:50 -0800)]
KVM: x86/mmu: Pass the memslot to the rmap callbacks

Pass the memslot to the rmap callbacks, it will be used when zapping
collapsible SPTEs to verify the memslot is compatible with hugepages
before zapping its SPTEs.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210213005015.1651772-5-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: x86/mmu: Split out max mapping level calculation to helper
Sean Christopherson [Sat, 13 Feb 2021 00:50:04 +0000 (16:50 -0800)]
KVM: x86/mmu: Split out max mapping level calculation to helper

Factor out the logic for determining the maximum mapping level given a
memslot and a gpa.  The helper will be used when zapping collapsible
SPTEs when disabling dirty logging, e.g. to avoid zapping SPTEs that
can't possibly be rebuilt as hugepages.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210213005015.1651772-4-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: x86/mmu: Expand collapsible SPTE zap for TDP MMU to ZONE_DEVICE and HugeTLB pages
Sean Christopherson [Sat, 13 Feb 2021 00:50:02 +0000 (16:50 -0800)]
KVM: x86/mmu: Expand collapsible SPTE zap for TDP MMU to ZONE_DEVICE and HugeTLB pages

Zap SPTEs that are backed by ZONE_DEVICE pages when zapping SPTEs to
rebuild them as huge pages in the TDP MMU.  ZONE_DEVICE huge pages are
managed differently than "regular" pages and are not compound pages.
Likewise, PageTransCompoundMap() will not detect HugeTLB, so switch
to PageCompound().

This matches the similar check in kvm_mmu_zap_collapsible_spte.

Cc: Ben Gardon <bgardon@google.com>
Fixes: 14881998566d ("kvm: x86/mmu: Support disabling dirty logging for the tdp MMU")
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210213005015.1651772-2-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: nVMX: no need to undo inject_page_fault change on nested vmexit
Paolo Bonzini [Thu, 18 Feb 2021 12:19:55 +0000 (07:19 -0500)]
KVM: nVMX: no need to undo inject_page_fault change on nested vmexit

This is not needed because the tweak was done on the guest_mmu, while
nested_ept_uninit_mmu_context has just changed vcpu->arch.walk_mmu
back to the root_mmu.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: nSVM: fix running nested guests when npt=0
Paolo Bonzini [Thu, 18 Feb 2021 12:16:59 +0000 (07:16 -0500)]
KVM: nSVM: fix running nested guests when npt=0

In case of npt=0 on host, nSVM needs the same .inject_page_fault tweak
as VMX has, to make sure that shadow mmu faults are injected as vmexits.

It is not clear why this is needed at all, but for now keep the same
code as VMX and we'll fix it for both.

Based on a patch by Maxim Levitsky <mlevitsk@redhat.com>.

Fixes: 7c86663b68ba ("KVM: nSVM: inject exceptions via svm_check_nested_events")
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: nSVM: move nested vmrun tracepoint to enter_svm_guest_mode
Maxim Levitsky [Wed, 17 Feb 2021 14:57:13 +0000 (16:57 +0200)]
KVM: nSVM: move nested vmrun tracepoint to enter_svm_guest_mode

This way the trace will capture all the nested mode entries
(including entries after migration, and from SMM).

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-Id: <20210217145718.1217358-3-mlevitsk@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: VMX: read idt_vectoring_info a bit earlier
Maxim Levitsky [Wed, 17 Feb 2021 14:57:12 +0000 (16:57 +0200)]
KVM: VMX: read idt_vectoring_info a bit earlier

trace_kvm_exit prints this value (using vmx_get_exit_info)
so it makes sense to read it before the trace point.

Fixes: dcf068da7eb2 ("KVM: VMX: Introduce generic fastpath handler")

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-Id: <20210217145718.1217358-2-mlevitsk@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: VMX: Allow INVPCID in guest without PCID
Sean Christopherson [Fri, 12 Feb 2021 00:34:11 +0000 (16:34 -0800)]
KVM: VMX: Allow INVPCID in guest without PCID

Remove the restriction that prevents VMX from exposing INVPCID to the
guest without PCID also being exposed to the guest.  The justification of
the restriction is that INVPCID will #UD if it's disabled in the VMCS.
While that is a true statement, it's also true that RDTSCP will #UD if
it's disabled in the VMCS.  Neither of those things has any dependency
whatsoever on the guest being able to set CR4.PCIDE=1, which is what is
effectively allowed by exposing PCID to the guest.

Removing the bogus restriction aligns VMX with SVM, and also allows for
an interesting configuration.  INVPCID is the fastest way to do a global
TLB flush, e.g. see native_flush_tlb_global().  Allowing INVPCID without
PCID would let a guest use the expedited flush while also limiting the
number of ASIDs consumed by the guest.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210212003411.1102677-4-seanjc@google.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: x86: Advertise INVPCID by default
Sean Christopherson [Fri, 12 Feb 2021 00:34:10 +0000 (16:34 -0800)]
KVM: x86: Advertise INVPCID by default

Advertise INVPCID by default (if supported by the host kernel) instead
of having both SVM and VMX opt in.  INVPCID was opt in when it was a
VMX only feature so that KVM wouldn't prematurely advertise support
if/when it showed up in the kernel on AMD hardware.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210212003411.1102677-3-seanjc@google.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: SVM: Intercept INVPCID when it's disabled to inject #UD
Sean Christopherson [Fri, 12 Feb 2021 00:34:09 +0000 (16:34 -0800)]
KVM: SVM: Intercept INVPCID when it's disabled to inject #UD

Intercept INVPCID if it's disabled in the guest, even when using NPT,
as KVM needs to inject #UD in this case.
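
The resulting toggle, roughly (a sketch of the CPUID-update-time logic):

  /* Intercept INVPCID when shadow paging is active (to sync/free
   * shadow roots) or when the guest lacks the feature (to raise #UD). */
  if (!npt_enabled || !guest_cpuid_has(&svm->vcpu, X86_FEATURE_INVPCID))
          svm_set_intercept(svm, INTERCEPT_INVPCID);
  else
          svm_clr_intercept(svm, INTERCEPT_INVPCID);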

Fixes: 4407a797e941 ("KVM: SVM: Enable INVPCID feature on AMD")
Cc: Babu Moger <babu.moger@amd.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210212003411.1102677-2-seanjc@google.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
selftests: kvm: avoid uninitialized variable warning
Paolo Bonzini [Mon, 15 Feb 2021 16:42:01 +0000 (11:42 -0500)]
selftests: kvm: avoid uninitialized variable warning

The variable in practice will never be uninitialized, because the
loop will always go through at least one iteration.

In case it would not, make vcpu_get_cpuid report an assertion
failure.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
selftests: kvm: add hardware_disable test
Ignacio Alvarado [Sat, 13 Feb 2021 00:14:52 +0000 (00:14 +0000)]
selftests: kvm: add hardware_disable test

This test launches 512 VMs in serial and kills them after a random
amount of time.

The test was originally written to exercise KVM user notifiers in
the context of commit 1650b4ebc99d:
- KVM: Disable irq while unregistering user notifier
- https://lore.kernel.org/kvm/CACXrx53vkO=HKfwWwk+fVpvxcNjPrYmtDZ10qWxFvVX_PTGp3g@mail.gmail.com/

Recently, this test piqued my interest because it proved useful
for AMD SNP in exercising the "in-use" pages, described in APM section
15.36.12, "Running SNP-Active Virtual Machines".

Signed-off-by: Ignacio Alvarado <ikalvarado@google.com>
Signed-off-by: Marc Orr <marcorr@google.com>
Message-Id: <20210213001452.1719001-1-marcorr@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Merge tag 'kvmarm-5.12' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm
Paolo Bonzini [Fri, 12 Feb 2021 16:23:44 +0000 (11:23 -0500)]
Merge tag 'kvmarm-5.12' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD

KVM/arm64 updates for Linux 5.12

- Make the nVHE EL2 object relocatable, resulting in much more
  maintainable code
- Handle concurrent translation faults hitting the same page
  in a more elegant way
- Support for the standard TRNG hypervisor call
- A bunch of small PMU/Debug fixes
- Allow the disabling of symbol export from assembly code
- Simplification of the early init hypercall handling

Merge branch 'kvm-arm64/pmu-debug-fixes-5.11' into kvmarm-master/next
Marc Zyngier [Fri, 12 Feb 2021 14:08:41 +0000 (14:08 +0000)]
Merge branch 'kvm-arm64/pmu-debug-fixes-5.11' into kvmarm-master/next

Signed-off-by: Marc Zyngier <maz@kernel.org>
Merge branch 'kvm-arm64/rng-5.12' into kvmarm-master/next
Marc Zyngier [Fri, 12 Feb 2021 14:08:25 +0000 (14:08 +0000)]
Merge branch 'kvm-arm64/rng-5.12' into kvmarm-master/next

Signed-off-by: Marc Zyngier <maz@kernel.org>
Merge branch 'kvm-arm64/hyp-reloc' into kvmarm-master/next
Marc Zyngier [Fri, 12 Feb 2021 14:08:18 +0000 (14:08 +0000)]
Merge branch 'kvm-arm64/hyp-reloc' into kvmarm-master/next

Signed-off-by: Marc Zyngier <maz@kernel.org>
Merge branch 'kvm-arm64/concurrent-translation-fault' into kvmarm-master/next
Marc Zyngier [Fri, 12 Feb 2021 14:08:13 +0000 (14:08 +0000)]
Merge branch 'kvm-arm64/concurrent-translation-fault' into kvmarm-master/next

Signed-off-by: Marc Zyngier <maz@kernel.org>
Merge branch 'kvm-arm64/misc-5.12' into kvmarm-master/next
Marc Zyngier [Fri, 12 Feb 2021 14:08:07 +0000 (14:08 +0000)]
Merge branch 'kvm-arm64/misc-5.12' into kvmarm-master/next

Signed-off-by: Marc Zyngier <maz@kernel.org>
Merge tag 'kvmarm-fixes-5.11-2' into kvmarm-master/next
Marc Zyngier [Fri, 12 Feb 2021 14:07:39 +0000 (14:07 +0000)]
Merge tag 'kvmarm-fixes-5.11-2' into kvmarm-master/next

KVM/arm64 fixes for 5.11, take #2

- Don't allow tagged pointers to point to memslots
- Filter out ARMv8.1+ PMU events on v8.0 hardware
- Hide PMU registers from userspace when no PMU is configured
- More PMU cleanups
- Don't try to handle broken PSCI firmware
- More sys_reg() to reg_to_encoding() conversions

Signed-off-by: Marc Zyngier <maz@kernel.org>
KVM: x86/xen: Explicitly pad struct compat_vcpu_info to 64 bytes
Sean Christopherson [Wed, 10 Feb 2021 18:26:09 +0000 (10:26 -0800)]
KVM: x86/xen: Explicitly pad struct compat_vcpu_info to 64 bytes

Add a 2 byte pad to struct compat_vcpu_info so that the sum size of its
fields is actually 64 bytes.  The effective size without the padding is
also 64 bytes due to the compiler aligning evtchn_pending_sel to a 4-byte
boundary, but depending on compiler alignment is subtle and unnecessary.

Opportunistically replace spaces with tabs in the other fields.
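
The padded layout, sketched (field types follow Xen's 32-bit ABI; the
uint16_t spelling of the pad is an assumption):

  struct compat_vcpu_info {
          uint8_t evtchn_upcall_pending;
          uint8_t evtchn_upcall_mask;
          uint16_t pad;                        /* new: explicit 2-byte pad */
          uint32_t evtchn_pending_sel;         /* was 4-byte aligned anyway */
          struct compat_arch_vcpu_info arch;   /* 24 bytes */
          struct pvclock_vcpu_time_info time;  /* 32 bytes */
  };                                           /* 8 + 24 + 32 = 64 bytes */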

Cc: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210210182609.435200-6-seanjc@google.com>
Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: selftests: Don't bother mapping GVA for Xen shinfo test
Sean Christopherson [Wed, 10 Feb 2021 18:26:08 +0000 (10:26 -0800)]
KVM: selftests: Don't bother mapping GVA for Xen shinfo test

Don't bother mapping the Xen shinfo pages into the guest, they don't need
to be accessed using the GVAs and passing a define with "GPA" in the name
to addr_gva2hpa() is confusing.

Cc: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210210182609.435200-5-seanjc@google.com>
Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: selftests: Fix hex vs. decimal snafu in Xen test
Sean Christopherson [Wed, 10 Feb 2021 18:26:07 +0000 (10:26 -0800)]
KVM: selftests: Fix hex vs. decimal snafu in Xen test

The Xen shinfo selftest uses '40' when setting the GPA of the vCPU info
struct, but checks for the result at '0x40'.  Arbitrarily use the hex
version to resolve the bug.

Fixes: 8d4e7e80838f ("KVM: x86: declare Xen HVM shared info capability and add test case")
Cc: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210210182609.435200-4-seanjc@google.com>
Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: selftests: Fix size of memslots created by Xen tests
Sean Christopherson [Wed, 10 Feb 2021 18:26:06 +0000 (10:26 -0800)]
KVM: selftests: Fix size of memslots created by Xen tests

For better or worse, the memslot APIs take the number of pages, not the
size in bytes.  The Xen tests need 2 pages, not 8192 pages.
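
Sketch of the corrected call (vm_userspace_mem_region_add() is the
real selftest helper; the GPA/slot constants are the test's own
defines):

  /* The fifth argument is npages: the test needs 2 pages, so pass 2,
   * not 2 * getpagesize(). */
  vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
                              SHINFO_REGION_GPA, SHINFO_REGION_SLOT,
                              2, 0);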

Fixes: 8d4e7e80838f ("KVM: x86: declare Xen HVM shared info capability and add test case")
Cc: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210210182609.435200-3-seanjc@google.com>
Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: selftests: Ignore recently added Xen tests' build output
Sean Christopherson [Wed, 10 Feb 2021 18:26:05 +0000 (10:26 -0800)]
KVM: selftests: Ignore recently added Xen tests' build output

Add the new Xen test binaries to KVM selftest's .gitignore.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210210182609.435200-2-seanjc@google.com>
Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: selftests: Add missing header file needed by xAPIC IPI tests
Peter Shier [Wed, 10 Feb 2021 01:17:47 +0000 (17:17 -0800)]
KVM: selftests: Add missing header file needed by xAPIC IPI tests

Fixes: 678e90a349a4 ("KVM: selftests: Test IPI to halted vCPU in xAPIC while backing page moves")
Cc: Andrew Jones <drjones@redhat.com>
Cc: Jim Mattson <jmattson@google.com>
Signed-off-by: Peter Shier <pshier@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210210011747.240913-1-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years ago KVM: selftests: Add operand to vmsave/vmload/vmrun in svm.c
Ricardo Koller [Wed, 10 Feb 2021 03:17:19 +0000 (03:17 +0000)]
KVM: selftests: Add operand to vmsave/vmload/vmrun in svm.c

Building the KVM selftests with LLVM's integrated assembler fails with:

  $ CFLAGS=-fintegrated-as make -C tools/testing/selftests/kvm CC=clang
  lib/x86_64/svm.c:77:16: error: too few operands for instruction
          asm volatile ("vmsave\n\t" : : "a" (vmcb_gpa) : "memory");
                        ^
  <inline asm>:1:2: note: instantiated into assembly here
          vmsave
          ^
  lib/x86_64/svm.c:134:3: error: too few operands for instruction
                  "vmload\n\t"
                  ^
  <inline asm>:1:2: note: instantiated into assembly here
          vmload
          ^

This is because LLVM IAS does not currently support calling vmsave,
vmload, or vmrun without an explicit %rax operand.

Add an explicit operand to vmsave, vmload, and vmrun in svm.c. Fixing
this was suggested by Sean Christopherson.
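
A sketch of the fixed vmsave line (the other two instructions follow
the same pattern; this assumes the operand is spelled out as %rax,
matching the existing "a" constraint):

  asm volatile ("vmsave %%rax\n\t" : : "a" (vmcb_gpa) : "memory");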

Tested: builds without this error with clang 11. The following patch
(not yet queued) needs to be applied to resolve the other remaining
error: "selftests: kvm: remove reassignment of non-absolute variables".

Suggested-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/kvm/X+Df2oQczVBmwEzi@google.com/
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Ricardo Koller <ricarkol@google.com>
Message-Id: <20210210031719.769837-1-ricarkol@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years ago KVM: SVM: Make symbol 'svm_gp_erratum_intercept' static
Wei Yongjun [Wed, 10 Feb 2021 07:59:58 +0000 (07:59 +0000)]
KVM: SVM: Make symbol 'svm_gp_erratum_intercept' static

The sparse tool complains as follows:

arch/x86/kvm/svm/svm.c:204:6: warning:
 symbol 'svm_gp_erratum_intercept' was not declared. Should it be static?

This symbol is not used outside of svm.c, so this
commit marks it static.
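
The change is a one-liner along these lines (the bool type and 'true'
initializer are assumed from the original patch):

  static bool svm_gp_erratum_intercept = true;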

Fixes: 82a11e9c6fa2b ("KVM: SVM: Add emulation support for #GP triggered by SVM instructions")
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Message-Id: <20210210075958.1096317-1-weiyongjun1@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years ago Merge tag 'kvm-ppc-next-5.12-1' of git://git.kernel.org/pub/scm/linux/kernel/git...
Paolo Bonzini [Thu, 11 Feb 2021 13:00:04 +0000 (08:00 -0500)]
Merge tag 'kvm-ppc-next-5.12-1' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc into HEAD

PPC KVM update for 5.12

- Support for second data watchpoint on POWER10, from Ravi Bangoria
- Remove some complex workarounds for buggy early versions of POWER9
- Guest entry/exit fixes from Nick Piggin and Fabiano Rosas

3 years ago locking/arch: Move qrwlock.h include after qspinlock.h
Waiman Long [Wed, 10 Feb 2021 18:16:31 +0000 (13:16 -0500)]
locking/arch: Move qrwlock.h include after qspinlock.h

include/asm-generic/qrwlock.h was trying to get arch_spin_is_locked via
asm-generic/qspinlock.h.  However, this does not work because architectures
might be using queued rwlocks but not queued spinlocks (csky), or because they
might be defining their own queued_* macros before including asm/qspinlock.h.

To fix this, ensure that asm/spinlock.h always includes qrwlock.h after
defining arch_spin_is_locked (either directly for csky, or via
asm/qspinlock.h for other architectures).  The only inclusion elsewhere
is in kernel/locking/qrwlock.c.  That one is really unnecessary because
the file is only compiled in SMP configurations (config QUEUED_RWLOCKS
depends on SMP) and in that case linux/spinlock.h already includes
asm/qrwlock.h if needed, via asm/spinlock.h.
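
A sketch of the resulting include order in an architecture's
asm/spinlock.h (illustrative, not any specific architecture's header):

  /* arch/<arch>/include/asm/spinlock.h */
  #include <asm/qspinlock.h>      /* provides arch_spin_is_locked() */
  #include <asm/qrwlock.h>        /* may now rely on arch_spin_is_locked() */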

Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Waiman Long <longman@redhat.com>
Fixes: 26128cb6c7e6 ("locking/rwlocks: Add contention detection for rwlocks")
Tested-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Ben Gardon <bgardon@google.com>
[Add arch/sparc and kernel/locking parts per discussion with Waiman. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years ago KVM: PPC: Book3S HV: Fix host radix SLB optimisation with hash guests
Nicholas Piggin [Thu, 11 Feb 2021 05:38:30 +0000 (15:38 +1000)]
KVM: PPC: Book3S HV: Fix host radix SLB optimisation with hash guests

Commit 68ad28a4cdd4 ("KVM: PPC: Book3S HV: Fix radix guest SLB side
channel") incorrectly removed the radix host instruction patch to skip
re-loading the host SLB entries when exiting from a hash
guest. Restore it.

Fixes: 68ad28a4cdd4 ("KVM: PPC: Book3S HV: Fix radix guest SLB side channel")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
3 years ago KVM: PPC: Book3S HV: Ensure radix guest has no SLB entries
Paul Mackerras [Thu, 11 Feb 2021 06:09:45 +0000 (17:09 +1100)]
KVM: PPC: Book3S HV: Ensure radix guest has no SLB entries

Commit 68ad28a4cdd4 ("KVM: PPC: Book3S HV: Fix radix guest SLB side
channel") changed the older guest entry path, with the side effect
that vcpu->arch.slb_max no longer gets cleared for a radix guest.
This means that an HPT guest which loads some SLB entries, switches to
radix mode, runs the guest using the old guest entry path (e.g.,
because the indep_threads_mode module parameter has been set to
false), and then switches back to HPT mode would now see the old SLB
entries being present, whereas previously it would have seen no SLB
entries.

To avoid changing guest-visible behaviour, this adds a store
instruction to clear vcpu->arch.slb_max for a radix guest using the
old guest entry path.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
3 years ago KVM: PPC: Don't always report hash MMU capability for P9 < DD2.2
Fabiano Rosas [Fri, 5 Feb 2021 16:41:54 +0000 (13:41 -0300)]
KVM: PPC: Don't always report hash MMU capability for P9 < DD2.2

These machines don't support running both MMU types at the same time,
so remove the KVM_CAP_PPC_MMU_HASH_V3 capability when the host is
using Radix MMU.

[paulus@ozlabs.org - added defensive check on
 kvmppc_hv_ops->hash_v3_possible]

Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
3 years ago KVM: PPC: Book3S HV: Save and restore FSCR in the P9 path
Fabiano Rosas [Thu, 4 Feb 2021 20:05:17 +0000 (17:05 -0300)]
KVM: PPC: Book3S HV: Save and restore FSCR in the P9 path

The Facility Status and Control Register is a privileged SPR that
defines the availability of some features in problem state. Since it
can be written by the guest, we must restore it to the previous host
value after guest exit.

This restoration is currently done by taking the value from
current->thread.fscr, which in the P9 path is not enough anymore
because the guest could context switch the QEMU thread, causing the
guest-current value to be saved into the thread struct.

The above situation manifested when running a QEMU linked against a
libc with System Call Vectored support, which causes scv
instructions to be run by QEMU early during the guest boot (during
SLOF), at which point the FSCR is 0 due to guest entry. After a few
scv calls (1 to a couple hundred), the context switching happens and
the QEMU thread runs with the guest value, resulting in a Facility
Unavailable interrupt.

This patch saves and restores the host value of FSCR in the inner
guest entry loop in a way independent of current->thread.fscr. The old
way of doing it is still kept in place because it works for the old
entry path.
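
Conceptually, the inner entry loop now brackets the guest run like this
(sketch; mfspr/mtspr and SPRN_FSCR are the usual kernel accessors, and
all surrounding state switching is elided):

  u64 host_fscr = mfspr(SPRN_FSCR);

  /* ... switch to guest context and run the vCPU ... */

  mtspr(SPRN_FSCR, host_fscr);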

Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
3 years ago KVM: PPC: remove unneeded semicolon
Yang Li [Tue, 2 Feb 2021 03:30:53 +0000 (11:30 +0800)]
KVM: PPC: remove unneeded semicolon

Eliminate the following coccicheck warning:
./arch/powerpc/kvm/booke.c:701:2-3: Unneeded semicolon

Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Yang Li <yang.lee@linux.alibaba.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
3 years ago KVM: PPC: Book3S HV: Use POWER9 SLBIA IH=6 variant to clear SLB
Nicholas Piggin [Mon, 18 Jan 2021 06:28:09 +0000 (16:28 +1000)]
KVM: PPC: Book3S HV: Use POWER9 SLBIA IH=6 variant to clear SLB

IH=6 may preserve hypervisor real-mode ERAT entries and is the
recommended SLBIA hint for switching partitions.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
3 years ago KVM: PPC: Book3S HV: No need to clear radix host SLB before loading HPT guest
Nicholas Piggin [Mon, 18 Jan 2021 06:28:08 +0000 (16:28 +1000)]
KVM: PPC: Book3S HV: No need to clear radix host SLB before loading HPT guest

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
3 years ago KVM: PPC: Book3S HV: Fix radix guest SLB side channel
Nicholas Piggin [Mon, 18 Jan 2021 06:28:07 +0000 (16:28 +1000)]
KVM: PPC: Book3S HV: Fix radix guest SLB side channel

The slbmte instruction is legal in radix mode, including radix guest
mode. This means radix guests can load the SLB with arbitrary data.

KVM host does not clear the SLB when exiting a guest if it was a
radix guest, which would allow a rogue radix guest to use the SLB as
a side channel to communicate with other guests.

Fix this by ensuring the SLB is cleared when coming out of a radix
guest. Only the first 4 entries are a concern, because radix guests
always run with LPCR[UPRT]=1, which limits the reach of slbmte. slbia
is not used (except in a non-performance-critical path) because it
can clear cached translations.
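
As an illustrative C rendering of the idea (the real change lives in the
HV exit assembly, so this is only a sketch):

  unsigned long i;

  /* Clear the first four SLB entries: RS=0 wipes the VSID word, and RB
   * holds just the entry index with the valid bit clear. */
  for (i = 0; i < 4; i++)
          asm volatile("slbmte %0,%1" : : "r" (0UL), "r" (i));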

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
3 years ago KVM: PPC: Book3S HV: Remove support for running HPT guest on RPT host without mixed...
Nicholas Piggin [Mon, 18 Jan 2021 06:28:06 +0000 (16:28 +1000)]
KVM: PPC: Book3S HV: Remove support for running HPT guest on RPT host without mixed mode support

This reverts much of commit c01015091a770 ("KVM: PPC: Book3S HV: Run HPT
guests on POWER9 radix hosts"), which was required to run HPT guests on
RPT hosts on early POWER9 CPUs without support for "mixed mode", which
meant the host could not run with MMU on while guests were running.

This code has some corner case bugs, e.g., when the guest hits a machine
check or HMI the primary locks up waiting for secondaries to switch LPCR
to host, which they never do. This could all be fixed in software, but
most CPUs in production have mixed mode support, and those that don't
are believed to be all in installations that don't use this capability.
So simplify things and remove support.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Tested-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
3 years ago KVM: PPC: Book3S HV: Introduce new capability for 2nd DAWR
Ravi Bangoria [Wed, 16 Dec 2020 10:42:19 +0000 (16:12 +0530)]
KVM: PPC: Book3S HV: Introduce new capability for 2nd DAWR

Introduce KVM_CAP_PPC_DAWR1 which can be used by QEMU to query whether
KVM supports 2nd DAWR or not. The capability is by default disabled
even when the underlying CPU supports 2nd DAWR. QEMU needs to check
and enable it manually to use the feature.
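
A sketch of that userspace flow, using the standard KVM_CHECK_EXTENSION
and KVM_ENABLE_CAP ioctls (vm_fd is the VM file descriptor; error
handling elided):

  struct kvm_enable_cap cap = { .cap = KVM_CAP_PPC_DAWR1 };

  if (ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_PPC_DAWR1) > 0)
          ioctl(vm_fd, KVM_ENABLE_CAP, &cap);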

Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
3 years ago KVM: PPC: Book3S HV: Add infrastructure to support 2nd DAWR
Ravi Bangoria [Wed, 16 Dec 2020 10:42:18 +0000 (16:12 +0530)]
KVM: PPC: Book3S HV: Add infrastructure to support 2nd DAWR

KVM code assumes a single DAWR everywhere. Add code to support a 2nd DAWR.
DAWR is a hypervisor resource and thus H_SET_MODE hcall is used to set/
unset it. Introduce new case H_SET_MODE_RESOURCE_SET_DAWR1 for 2nd DAWR.
Also, KVM will support 2nd DAWR only if CPU_FTR_DAWR1 is set.
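
From the guest side, programming the second DAWR would look roughly like
the existing DAWR0 flow (sketch; the value variables are placeholders,
and the argument order follows the usual H_SET_MODE convention of
mflags, resource, value1, value2):

  rc = plpar_hcall_norets(H_SET_MODE, 0 /* mflags */,
                          H_SET_MODE_RESOURCE_SET_DAWR1,
                          dawr1_value, dawrx1_value);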

Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
3 years ago KVM: PPC: Book3S HV: Rename current DAWR macros and variables
Ravi Bangoria [Wed, 16 Dec 2020 10:42:17 +0000 (16:12 +0530)]
KVM: PPC: Book3S HV: Rename current DAWR macros and variables

Power10 is introducing a second DAWR (Data Address Watchpoint
Register). Use the real register names from the ISA (with suffix 0) for
the current macros and variables used by KVM.  One exception is
KVM_REG_PPC_DAWR.  Keep it as it is because it's uapi, so changing it
would break userspace.

Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
3 years ago KVM: PPC: Book3S HV: Allow nested guest creation when L0 hv_guest_state > L1
Ravi Bangoria [Wed, 16 Dec 2020 10:42:16 +0000 (16:12 +0530)]
KVM: PPC: Book3S HV: Allow nested guest creation when L0 hv_guest_state > L1

On powerpc, the L1 hypervisor enlists the help of L0, via the
H_ENTER_NESTED hcall, to load L2 guest state into the CPU. The L1
hypervisor prepares the L2 state in struct hv_guest_state and passes a
pointer to it via the hcall. Using that pointer, L0 reads/writes that
state directly from/to L1 memory, so L0 must be aware of the
hv_guest_state layout of L1. Currently it uses the version field for
this: if L0's hv_guest_state.version != L1's hv_guest_state.version,
L0 won't allow the nested KVM guest.

This restriction can be loosened a bit. L0 can be taught to understand
older layouts of hv_guest_state if we restrict new members to be added
only at the end, i.e. we can allow a nested guest even when L0's
hv_guest_state.version > L1's hv_guest_state.version. The other way
around, however, is not possible.
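
In pseudo-C, the relaxed rule reads (names and error code illustrative,
not the exact kernel logic):

  /* L0 accepts any layout up to its own version, but nothing newer. */
  if (l2_hv.version > HV_GUEST_STATE_VERSION)
          return H_P2;
  size = hv_guest_state_size(l2_hv.version);
  /* copy only 'size' bytes from L1; newer fields keep default values */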

Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
3 years ago Documentation: kvm: fix warning
Paolo Bonzini [Tue, 9 Feb 2021 10:07:45 +0000 (05:07 -0500)]
Documentation: kvm: fix warning

Documentation/virt/kvm/api.rst:4927: WARNING: Title underline too short.

4.130 KVM_XEN_VCPU_GET_ATTR
--------------------------

Fixes: e1f68169a4f8 ("KVM: Add documentation for Xen hypercall and shared_info updates")
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years ago KVM: x86/xen: Allow reset of Xen attributes
David Woodhouse [Mon, 8 Feb 2021 23:23:25 +0000 (23:23 +0000)]
KVM: x86/xen: Allow reset of Xen attributes

In order to support Xen SHUTDOWN_soft_reset (for guest kexec, etc.) the
VMM needs to be able to tear everything down and return the Xen features
to a clean slate.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Message-Id: <20210208232326.1830370-1-dwmw2@infradead.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years ago KVM: x86/mmu: Make HVA handler retpoline-friendly
Maciej S. Szmigiero [Mon, 8 Feb 2021 18:51:32 +0000 (19:51 +0100)]
KVM: x86/mmu: Make HVA handler retpoline-friendly

When retpolines are enabled they have high overhead in the inner loop
inside kvm_handle_hva_range() that iterates over the provided memory area.

Let's mark this function and its TDP MMU equivalent __always_inline so
the compiler will be able to change the call to the actual handler
function inside each of them into a direct one.
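
A self-contained illustration of the effect (not the kernel code): once
the iterator is always inlined, the handler pointer is a compile-time
constant at each call site and the indirect call can be devirtualized:

  /* The kernel gets __always_inline from <linux/compiler_types.h>;
   * defined here only to keep the snippet standalone. */
  #define __always_inline inline __attribute__((always_inline))

  typedef int (*hva_handler_t)(unsigned long hva);

  static __always_inline int handle_hva_range(unsigned long start,
                                              unsigned long end,
                                              hva_handler_t handler)
  {
          int ret = 0;
          unsigned long hva;

          for (hva = start; hva < end; hva += 4096)
                  ret |= handler(hva);    /* direct call after inlining */
          return ret;
  }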

This significantly improves performance on the unmap test on the existing
kernel memslot code (tested on a Xeon 8167M machine):
30 slots in use:
Test       Before   After     Improvement
Unmap      0.0353s  0.0334s   5%
Unmap 2M   0.00104s 0.000407s 61%

509 slots in use:
Test       Before   After     Improvement
Unmap      0.0742s  0.0740s   None
Unmap 2M   0.00221s 0.00159s  28%

It looks like having an indirect call in these functions (and thus a
retpoline) might have interfered with unrolling of the whole loop in
the CPU.

Signed-off-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Message-Id: <732d3fe9eb68aa08402a638ab0309199fa89ae56.1612810129.git.maciej.szmigiero@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years ago KVM: x86: hyper-v: Drop hv_vcpu_to_vcpu() helper
Vitaly Kuznetsov [Tue, 26 Jan 2021 13:48:16 +0000 (14:48 +0100)]
KVM: x86: hyper-v: Drop hv_vcpu_to_vcpu() helper

The hv_vcpu_to_vcpu() helper is only used by other helpers and is not
very complex, so we can drop it without much regret.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210126134816.1880136-16-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years ago KVM: x86: hyper-v: Allocate Hyper-V context lazily
Vitaly Kuznetsov [Tue, 26 Jan 2021 13:48:15 +0000 (14:48 +0100)]
KVM: x86: hyper-v: Allocate Hyper-V context lazily

Hyper-V context is only needed for guests which use Hyper-V emulation in
KVM (e.g. Windows/Hyper-V guests), so we don't actually need to allocate
it in kvm_arch_vcpu_create(); we can postpone the action until Hyper-V
specific MSRs are accessed or SynIC is enabled.

Once allocated, let's keep the context alive for the lifetime of the vCPU
as an attempt to free it would require additional synchronization with
other vCPUs and normally it is not supposed to happen.

Note, Hyper-V style hypercall enablement is done by writing to
HV_X64_MSR_GUEST_OS_ID so we don't need to worry about allocating Hyper-V
context from kvm_hv_hypercall().

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210126134816.1880136-15-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years ago KVM: x86: hyper-v: Make Hyper-V emulation enablement conditional
Vitaly Kuznetsov [Tue, 26 Jan 2021 13:48:14 +0000 (14:48 +0100)]
KVM: x86: hyper-v: Make Hyper-V emulation enablement conditional

Hyper-V emulation is enabled in KVM unconditionally. This is bad at
least from a security standpoint, as it is an extra attack surface.
Ideally, there should be a per-VM capability explicitly enabled by the
VMM, but currently that is not the case and we can't mandate one without
breaking backwards compatibility. We can, however, check guest-visible
CPUIDs and only enable Hyper-V emulation when the "Hv#1" interface is
exposed in HYPERV_CPUID_INTERFACE.
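
The gate boils down to something like this sketch (kvm_find_cpuid_entry()
and the HYPERV_CPUID_* constants are existing KVM definitions; the flag
assignment is illustrative):

  struct kvm_cpuid_entry2 *entry;

  entry = kvm_find_cpuid_entry(vcpu, HYPERV_CPUID_INTERFACE, 0);
  vcpu->arch.hyperv_enabled = entry &&
          entry->eax == HYPERV_CPUID_SIGNATURE_EAX;       /* "Hv#1" */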

Note, VMMs are free to act in any sequence they like, e.g. they can try
to set MSRs first and CPUIDs later so we still need to allow the host
to read/write Hyper-V specific MSRs unconditionally.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210126134816.1880136-14-vkuznets@redhat.com>
[Add selftest vcpu_set_hv_cpuid API to avoid breaking xen_vmcall_test. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years ago KVM: x86: hyper-v: Allocate 'struct kvm_vcpu_hv' dynamically
Vitaly Kuznetsov [Tue, 26 Jan 2021 13:48:13 +0000 (14:48 +0100)]
KVM: x86: hyper-v: Allocate 'struct kvm_vcpu_hv' dynamically

Hyper-V context is only needed for guests which use Hyper-V emulation in
KVM (e.g. Windows/Hyper-V guests). 'struct kvm_vcpu_hv' is, however, quite
big: it accounts for more than 1/4 of the total 'struct kvm_vcpu_arch',
which is also quite big already. This all looks like a waste.

Allocate 'struct kvm_vcpu_hv' dynamically. This patch does not bring any
(intentional) functional change as we still allocate the context
unconditionally but it paves the way to doing that only when needed.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210126134816.1880136-13-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years ago KVM: x86: hyper-v: Prepare to meet unallocated Hyper-V context
Vitaly Kuznetsov [Tue, 26 Jan 2021 13:48:12 +0000 (14:48 +0100)]
KVM: x86: hyper-v: Prepare to meet unallocated Hyper-V context

Currently, Hyper-V context is part of 'struct kvm_vcpu_arch' and is always
available. As a preparation to allocating it dynamically, check that it is
not NULL at call sites which can normally proceed without it, i.e. the
behavior is identical to the situation when Hyper-V emulation is not being
used by the guest.

When Hyper-V context for a particular vCPU is not allocated, we may still
need to get 'vp_index' from there. E.g. in a hypothetical situation when
Hyper-V emulation was enabled on one vCPU and wasn't on another, Hyper-V
style send-IPI hypercall may still be used. Luckily, vp_index is always
initialized to kvm_vcpu_get_idx() and can only be changed when Hyper-V
context is present. Introduce kvm_hv_get_vpindex() helper for
simplification.
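
The helper reduces to a NULL-safe lookup along these lines (sketch):

  static inline u32 kvm_hv_get_vpindex(struct kvm_vcpu *vcpu)
  {
          struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);

          return hv_vcpu ? hv_vcpu->vp_index : kvm_vcpu_get_idx(vcpu);
  }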

No functional change intended.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210126134816.1880136-12-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years ago KVM: x86: hyper-v: Always use to_hv_vcpu() accessor to get to 'struct kvm_vcpu_hv'
Vitaly Kuznetsov [Tue, 26 Jan 2021 13:48:11 +0000 (14:48 +0100)]
KVM: x86: hyper-v: Always use to_hv_vcpu() accessor to get to 'struct kvm_vcpu_hv'

As a preparation to allocating Hyper-V context dynamically, make it
clear who the user of said context is.

No functional change intended.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210126134816.1880136-11-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years ago KVM: x86: hyper-v: Stop shadowing global 'current_vcpu' variable
Vitaly Kuznetsov [Tue, 26 Jan 2021 13:48:10 +0000 (14:48 +0100)]
KVM: x86: hyper-v: Stop shadowing global 'current_vcpu' variable

The 'current_vcpu' variable in KVM is a per-cpu pointer to the currently
scheduled vcpu. The kvm_hv_flush_tlb()/kvm_hv_send_ipi() functions used
to have a local 'vcpu' variable to iterate over vCPUs, but it's gone
now and there's no need to use anything but the standard 'vcpu' as an
argument.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210126134816.1880136-10-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years ago KVM: x86: hyper-v: Introduce to_kvm_hv() helper
Vitaly Kuznetsov [Tue, 26 Jan 2021 13:48:09 +0000 (14:48 +0100)]
KVM: x86: hyper-v: Introduce to_kvm_hv() helper

Spelling '&kvm->arch.hyperv' correctly is hard. Also, this makes the code
more consistent with vmx/svm where to_kvm_vmx()/to_kvm_svm() are already
being used.
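
The accessor itself is trivial (sketch):

  static inline struct kvm_hv *to_kvm_hv(struct kvm *kvm)
  {
          return &kvm->arch.hyperv;
  }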

Opportunistically change kvm_hv_msr_{get,set}_crash_{data,ctl}() and
kvm_hv_msr_set_crash_data() to take 'kvm' instead of 'vcpu' as these
MSRs are partition wide.

No functional change intended.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210126134816.1880136-9-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years ago KVM: x86: hyper-v: Rename vcpu_to_hv_syndbg() to to_hv_syndbg()
Vitaly Kuznetsov [Tue, 26 Jan 2021 13:48:08 +0000 (14:48 +0100)]
KVM: x86: hyper-v: Rename vcpu_to_hv_syndbg() to to_hv_syndbg()

vcpu_to_hv_syndbg()'s argument is always 'vcpu' so there's no need to have
an additional prefix. Also, this makes the code more consistent with
vmx/svm where to_vmx()/to_svm() are being used.

No functional change intended.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210126134816.1880136-8-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years ago KVM: x86: hyper-v: Rename vcpu_to_stimer()/stimer_to_vcpu()
Vitaly Kuznetsov [Tue, 26 Jan 2021 13:48:07 +0000 (14:48 +0100)]
KVM: x86: hyper-v: Rename vcpu_to_stimer()/stimer_to_vcpu()

vcpu_to_stimer()'s argument is almost always 'vcpu' so there's no need to
have an additional prefix. Also, this makes the naming more consistent with
to_hv_vcpu()/to_hv_synic().

Rename stimer_to_vcpu() to hv_stimer_to_vcpu() for consistency.

No functional change intended.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210126134816.1880136-7-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 years ago KVM: x86: hyper-v: Rename vcpu_to_synic()/synic_to_vcpu()
Vitaly Kuznetsov [Tue, 26 Jan 2021 13:48:06 +0000 (14:48 +0100)]
KVM: x86: hyper-v: Rename vcpu_to_synic()/synic_to_vcpu()

vcpu_to_synic()'s argument is almost always 'vcpu' so there's no need to
have an additional prefix. Also, as this is used outside of Hyper-V
emulation code, add an '_hv_' part to make it clear what this is. This
makes the naming more consistent with to_hv_vcpu().

Rename synic_to_vcpu() to hv_synic_to_vcpu() for consistency.

No functional change intended.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210126134816.1880136-6-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>