platform/kernel/linux-starfive.git
Sean Christopherson [Mon, 23 Jul 2018 19:32:46 +0000 (12:32 -0700)]
KVM: vmx: compute need to reload FS/GS/LDT on demand

Remove fs_reload_needed and gs_ldt_reload_needed from host_state
and instead compute whether we need to reload various state at
the time we actually do the reload.  The state that is tracked
by the *_reload_needed variables is not any more volatile than
the trackers themselves.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Mon, 23 Jul 2018 19:32:45 +0000 (12:32 -0700)]
KVM: nVMX: remove a misleading comment regarding vmcs02 fields

prepare_vmcs02() has an odd comment that says certain fields are
"not in vmcs02".  AFAICT the intent of the comment is to document
that various VMCS fields are not handled by prepare_vmcs02(),
e.g. HOST_{FS,GS}_{BASE,SELECTOR}.  While technically true, the
comment is misleading, e.g. it can lead the reader to think that
KVM never writes those fields to vmcs02.

Remove the comment altogether as the handling of FS and GS is
not specific to nested VMX, and GUEST_PML_INDEX has been written
by prepare_vmcs02() since commit 4e59516a12a6 ("kvm: vmx: ensure
VMCS is current while enabling PML").

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Mon, 23 Jul 2018 19:32:44 +0000 (12:32 -0700)]
KVM: vmx: rename __vmx_load_host_state() and vmx_save_host_state()

Now that the vmx_load_host_state() wrapper is gone, i.e. the only
time we call the core functions is when we're actually about to
switch between guest/host, rename the functions that handle lazy
state switching to vmx_prepare_switch_to_{guest,host}_state() to
better document the full extent of their functionality.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Mon, 23 Jul 2018 19:32:43 +0000 (12:32 -0700)]
KVM: vmx: add dedicated utility to access guest's kernel_gs_base

When lazy save/restore of MSR_KERNEL_GS_BASE was introduced[1], the
MSR was intercepted in all modes and was only restored for the host
when the guest was in 64-bit mode.  So at the time, going through the
full host restore prior to accessing MSR_KERNEL_GS_BASE was necessary
to load host state and was not a significant waste of cycles.

Later, MSR_KERNEL_GS_BASE interception was disabled for a 64-bit
guest[2], and then unconditionally saved/restored for the host[3].
As a result, loading full host state is overkill for accesses to
MSR_KERNEL_GS_BASE, and completely unnecessary when the guest is
not in 64-bit mode.

Add a dedicated utility to read/write the guest's MSR_KERNEL_GS_BASE
(outside of the save/restore flow) to minimize the overhead incurred
when accessing the MSR.  When setting EFER, only decache the MSR if
the new EFER will disable long mode.

Removing out-of-band usage of vmx_load_host_state() also eliminates,
or at least reduces, potential corner cases in its usage, which in
turn will (hopefully) make it easier to reason about future changes
to the save/restore flow, e.g. optimization of saving host state.

[1] commit 44ea2b1758d8 ("KVM: VMX: Move MSR_KERNEL_GS_BASE out of the vmx autoload msr area")
[2] commit 5897297bc228 ("KVM: VMX: Don't intercept MSR_KERNEL_GS_BASE")
[3] commit c8770e7ba63b ("KVM: VMX: Fix host userspace gsbase corruption")

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Mon, 23 Jul 2018 19:32:42 +0000 (12:32 -0700)]
KVM: vmx: track host_state.loaded using a loaded_vmcs pointer

Using 'struct loaded_vmcs*' to track whether the CPU registers
contain host or guest state kills two birds with one stone.

  1. The (effective) boolean host_state.loaded is poorly named.
     It does not track whether or not host state is loaded into
     the CPU registers (which most readers would expect), but
     rather tracks if host state has been saved AND guest state
     is loaded.

  2. Using a loaded_vmcs pointer provides a more robust framework
     for the optimized guest/host state switching, especially when
     considering per-VMCS enhancements.  To that end, WARN_ONCE
     if we try to switch to host state with a different VMCS than
     was last used to save host state.

Resolve an occurrence of the new WARN by setting loaded_vmcs after
the call to vmx_vcpu_put() in vmx_switch_vmcs().

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Mon, 23 Jul 2018 19:32:41 +0000 (12:32 -0700)]
KVM: vmx: refactor segmentation code in vmx_save_host_state()

Use local variables in vmx_save_host_state() to temporarily track
the selector and base values for FS and GS, and reorganize the
code so that the 64-bit vs 32-bit portions are contained within
a single #ifdef.  This refactoring paves the way for future patches
to modify the updating of VMCS state with minimal changes to the
code, and (hopefully) simplifies resolving a likely conflict with
another in-flight patch[1] by being the whipping boy for future
patches.

[1] https://www.spinics.net/lists/kvm/msg171647.html

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Jim Mattson [Fri, 27 Jul 2018 20:44:45 +0000 (13:44 -0700)]
kvm: nVMX: Fix fault priority for VMX operations

When checking emulated VMX instructions for faults, the #UD for "IF
(not in VMX operation)" should take precedence over the #GP for "ELSIF
CPL > 0."
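
A minimal sketch of the corrected ordering, assuming the checks live in
a nested_vmx_check_permission()-style helper (simplified from the real
handler):

    static int nested_vmx_check_permission(struct kvm_vcpu *vcpu)
    {
            /* #UD first: the vCPU is not in VMX operation at all. */
            if (!to_vmx(vcpu)->nested.vmxon) {
                    kvm_queue_exception(vcpu, UD_VECTOR);
                    return 0;
            }

            /* Only then #GP for the CPL > 0 privilege violation. */
            if (vmx_get_cpl(vcpu)) {
                    kvm_queue_exception(vcpu, GP_VECTOR);
                    return 0;
            }

            return 1;
    }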

Suggested-by: Eric Northup <digitaleric@google.com>
Signed-off-by: Jim Mattson <jmattson@google.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Jim Mattson [Fri, 27 Jul 2018 16:18:50 +0000 (09:18 -0700)]
kvm: nVMX: Fix fault vector for VMX operation at CPL > 0

The fault that should be raised for a privilege level violation is #GP
rather than #UD.

Fixes: 727ba748e110b4 ("kvm: nVMX: Enforce cpl=0 for VMX instructions")
Signed-off-by: Jim Mattson <jmattson@google.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Fri, 27 Jul 2018 15:44:41 +0000 (17:44 +0200)]
KVM: try __get_user_pages_fast even if not in atomic context

We are currently cutting hva_to_pfn_fast short if we do not want an
immediate exit, which is represented by !async && !atomic.  However,
this is unnecessary, and __get_user_pages_fast is *much* faster
because the regular get_user_pages takes pmd_lock/pte_lock.
In fact, when many CPUs take a nested vmexit at the same time
the contention on those locks is visible, and this patch removes
about 25% (compared to 4.18) from vmexit.flat on a 16 vCPU
nested guest.
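
A sketch of the fast path this enables (a simplification of the
hva_to_pfn_fast() shape; the real function handles more cases):

    static bool hva_to_pfn_fast(unsigned long addr, bool write_fault,
                                kvm_pfn_t *pfn)
    {
            struct page *page;

            /* Lockless walk: no pmd_lock/pte_lock contention. */
            if (__get_user_pages_fast(addr, 1, write_fault, &page) == 1) {
                    *pfn = page_to_pfn(page);
                    return true;
            }

            /* Fall back to the slow get_user_pages() path. */
            return false;
    }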

Suggested-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Tianyu Lan [Thu, 19 Jul 2018 08:40:23 +0000 (08:40 +0000)]
KVM: vmx: Add tlb_remote_flush callback support

Register the tlb_remote_flush callback for vmx when the Hyper-V
capability for nested guest mapping flush is detected. The interface
helps reduce the overhead of flushing the EPT tables among vCPUs of
a nested VM. The traditional way is to send IPIs to all affected
vCPUs and execute INVEPT on each of them, which triggers several
vmexits for the IPIs and the INVEPT emulation. Hyper-V provides a
hypercall to do the flush for all vCPUs; call it when all EPT table
pointers of a single VM are the same.

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Tianyu Lan [Thu, 19 Jul 2018 08:40:17 +0000 (08:40 +0000)]
KVM: x86: Add tlb remote flush callback in kvm_x86_ops.

Provide a way for platforms to register a Hyper-V TLB remote flush
callback; this helps optimize TLB flushes among vCPUs for the nested
virtualization case.

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Tianyu Lan [Thu, 19 Jul 2018 08:40:12 +0000 (08:40 +0000)]
X86/Hyper-V: Add hyperv_nested_flush_guest_mapping ftrace support

Add hyperv_nested_flush_guest_mapping ftrace support to trace the
HvFlushGuestPhysicalAddressSpace hypercall.

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
Acked-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Tianyu Lan [Thu, 19 Jul 2018 08:40:06 +0000 (08:40 +0000)]
X86/Hyper-V: Add flush HvFlushGuestPhysicalAddressSpace hypercall support

Hyper-V supports the paravirtual hypercall
HvFlushGuestPhysicalAddressSpace to flush a nested VM's address space
mappings in the L1 hypervisor, which reduces the overhead of flushing
the EPT TLB among vCPUs. Implement it.

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
Acked-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Waiman Long [Tue, 17 Jul 2018 21:59:27 +0000 (17:59 -0400)]
x86/kvm: Don't use pvqspinlock code if only 1 vCPU

On a VM with only 1 vCPU, the locking fast path will always be
successful. In this case, there is no need to use the PV qspinlock
code, which has higher overhead on the unlock side than the native
qspinlock code.
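
The change amounts to an early-out; a sketch, assuming it sits in
kvm_spinlock_init() before the PV lock hooks are installed:

    /* With a single vCPU there is never lock contention, so keep the
     * cheaper native qspinlock paths instead of the PV ones. */
    if (num_possible_cpus() == 1)
            return;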

Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Tianyu Lan [Wed, 18 Jul 2018 06:12:04 +0000 (06:12 +0000)]
KVM/MMU: Simplify __kvm_sync_page() function

Merge the checks "sp->role.cr4_pae != !!is_pae(vcpu)" and
"vcpu->arch.mmu.sync_page(vcpu, sp) == 0", as
kvm_mmu_prepare_zap_page() is called under both conditions.
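
A sketch of the merged condition (simplified from __kvm_sync_page()):

    if (sp->role.cr4_pae != !!is_pae(vcpu) ||
        vcpu->arch.mmu.sync_page(vcpu, sp) == 0) {
            /* Stale or unsyncable page: zap it in either case. */
            kvm_mmu_prepare_zap_page(vcpu->kvm, sp, invalid_list);
            return false;
    }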

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Junaid Shahid [Wed, 27 Jun 2018 21:59:21 +0000 (14:59 -0700)]
kvm: x86: Remove CR3_PCID_INVD flag

It is a duplicate of X86_CR3_PCID_NOFLUSH. So just use that instead.

Signed-off-by: Junaid Shahid <junaids@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Junaid Shahid [Wed, 27 Jun 2018 21:59:20 +0000 (14:59 -0700)]
kvm: x86: Add multi-entry LRU cache for previous CR3s

Adds support for storing multiple previous CR3/root_hpa pairs maintained
as an LRU cache, so that the lockless CR3 switch path can be used when
switching back to any of them.

Signed-off-by: Junaid Shahid <junaids@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Junaid Shahid [Fri, 29 Jun 2018 20:10:05 +0000 (13:10 -0700)]
kvm: x86: Flush only affected TLB entries in kvm_mmu_invlpg*

kvm_mmu_invlpg() and kvm_mmu_invpcid_gva() only need to flush the TLB
entries for the specific guest virtual address, instead of flushing all
TLB entries associated with the VM.

Signed-off-by: Junaid Shahid <junaids@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Junaid Shahid [Wed, 27 Jun 2018 21:59:18 +0000 (14:59 -0700)]
kvm: x86: Skip shadow page resync on CR3 switch when indicated by guest

When the guest indicates that the TLB doesn't need to be flushed in a
CR3 switch, we can also skip resyncing the shadow page tables since an
out-of-sync shadow page table is equivalent to an out-of-sync TLB.

Signed-off-by: Junaid Shahid <junaids@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Junaid Shahid [Wed, 27 Jun 2018 21:59:17 +0000 (14:59 -0700)]
kvm: x86: Support selectively freeing either current or previous MMU root

kvm_mmu_free_roots() now takes a mask specifying which roots to free, so
that either one of the roots (active/previous) can be individually freed
when needed.

Signed-off-by: Junaid Shahid <junaids@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Junaid Shahid [Wed, 27 Jun 2018 21:59:16 +0000 (14:59 -0700)]
kvm: x86: Add a root_hpa parameter to kvm_mmu->invlpg()

This allows invlpg() to be called using either the active root_hpa
or the prev_root_hpa.

Signed-off-by: Junaid Shahid <junaids@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Junaid Shahid [Wed, 27 Jun 2018 21:59:15 +0000 (14:59 -0700)]
kvm: x86: Skip TLB flush on fast CR3 switch when indicated by guest

When PCIDs are enabled, the MSb of the source operand for a MOV-to-CR3
instruction indicates that the TLB doesn't need to be flushed.

This change enables this optimization for MOV-to-CR3s in the guest
that have been intercepted by KVM for shadow paging and are handled
within the fast CR3 switch path.
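
A sketch of the hint handling on the intercepted CR3 value, where
bit 63 is X86_CR3_PCID_NOFLUSH (simplified from the kvm_set_cr3()
change):

    bool skip_tlb_flush = false;

    if (kvm_read_cr4_bits(vcpu, X86_CR4_PCIDE) &&
        (cr3 & X86_CR3_PCID_NOFLUSH)) {
            skip_tlb_flush = true;
            /* Bit 63 is a hint, not part of the page-table address. */
            cr3 &= ~X86_CR3_PCID_NOFLUSH;
    }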

Signed-off-by: Junaid Shahid <junaids@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Junaid Shahid [Wed, 27 Jun 2018 21:59:14 +0000 (14:59 -0700)]
kvm: vmx: Support INVPCID in shadow paging mode

Implement support for INVPCID in shadow paging mode as well.

Signed-off-by: Junaid Shahid <junaids@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Junaid Shahid [Wed, 27 Jun 2018 21:59:13 +0000 (14:59 -0700)]
kvm: x86: Propagate guest PCIDs to host PCIDs

When using shadow paging mode, propagate the guest's PCID value to
the shadow CR3 in the host instead of always using PCID 0.

Signed-off-by: Junaid Shahid <junaids@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Junaid Shahid [Wed, 27 Jun 2018 21:59:12 +0000 (14:59 -0700)]
kvm: x86: Add ability to skip TLB flush when switching CR3

Remove the implicit flush from the set_cr3 handlers, so that the
callers are able to decide whether to flush the TLB or not.

Signed-off-by: Junaid Shahid <junaids@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Junaid Shahid [Wed, 27 Jun 2018 21:59:11 +0000 (14:59 -0700)]
kvm: x86: Use fast CR3 switch for nested VMX

Use the fast CR3 switch mechanism to locklessly change the MMU root
page when switching between L1 and L2. The switch from L2 to L1 should
always go through the fast path, while the switch from L1 to L2 should
go through the fast path if L1's CR3/EPTP for L2 hasn't changed
since the last time.

Signed-off-by: Junaid Shahid <junaids@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Junaid Shahid [Wed, 27 Jun 2018 21:59:10 +0000 (14:59 -0700)]
kvm: x86: Support resetting the MMU context without resetting roots

This adds support for re-initializing the MMU context in a different
mode while preserving the active root_hpa and the prev_root.

Signed-off-by: Junaid Shahid <junaids@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Junaid Shahid [Wed, 27 Jun 2018 21:59:09 +0000 (14:59 -0700)]
kvm: x86: Add support for fast CR3 switch across different MMU modes

This generalizes the lockless CR3 switch path to be able to work
across different MMU modes (e.g. nested vs non-nested) by checking
that the expected page role of the new root page matches the page role
of the previously stored root page in addition to checking that the new
CR3 matches the previous CR3. Furthermore, instead of loading the
hardware CR3 in fast_cr3_switch(), it is now done in vcpu_enter_guest(),
as by that time the MMU context would be up-to-date with the VCPU mode.
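
A hypothetical sketch of the generalized check (helper name invented
for illustration): the cached root is reused only when both the CR3
value and the root page's role match.

    static bool cached_root_usable(struct kvm_mmu *mmu, gpa_t new_cr3,
                                   union kvm_mmu_page_role new_role)
    {
            struct kvm_mmu_page *root = page_header(mmu->prev_root.hpa);

            return new_cr3 == mmu->prev_root.cr3 &&
                   root->role.word == new_role.word;
    }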

Signed-off-by: Junaid Shahid <junaids@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Junaid Shahid [Wed, 27 Jun 2018 21:59:08 +0000 (14:59 -0700)]
kvm: x86: Introduce KVM_REQ_LOAD_CR3

The KVM_REQ_LOAD_CR3 request loads the hardware CR3 using the
current root_hpa.

Signed-off-by: Junaid Shahid <junaids@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Junaid Shahid [Wed, 27 Jun 2018 21:59:07 +0000 (14:59 -0700)]
kvm: x86: Introduce kvm_mmu_calc_root_page_role()

These functions factor out the base role calculation from the
corresponding kvm_init_*_mmu() functions. The new functions return
what would be the role assigned to a root page in the current VCPU
state. This can be masked with mmu_base_role_mask to derive the base
role.

Signed-off-by: Junaid Shahid <junaids@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Junaid Shahid [Wed, 27 Jun 2018 21:59:06 +0000 (14:59 -0700)]
kvm: x86: Add fast CR3 switch code path

When using shadow paging, a CR3 switch in the guest results in a VM Exit.
In the common case, that VM exit doesn't require much processing by KVM.
However, it does acquire the MMU lock, which can start showing signs of
contention under some workloads even on a 2 VCPU VM when the guest is
using KPTI. Therefore, we add a fast path that avoids acquiring the MMU
lock in the most common cases e.g. when switching back and forth between
the kernel and user mode CR3s used by KPTI with no guest page table
changes in between.

For now, this fast path is implemented only for 64-bit guests and hosts
to avoid the handling of PDPTEs, but it can be extended later to 32-bit
guests and/or hosts as well.

Signed-off-by: Junaid Shahid <junaids@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Junaid Shahid [Wed, 27 Jun 2018 21:59:05 +0000 (14:59 -0700)]
kvm: x86: Avoid taking MMU lock in kvm_mmu_sync_roots if no sync is needed

kvm_mmu_sync_roots() can locklessly check whether a sync is needed and just
bail out if it isn't.
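
A sketch of the lockless early-out pattern (simplified; the real code
uses the appropriate memory barriers and re-checks under the lock):

    /* Unsync state is only ever set with mmu_lock held, so an
     * unlocked negative check lets us skip the lock entirely. */
    if (!sp->unsync && !sp->unsync_children)
            return;

    spin_lock(&vcpu->kvm->mmu_lock);
    mmu_sync_children(vcpu, sp);
    spin_unlock(&vcpu->kvm->mmu_lock);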

Signed-off-by: Junaid Shahid <junaids@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Junaid Shahid [Wed, 27 Jun 2018 21:59:04 +0000 (14:59 -0700)]
kvm: x86: Make sync_page() flush remote TLBs once only

sync_page() calls set_spte() from a loop across a page table. It would
work better if set_spte() left the TLB flushing to its callers, so that
sync_page() can aggregate into a single call.
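
A sketch of the aggregation, with hypothetical names: set_spte()
reports whether a flush is needed instead of flushing itself, and the
caller flushes once at the end.

    bool flush = false;
    int i;

    for (i = 0; i < PT64_ENT_PER_PAGE; i++)
            flush |= sync_one_spte(vcpu, sp, i);    /* wraps set_spte() */

    if (flush)
            kvm_flush_remote_tlbs(vcpu->kvm);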

Signed-off-by: Junaid Shahid <junaids@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Peter Xu [Wed, 18 Jul 2018 07:57:50 +0000 (15:57 +0800)]
KVM: MMU: drop vcpu param in gpte_access

It's never used.  Drop it.

Signed-off-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Liran Alon [Fri, 22 Jun 2018 23:35:13 +0000 (02:35 +0300)]
KVM: nVMX: Separate logic allocating shadow vmcs to a function

No functionality change.
This is done as a preparation for VMCS shadowing virtualization.

Signed-off-by: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Liran Alon [Fri, 22 Jun 2018 23:35:12 +0000 (02:35 +0300)]
KVM: VMX: Mark vmcs header as shadow in case alloc_vmcs_cpu() allocates a shadow vmcs

No functionality change.

Signed-off-by: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Liran Alon [Fri, 22 Jun 2018 23:35:11 +0000 (02:35 +0300)]
KVM: nVMX: Expose VMCS shadowing to L1 guest

Expose VMCS shadowing to L1 as a VMX capability of the virtual CPU,
whether or not VMCS shadowing is supported by the physical CPU (i.e.,
VMCS shadowing is emulated when the hardware lacks it).

Shadowed VMREADs and VMWRITEs from L2 are handled by L0, without a
VM-exit to L1.

Signed-off-by: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Liran Alon [Fri, 22 Jun 2018 23:35:10 +0000 (02:35 +0300)]
KVM: nVMX: Do not forward VMREAD/VMWRITE VMExits to L1 if required so by vmcs12 vmread/vmwrite bitmaps

This is done as a preparation for VMCS shadowing emulation.

Signed-off-by: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Liran Alon [Fri, 22 Jun 2018 23:35:09 +0000 (02:35 +0300)]
KVM: nVMX: vmread/vmwrite: Use shadow vmcs12 if running L2

This is done as a preparation for VMCS shadowing emulation.

Signed-off-by: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Sat, 28 Jul 2018 22:14:11 +0000 (00:14 +0200)]
KVM: selftests: add tests for shadow VMCS save/restore

This includes setting up the shadow VMCS and the secondary execution
controls in lib/vmx.c.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Wed, 18 Jul 2018 17:45:51 +0000 (19:45 +0200)]
KVM: nVMX: include shadow vmcs12 in nested state

The shadow vmcs12 cannot be flushed on KVM_GET_NESTED_STATE,
because at that point guest memory is assumed by userspace to
be immutable.  Capture the cache in vmx_get_nested_state, adding
another page at the end if there is an active shadow vmcs12.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Liran Alon [Fri, 22 Jun 2018 23:35:08 +0000 (02:35 +0300)]
KVM: nVMX: Cache shadow vmcs12 on VMEntry and flush to memory on VMExit

This is done as a preparation for VMCS shadowing emulation.

Signed-off-by: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Liran Alon [Fri, 22 Jun 2018 23:35:07 +0000 (02:35 +0300)]
KVM: nVMX: Verify VMCS shadowing VMCS link pointer

Intel SDM considers these checks to be part of
"Checks on Guest Non-Register State".

Note that it is legal for vmcs->vmcs_link_pointer to be -1ull
when VMCS shadowing is enabled. In this case, any VMREAD/VMWRITE to
a shadowed field sets the ALU flags for VMfailInvalid (i.e. CF=1).

Signed-off-by: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Liran Alon [Fri, 22 Jun 2018 23:35:06 +0000 (02:35 +0300)]
KVM: nVMX: Verify VMCS shadowing controls

Signed-off-by: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Liran Alon [Fri, 22 Jun 2018 23:35:05 +0000 (02:35 +0300)]
KVM: nVMX: Introduce nested_cpu_has_shadow_vmcs()

Signed-off-by: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Liran Alon [Fri, 22 Jun 2018 23:35:04 +0000 (02:35 +0300)]
KVM: nVMX: Fail VMLAUNCH and VMRESUME on shadow VMCS

Signed-off-by: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Liran Alon [Wed, 18 Jul 2018 12:07:59 +0000 (14:07 +0200)]
KVM: nVMX: Allow VMPTRLD for shadow VMCS if vCPU supports VMCS shadowing

Signed-off-by: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Liran Alon [Fri, 22 Jun 2018 23:35:02 +0000 (02:35 +0300)]
KVM: VMX: Change vmcs12_{read,write}_any() to receive vmcs12 as parameter

No functionality change.
This is done as a preparation for VMCS shadowing emulation.

Signed-off-by: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Liran Alon [Fri, 22 Jun 2018 23:35:01 +0000 (02:35 +0300)]
KVM: VMX: Create struct for VMCS header

No functionality change.

Signed-off-by: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Sat, 28 Jul 2018 19:56:09 +0000 (21:56 +0200)]
kvm: selftests: add test for nested state save/restore

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Jim Mattson [Tue, 10 Jul 2018 09:27:20 +0000 (11:27 +0200)]
kvm: nVMX: Introduce KVM_CAP_NESTED_STATE

For nested virtualization, L0 KVM manages a bit of state for L2 guests
that cannot be captured through the currently available IOCTLs. In
fact the state captured through all of these IOCTLs is usually a mix of L1
and L2 state. It is also dependent on whether the L2 guest was running at
the moment when the process was interrupted to save its state.

With this capability, there are two new vcpu ioctls: KVM_GET_NESTED_STATE
and KVM_SET_NESTED_STATE. These can be used for saving and restoring a VM
that is in VMX operation.
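
A hypothetical userspace sketch of how the pair is used (error
handling omitted; the maximum state size is advertised via
KVM_CHECK_EXTENSION):

    int size = ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_NESTED_STATE);
    struct kvm_nested_state *state = calloc(1, size);

    state->size = size;
    ioctl(vcpu_fd, KVM_GET_NESTED_STATE, state);    /* save */
    /* ... destroy and recreate the VM, or migrate ... */
    ioctl(vcpu_fd, KVM_SET_NESTED_STATE, state);    /* restore */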

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Jim Mattson <jmattson@google.com>
[karahmed@ - rename structs and functions and make them ready for AMD and
             address previous comments.
           - handle nested.smm state.
           - rebase & a bit of refactoring.
           - Merge 7/8 and 8/8 into one patch. ]
Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Wed, 18 Jul 2018 16:49:01 +0000 (18:49 +0200)]
KVM: x86: do not load vmcs12 pages while still in SMM

If the vCPU enters system management mode while running a nested guest,
RSM starts processing the vmentry while still in SMM.  In that case,
however, the pages pointed to by the vmcs12 might be incorrectly
loaded from SMRAM.  To avoid this, delay the handling of the pages
until just before the next vmentry.  This is done with a new request
and a new entry in kvm_x86_ops, which we will be able to reuse for
nested VMX state migration.

Extracted from a patch by Jim Mattson and KarimAllah Ahmed.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Thu, 26 Jul 2018 11:19:23 +0000 (13:19 +0200)]
kvm: selftests: add basic test for state save and restore

The test calls KVM_RUN repeatedly, and creates an entirely new VM with the
old memory and vCPU state on every exit to userspace.  The kvm_util API is
expanded with two functions that manage the lifetime of a kvm_vm struct:
the first closes the file descriptors and leaves the memory allocated,
and the second opens the file descriptors and reuses the memory from
the previous incarnation of the kvm_vm struct.

For now the test is very basic, as it does not test for example XSAVE or
vCPU events.  However, it will test nested virtualization state starting
with the next patch.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Thu, 26 Jul 2018 11:02:24 +0000 (13:02 +0200)]
kvm: selftests: ensure vcpu file is released

The selftests were not munmap-ing the kvm_run area from the vcpu file descriptor.
The result was that kvm_vcpu_release was not called and a reference was left in the
parent "struct kvm".  Ultimately this was visible in the upcoming state save/restore
test as an error when KVM attempted to create a duplicate debugfs entry.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Sat, 28 Jul 2018 16:45:38 +0000 (18:45 +0200)]
kvm: selftests: actually use all of lib/vmx.c

The allocation of the VMXON and VMCS is currently done twice, in
lib/vmx.c and in vmx_tsc_adjust_test.c.  Reorganize the code to
provide a cleaner and easier to use API to the tests.  lib/vmx.c
now does the complete setup of the VMX data structures, but does not
create the VM or set CPUID.  This has to be done by the caller.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Sat, 28 Jul 2018 16:09:44 +0000 (18:09 +0200)]
kvm: selftests: create a GDT and TSS

The GDT and the TSS base were left to zero, and this has interesting effects
when the TSS descriptor is later read to set up a VMCS's TR_BASE.  Basically
it worked by chance, and this patch fixes it by setting up all the protected
mode data structures properly.

Because the GDT and TSS addresses are virtual, the page tables now always
exist at the time of vcpu setup.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Thu, 26 Jul 2018 11:01:52 +0000 (13:01 +0200)]
KVM: x86: ensure all MSRs can always be KVM_GET/SET_MSR'd

Some of the MSRs returned by GET_MSR_INDEX_LIST currently cannot be sent back
to KVM_GET_MSR and/or KVM_SET_MSR; either they can never be sent back, or
they are only accepted under special conditions.  This makes the API a pain
to use.

To avoid this pain, this patch makes it so that the result of the get-list
ioctl can always be used for host-initiated get and set.  Since we don't have
a separate way to check for read-only MSRs, this means some Hyper-V MSRs are
ignored when written.  Arguably they should not even be in the result of
GET_MSR_INDEX_LIST, but I am leaving them there in case userspace is using the
outcome of GET_MSR_INDEX_LIST to derive the support for the corresponding
Hyper-V feature.

Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Sean Christopherson [Wed, 11 Jul 2018 16:54:30 +0000 (09:54 -0700)]
KVM: vmx: remove save/restore of host BNDCFGS MSR

Linux does not support Memory Protection Extensions (MPX) in the
kernel itself, thus the BNDCFGS (Bound Config Supervisor) MSR will
always be zero in the KVM host, i.e. RDMSR in vmx_save_host_state()
is superfluous.  KVM unconditionally sets VM_EXIT_CLEAR_BNDCFGS,
i.e. BNDCFGS will always be zero after VMEXIT, thus manually loading
BNDCFGS is also superfluous.

And in the event the MPX kernel support is added (unlikely given
that MPX for userspace is in its death throes[1]), BNDCFGS will
likely be common across all CPUs[2], and at the least shouldn't
change on a regular basis, i.e. saving the MSR on every VMENTRY is
completely unnecessary.

WARN_ONCE in hardware_setup() if the host's BNDCFGS is non-zero to
document that KVM does not preserve BNDCFGS and to serve as a hint
as to how BNDCFGS likely should be handled if MPX is used in the
kernel, e.g. BNDCFGS should be saved once during KVM setup.

[1] https://lkml.org/lkml/2018/4/27/1046
[2] http://www.openwall.com/lists/kernel-hardening/2017/07/24/28

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KarimAllah Ahmed [Tue, 10 Jul 2018 09:27:19 +0000 (11:27 +0200)]
KVM: Switch 'requests' to be 64-bit (explicitly)

Switch 'requests' to be explicitly 64-bit and update BUILD_BUG_ON check to
use the size of "requests" instead of the hard-coded '32'.

That gives us a bit more room again for arch-specific requests as we
already ran out of space for x86 due to the hard-coded check.

The only exception here is ARM32, which is still 32-bit.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Wei Huang [Tue, 26 Jun 2018 03:41:57 +0000 (23:41 -0400)]
kvm: selftests: add cr4_cpuid_sync_test

KVM is supposed to update some guest VM's CPUID bits (e.g. OSXSAVE) when
CR4 is changed. A bug was found in KVM recently and it was fixed by
Commit c4d2188206ba ("KVM: x86: Update cpuid properly when CR4.OSXAVE or
CR4.PKE is changed"). This patch adds a test to verify the synchronization
between guest VM's CR4 and CPUID bits.

Signed-off-by: Wei Huang <wei@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini [Mon, 6 Aug 2018 15:31:36 +0000 (17:31 +0200)]
Merge tag 'v4.18-rc6' into HEAD

Pull bug fixes into the KVM development tree to avoid nasty conflicts.

Paolo Bonzini [Thu, 2 Aug 2018 11:57:29 +0000 (13:57 +0200)]
Merge tag 'kvm-s390-next-4.19-1' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into HEAD

KVM: s390: Features for 4.19

- initial version for host large page support. Must be enabled with
  module parameter hpage=1 and will conflict with the nested=1
  parameter.
- enable etoken facility for guests
- Fixes

Paolo Bonzini [Thu, 2 Aug 2018 11:57:26 +0000 (13:57 +0200)]
Merge tag 'kvm-ppc-next-4.19-1' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc into HEAD

PPC KVM update for 4.19.

This update adds no new features; it just has some minor code cleanups
and bug fixes, including a fix to allow us to create KVM_MAX_VCPUS
vCPUs on POWER9 in all CPU threading modes.

Janosch Frank [Mon, 30 Jul 2018 21:20:00 +0000 (23:20 +0200)]
Merge tag 'hlp_stage1' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into kvms390/next

KVM: s390: initial host large page support

- must be enabled via module parameter hpage=1
- cannot be used together with nested
- does support migration
- does support hugetlbfs
- no THP yet

Janosch Frank [Fri, 13 Jul 2018 10:28:31 +0000 (11:28 +0100)]
KVM: s390: Add huge page enablement control

General KVM huge page support on s390 has to be enabled via the
kvm.hpage module parameter. Either nested or hpage can be enabled, as
we currently do not support vSIE for huge backed guests. Once the vSIE
support is added we will either drop the parameter or enable it as
default.

For a guest the feature has to be enabled through the new
KVM_CAP_S390_HPAGE_1M capability and the hpage module
parameter. Enabling it means that cmm can't be enabled for the vm and
disables pfmf and storage key interpretation.

This is due to the fact that in some cases, in upcoming patches, we
have to split huge pages in the guest mapping to be able to set more
granular memory protection on 4k pages. These split pages have fake
page tables that are not visible to the Linux memory management which
subsequently will not manage its PGSTEs, while the SIE will. Disabling
these features lets us manage PGSTE data in a consistent manner and
solve that problem.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Janosch Frank [Fri, 13 Jul 2018 10:28:37 +0000 (11:28 +0100)]
s390/mm: Add huge page gmap linking support

Let's allow huge pmd linking when enabled through the
KVM_CAP_S390_HPAGE_1M capability. Also we can now restrict gmap
invalidation and notification to the cases where the capability has
been activated and save some cycles when that's not the case.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Dominik Dingel [Fri, 13 Jul 2018 10:28:29 +0000 (11:28 +0100)]
s390/mm: hugetlb pages within a gmap can not be freed

Guests backed by huge pages could theoretically free unused pages via
the diagnose 10 instruction. We currently don't allow that, so we
don't have to refault it once it's needed again.

Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Janosch Frank [Fri, 20 Jul 2018 12:51:21 +0000 (13:51 +0100)]
KVM: s390: Beautify skey enable check

Let's introduce an explicit check if skeys have already been enabled
for the vcpu, so we don't have to check the mm context if we don't have
the storage key facility.

This lets us check for enablement without having to take the mm
semaphore and thus speed up skey emulation.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Acked-by: Farhan Ali <alifm@linux.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Janosch Frank [Wed, 18 Jul 2018 12:40:22 +0000 (13:40 +0100)]
KVM: s390: Add skey emulation fault handling

When doing skey emulation for huge guests, we now need to fault in
pmds, as we don't have PGSTEs anymore to store them when we do not
have valid table entries.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Janosch Frank [Fri, 13 Jul 2018 10:28:28 +0000 (11:28 +0100)]
s390/mm: Add huge pmd storage key handling

Storage keys for guests with huge page mappings have to be managed in
hardware. There are no PGSTEs for PMDs that we could use to retain the
guest's logical view of the key.

Signed-off-by: Janosch Frank <frankja@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Janosch Frank [Fri, 13 Jul 2018 10:28:26 +0000 (11:28 +0100)]
s390/mm: Clear skeys for newly mapped huge guest pmds

Similarly to the pte skey handling, where we set the storage key to
the default key for each newly mapped pte, we have to also do that for
huge pmds.

With the PG_arch_1 flag we keep track if the area has already been
cleared of its skeys.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Dominik Dingel [Fri, 13 Jul 2018 10:28:25 +0000 (11:28 +0100)]
s390/mm: Clear huge page storage keys on enable_skey

When a guest starts using storage keys, we trap and set a default one
for its whole valid address space. With this patch we are now able to
do that for large pages.

To speed up the storage key insertion, we use
__storage_key_init_range, which in turn will use sske_frame to set
multiple storage keys with one instruction. As it has previously been
used for debugging, we have to get rid of the default key check and
make it quiescing.

Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Signed-off-by: Janosch Frank <frankja@linux.vnet.ibm.com>
[replaced page_set_storage_key loop with __storage_key_init_range]
Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Janosch Frank [Tue, 17 Jul 2018 12:21:22 +0000 (13:21 +0100)]
s390/mm: Add huge page dirty sync support

To do dirty logging with huge pages, we protect huge pmds in the
gmap. When they are written to, we unprotect them and mark them dirty.

We introduce the function gmap_test_and_clear_dirty_pmd which handles
dirty sync for huge pages.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Acked-by: David Hildenbrand <david@redhat.com>
Janosch Frank [Fri, 13 Jul 2018 10:28:22 +0000 (11:28 +0100)]
s390/mm: Add gmap pmd invalidation and clearing

If the host invalidates a pmd, we also have to invalidate the
corresponding gmap pmds, as well as flush them from the TLB. This is
necessary, as we don't share the pmd tables between host and guest as
we do with ptes.

The clearing part of these three new functions sets a guest pmd entry
to _SEGMENT_ENTRY_EMPTY, so the guest will fault on it and we will
re-link it.

Flushing the gmap is not necessary in the host's lazy local and csp
cases. Both purge the TLB completely.

Signed-off-by: Janosch Frank <frankja@linux.vnet.ibm.com>
Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Acked-by: David Hildenbrand <david@redhat.com>
Janosch Frank [Fri, 13 Jul 2018 10:28:21 +0000 (11:28 +0100)]
s390/mm: Add gmap pmd notification bit setting

Like for ptes, we also need invalidation notification for pmds, to
make sure the guest lowcore pages are always accessible and to allow
the later addition of shadowed pmds.

With PMDs we do not have PGSTEs or some other bits we could use in the
host PMD. Instead we pick one of the free bits in the gmap PMD. Every
time a host pmd is invalidated, we will check whether the respective
gmap PMD has the bit set and in that case fire up the notifier.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Janosch Frank [Fri, 13 Jul 2018 10:28:20 +0000 (11:28 +0100)]
s390/mm: Add gmap pmd linking

Let's allow pmds to be linked into gmap for the upcoming s390 KVM huge
page support.

Before this patch we copied the full userspace pmd entry. This is not
correct, as it contains SW defined bits that might be interpreted
differently in the GMAP context. Now we only copy over all hardware
relevant information leaving out the software bits.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Janosch Frank [Fri, 13 Jul 2018 10:28:18 +0000 (11:28 +0100)]
s390/mm: Abstract gmap notify bit setting

Currently we use the software PGSTE bits PGSTE_IN_BIT and
PGSTE_VSIE_BIT to notify before an invalidation occurs on a prefix
page or a VSIE page respectively. Both bits are pgste specific, but
are used when protecting a memory range.

Let's introduce abstract GMAP_NOTIFY_* bits that will be realized into
the respective bits when gmap DAT table entries are protected.

Signed-off-by: Janosch Frank <frankja@linux.vnet.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Janosch Frank [Fri, 13 Jul 2018 10:28:16 +0000 (11:28 +0100)]
s390/mm: Make gmap_protect_range more modular

This patch reworks the gmap_protect_range logic and extracts the pte
handling into its own function. Also we now walk to the pmd and make
it accessible in the function for later use. This way we can add huge
page handling logic more easily.

Signed-off-by: Janosch Frank <frankja@linux.vnet.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Paul Mackerras [Thu, 26 Jul 2018 05:38:41 +0000 (15:38 +1000)]
KVM: PPC: Book3S HV: Read kvm->arch.emul_smt_mode under kvm->lock

Commit 1e175d2 ("KVM: PPC: Book3S HV: Pack VCORE IDs to access full
VCPU ID space", 2018-07-25) added code that uses kvm->arch.emul_smt_mode
before any VCPUs are created.  However, userspace can change
kvm->arch.emul_smt_mode at any time up until the first VCPU is created.
Hence it is (theoretically) possible for the check in
kvmppc_core_vcpu_create_hv() to race with another userspace thread
changing kvm->arch.emul_smt_mode.

This fixes it by moving the test that uses kvm->arch.emul_smt_mode into
the block where kvm->lock is held.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Paul Mackerras [Thu, 26 Jul 2018 04:53:54 +0000 (14:53 +1000)]
KVM: PPC: Book3S HV: Allow creating max number of VCPUs on POWER9

Commit 1e175d2 ("KVM: PPC: Book3S HV: Pack VCORE IDs to access full
VCPU ID space", 2018-07-25) allowed use of VCPU IDs up to
KVM_MAX_VCPU_ID on POWER9 in all guest SMT modes and guest emulated
hardware SMT modes.  However, with the current definition of
KVM_MAX_VCPU_ID, a guest SMT mode of 1 and an emulated SMT mode of 8,
it is only possible to create KVM_MAX_VCPUS / 2 VCPUS, because
threads_per_subcore is 4 on POWER9 CPUs.  (Using an emulated SMT mode
of 8 is useful when migrating VMs to or from POWER8 hosts.)

This increases KVM_MAX_VCPU_ID to 8 * KVM_MAX_VCPUS when HV KVM is
configured in, so that a full complement of KVM_MAX_VCPUS VCPUs can
be created on POWER9 in all guest SMT modes and emulated hardware
SMT modes.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Sam Bobroff [Wed, 25 Jul 2018 06:12:02 +0000 (16:12 +1000)]
KVM: PPC: Book3S HV: Pack VCORE IDs to access full VCPU ID space

It is not currently possible to create the full number of possible
VCPUs (KVM_MAX_VCPUS) on Power9 with KVM-HV when the guest uses fewer
threads per core than its core stride (or "VSMT mode"). This is
because the VCORE ID and XIVE offsets grow beyond KVM_MAX_VCPUS
even though the VCPU ID is less than KVM_MAX_VCPU_ID.

To address this, "pack" the VCORE ID and XIVE offsets by using
knowledge of the way the VCPU IDs will be used when there are fewer
guest threads per core than the core stride. The primary thread of
each core will always be used first. Then, if the guest uses more than
one thread per core, these secondary threads will sequentially follow
the primary in each core.

So the only way an ID above KVM_MAX_VCPUS can be seen is if the
VCPUs are being spaced apart, so at least half of each core is empty,
and IDs between KVM_MAX_VCPUS and (KVM_MAX_VCPUS * 2) can be mapped
into the second half of each core (4..7, in an 8-thread core).

Similarly, if IDs above KVM_MAX_VCPUS * 2 are seen, at least 3/4 of
each core is being left empty, and we can map down into the second and
third quarters of each core (2, 3 and 5, 6 in an 8-thread core).

Lastly, if IDs above KVM_MAX_VCPUS * 4 are seen, only the primary
threads are being used and 7/8 of the core is empty, allowing use of
the 1, 5, 3 and 7 thread slots.

(Strides less than 8 are handled similarly.)

This allows the VCORE ID or offset to be calculated quickly from the
VCPU ID or XIVE server numbers, without access to the VCPU structure.

[paulus@ozlabs.org - tidied up comment a little, changed some WARN_ONCE
 to pr_devel, wrapped line, fixed id check.]
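
A sketch of the packing arithmetic described above (names and exact
formula are an illustrative reconstruction from this message, assuming
an 8-thread core and the primary-threads-only case; the real helper
also scales the block index by the guest's thread stride):

    /* Block b = id / KVM_MAX_VCPUS selects which otherwise-empty
     * thread slot of the core the ID is folded into: primaries
     * first, then the empty half, the empty quarters, and finally
     * the remaining odd slots. */
    static const int block_offsets[8] = { 0, 4, 2, 6, 1, 5, 3, 7 };

    static u32 pack_vcpu_id(struct kvm *kvm, u32 id)
    {
            int block = id / KVM_MAX_VCPUS;

            return (id % KVM_MAX_VCPUS) + block_offsets[block];
    }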

Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Linus Torvalds [Sun, 22 Jul 2018 21:12:20 +0000 (14:12 -0700)]
Linux 4.18-rc6

Linus Torvalds [Sun, 22 Jul 2018 20:21:45 +0000 (13:21 -0700)]
Merge tag 'nvme-for-4.18' of git://git.infradead.org/nvme

Pull NVMe fixes from Christoph Hellwig:

 - fix a regression in 4.18 that causes a memory leak on probe failure
   (Keith Busch)

 - fix a deadlock in the passthrough ioctl code (Scott Bauer)

 - don't enable AENs if not supported (Weiping Zhang)

 - fix an old regression in metadata handling in the passthrough ioctl
   code (Roland Dreier)

* tag 'nvme-for-4.18' of git://git.infradead.org/nvme:
  nvme: fix handling of metadata_len for NVME_IOCTL_IO_CMD
  nvme: don't enable AEN if not supported
  nvme: ensure forward progress during Admin passthru
  nvme-pci: fix memory leak on probe failure

Linus Torvalds [Sun, 22 Jul 2018 19:04:51 +0000 (12:04 -0700)]
Merge branch 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs

Pull vfs fixes from Al Viro:
 "Fix several places that screw up cleanups after failures halfway
  through opening a file (one open-coding filp_clone_open() and getting
  it wrong, two misusing alloc_file()). That part is -stable fodder from
  the 'work.open' branch.

  And Christoph's regression fix for uapi breakage in aio series;
  include/uapi/linux/aio_abi.h shouldn't be pulling in the kernel
  definition of sigset_t, the reason for doing so in the first place had
  been bogus - there's no need to expose struct __aio_sigset in
  aio_abi.h at all"

* 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  aio: don't expose __aio_sigset in uapi
  ocxlflash_getfile(): fix double-iput() on alloc_file() failures
  cxl_getfile(): fix double-iput() on alloc_file() failures
  drm_mode_create_lease_ioctl(): fix open-coded filp_clone_open()

Al Viro [Sun, 22 Jul 2018 14:07:11 +0000 (15:07 +0100)]
alpha: fix osf_wait4() breakage

kernel_wait4() expects a userland address for status - it's only
rusage that goes as a kernel one (and needs a copyout afterwards)

[ Also, fix the prototype of kernel_wait4() to have that __user
  annotation   - Linus ]

Fixes: 92ebce5ac55d ("osf_wait4: switch to kernel_wait4()")
Cc: stable@kernel.org # v4.13+
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Linus Torvalds [Sun, 22 Jul 2018 00:27:42 +0000 (17:27 -0700)]
Merge tag 'armsoc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc

Pull ARM SoC fixes from Olof Johansson:

 - Fix interrupt type on ethernet switch for i.MX-based RDU2

 - GPC on i.MX exposed too large a register window which resulted in
   userspace being able to crash the machine.

 - Fixup of bad merge resolution moving GPIO DT nodes under pinctrl on
   droid4.

* tag 'armsoc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc:
  ARM: dts: imx6: RDU2: fix irq type for mv88e6xxx switch
  soc: imx: gpc: restrict register range for regmap access
  ARM: dts: omap4-droid4: fix dts w.r.t. pwm

Linus Torvalds [Sun, 22 Jul 2018 00:25:49 +0000 (17:25 -0700)]
Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 fix from Ingo Molnar:
 "A single fix for a MCE-polling regression, which prevented the
  disabling of polling"

* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/MCE: Remove min interval polling limitation

Linus Torvalds [Sun, 22 Jul 2018 00:23:58 +0000 (17:23 -0700)]
Merge branch 'x86-pti-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 pti fixes from Ingo Molnar:
 "An APM fix, and a BTS hardware-tracing fix related to PTI changes"

* 'x86-pti-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/apm: Don't access __preempt_count with zeroed fs
  x86/events/intel/ds: Fix bts_interrupt_threshold alignment

Linus Torvalds [Sun, 22 Jul 2018 00:21:34 +0000 (17:21 -0700)]
Merge branch 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull scheduler fixes from Ingo Molnar:
 "Two fixes: a stop-machine preemption fix and a SCHED_DEADLINE fix"

* 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  sched/deadline: Fix switched_from_dl() warning
  stop_machine: Disable preemption when waking two stopper threads

Linus Torvalds [Sat, 21 Jul 2018 23:52:08 +0000 (16:52 -0700)]
Merge branch 'core-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull core kernel fixes from Ingo Molnar:
 "This is mostly the copy_to_user_mcsafe() related fixes from Dan
  Williams, and an ORC fix for Clang"

* 'core-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/asm/memcpy_mcsafe: Fix copy_to_user_mcsafe() exception handling
  lib/iov_iter: Fix pipe handling in _copy_to_iter_mcsafe()
  lib/iov_iter: Document _copy_to_iter_flushcache()
  lib/iov_iter: Document _copy_to_iter_mcsafe()
  objtool: Use '.strtab' if '.shstrtab' doesn't exist, to support ORC tables on Clang

Linus Torvalds [Sat, 21 Jul 2018 23:46:53 +0000 (16:46 -0700)]
Merge tag 'powerpc-4.18-4' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux

Pull powerpc fixes from Michael Ellerman:
 "Two regression fixes, one for xmon disassembly formatting and the
  other to fix the E500 build.

  Two commits to fix a potential security issue in the VFIO code under
  obscure circumstances.

  And finally a fix to the Power9 idle code to restore SPRG3, which is
  user visible and used for sched_getcpu().

  Thanks to: Alexey Kardashevskiy, David Gibson, Gautham R. Shenoy,
  James Clarke"

* tag 'powerpc-4.18-4' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
  powerpc/powernv: Fix save/restore of SPRG3 on entry/exit from stop (idle)
  powerpc/Makefile: Assemble with -me500 when building for E500
  KVM: PPC: Check if IOMMU page is contained in the pinned physical page
  vfio/spapr: Use IOMMU pageshift rather than pagesize
  powerpc/xmon: Fix disassembly since printf changes

Linus Torvalds [Sat, 21 Jul 2018 23:42:03 +0000 (16:42 -0700)]
Merge tag 'for-4.18-rc5-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux

Pull btrfs fix from David Sterba:
 "A fix of a corruption regarding fsync and clone, under some very
  specific conditions explained in the patch.

  The fix is marked for stable 3.16+ so I'd like to get it merged now
  given the impact"

* tag 'for-4.18-rc5-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux:
  Btrfs: fix file data corruption after cloning a range and fsync

Linus Torvalds [Sat, 21 Jul 2018 22:24:03 +0000 (15:24 -0700)]
mm: make vm_area_alloc() initialize core fields

Like vm_area_dup(), it initializes the anon_vma_chain head, and the
basic mm pointer.

The rest of the fields end up being different for different users,
although the plan is to also initialize the 'vm_ops' field to a dummy
entry.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Linus Torvalds [Sat, 21 Jul 2018 21:48:45 +0000 (14:48 -0700)]
mm: make vm_area_dup() actually copy the old vma data

.. and re-initialize the anon_vma_chain head.

This removes some boiler-plate from the users, and also makes it clear
why it didn't need to use the 'zalloc()' version.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Linus Torvalds [Sat, 21 Jul 2018 20:48:51 +0000 (13:48 -0700)]
mm: use helper functions for allocating and freeing vm_area structs

The vm_area_struct is one of the most fundamental memory management
objects, but the management of it is entirely open-coded everywhere,
ranging from allocation and freeing (using kmem_cache_[z]alloc and
kmem_cache_free) to initializing all the fields.

We want to unify this in order to end up having some unified
initialization of the vmas, and the first step to this is to at least
have basic allocation functions.

Right now those functions are literally just wrappers around the
kmem_cache_*() calls.  This is a purely mechanical conversion:

    # new vma:
    kmem_cache_zalloc(vm_area_cachep, GFP_KERNEL) -> vm_area_alloc()

    # copy old vma
    kmem_cache_alloc(vm_area_cachep, GFP_KERNEL) -> vm_area_dup(old)

    # free vma
    kmem_cache_free(vm_area_cachep, vma) -> vm_area_free(vma)

to the point where the old vma passed in to the vm_area_dup() function
isn't even used yet (because I've left all the old manual initialization
alone).

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Linus Torvalds [Sat, 21 Jul 2018 20:14:17 +0000 (13:14 -0700)]
Merge branch 'akpm' (patches from Andrew)

Merge fixes from Andrew Morton:
 "5 fixes"

* emailed patches from Andrew Morton <akpm@linux-foundation.org>:
  mm: memcg: fix use after free in mem_cgroup_iter()
  mm/huge_memory.c: fix data loss when splitting a file pmd
  fat: fix memory allocation failure handling of match_strdup()
  MAINTAINERS: Peter has moved
  mm/memblock: add missing include <linux/bootmem.h>

Jing Xia [Sat, 21 Jul 2018 00:53:48 +0000 (17:53 -0700)]
mm: memcg: fix use after free in mem_cgroup_iter()

It was reported that a kernel crash happened in mem_cgroup_iter(), which
can be triggered if the legacy cgroup-v1 non-hierarchical mode is used.

Unable to handle kernel paging request at virtual address 6b6b6b6b6b6b8f
......
Call trace:
  mem_cgroup_iter+0x2e0/0x6d4
  shrink_zone+0x8c/0x324
  balance_pgdat+0x450/0x640
  kswapd+0x130/0x4b8
  kthread+0xe8/0xfc
  ret_from_fork+0x10/0x20

  mem_cgroup_iter():
      ......
      if (css_tryget(css))    <-- crash here
              break;
      ......

The crashing reason is that mem_cgroup_iter() uses the memcg object whose
pointer is stored in iter->position, which has been freed before and
filled with POISON_FREE(0x6b).

And the root cause of the use-after-free issue is that
invalidate_reclaim_iterators() fails to reset the value of iter->position
to NULL when the css of the memcg is released in non-hierarchical mode.

Link: http://lkml.kernel.org/r/1531994807-25639-1-git-send-email-jing.xia@unisoc.com
Fixes: 6df38689e0e9 ("mm: memcontrol: fix possible memcg leak due to interrupted reclaim")
Signed-off-by: Jing Xia <jing.xia.mail@gmail.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: <chunyan.zhang@unisoc.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Hugh Dickins [Sat, 21 Jul 2018 00:53:45 +0000 (17:53 -0700)]
mm/huge_memory.c: fix data loss when splitting a file pmd

__split_huge_pmd_locked() must check if the cleared huge pmd was dirty,
and propagate that to PageDirty: otherwise, data may be lost when a huge
tmpfs page is modified then split then reclaimed.

How has this taken so long to be noticed?  Because there was no problem
when the huge page is written by a write system call (shmem_write_end()
calls set_page_dirty()), nor when the page is allocated for a write fault
(fault_dirty_shared_page() calls set_page_dirty()); but when allocated for
a read fault (which MAP_POPULATE simulates), no set_page_dirty().
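
A sketch of the shape of the fix in __split_huge_pmd_locked(),
assuming pmdp_invalidate() returns the old entry (simplified):

    pmd_t old_pmd = pmdp_invalidate(vma, haddr, pmd);

    /* The hardware dirty bit lives in the pmd while the mapping is
     * huge; carry it over to the struct page before it is lost. */
    if (pmd_dirty(old_pmd))
            SetPageDirty(page);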

Link: http://lkml.kernel.org/r/alpine.LSU.2.11.1807111741430.1106@eggly.anvils
Fixes: d21b9e57c74c ("thp: handle file pages in split_huge_pmd()")
Signed-off-by: Hugh Dickins <hughd@google.com>
Reported-by: Ashwin Chaugule <ashwinch@google.com>
Reviewed-by: Yang Shi <yang.shi@linux.alibaba.com>
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: "Huang, Ying" <ying.huang@intel.com>
Cc: <stable@vger.kernel.org> [4.8+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
OGAWA Hirofumi [Sat, 21 Jul 2018 00:53:42 +0000 (17:53 -0700)]
fat: fix memory allocation failure handling of match_strdup()

In parse_options(), if match_strdup() fails, parse_options() leaves
opts->iocharset in an unexpected state (i.e. still pointing to the freed
string), and this can be the cause of a double free.

To fix this, always reset opts->iocharset when freeing it.
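
A sketch of the pattern (field name from the message): clear the
pointer whenever the string is freed, so a later error path cannot
free it again.

    kfree(opts->iocharset);
    opts->iocharset = NULL;    /* no dangling pointer -> no double free */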

Link: http://lkml.kernel.org/r/8736wp9dzc.fsf@mail.parknet.co.jp
Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Reported-by: syzbot+90b8e10515ae88228a92@syzkaller.appspotmail.com
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Peter Senna Tschudin [Sat, 21 Jul 2018 00:53:38 +0000 (17:53 -0700)]
MAINTAINERS: Peter has moved

Update my E-mail address in the MAINTAINERS file.

Link: http://lkml.kernel.org/r/20180710144702.1308-1-peter.senna@gmail.com
Signed-off-by: Peter Senna Tschudin <peter.senna@gmail.com>
Reviewed-by: Sebastian Reichel <sebastian.reichel@collabora.co.uk>
Acked-by: Martyn Welch <martyn.welch@collabora.co.uk>
Cc: David S. Miller <davem@davemloft.net>
Cc: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Martin Donnelly <martin.donnelly@ge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>