platform/kernel/linux-rpi.git
5 years agotests: kvm: Add tests for KVM_CAP_MAX_VCPUS and KVM_CAP_MAX_CPU_ID
Aaron Lewis [Thu, 2 May 2019 18:31:59 +0000 (11:31 -0700)]
tests: kvm: Add tests for KVM_CAP_MAX_VCPUS and KVM_CAP_MAX_CPU_ID

Signed-off-by: Aaron Lewis <aaronlewis@google.com>
Reviewed-by: Peter Shier <pshier@google.com>
Reviewed-by: Marc Orr <marcorr@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agotests: kvm: Add tests to .gitignore
Aaron Lewis [Mon, 6 May 2019 14:19:10 +0000 (07:19 -0700)]
tests: kvm: Add tests to .gitignore

Signed-off-by: Aaron Lewis <aaronlewis@google.com>
Reviewed-by: Peter Shier <pshier@google.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: Introduce KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2
Peter Xu [Wed, 8 May 2019 09:15:47 +0000 (17:15 +0800)]
KVM: Introduce KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2

The previous KVM_CAP_MANUAL_DIRTY_LOG_PROTECT has a problem that blocks
correct usage from userspace.  Obsolete the old one and introduce a new
capability bit for it.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: Fix kvm_clear_dirty_log_protect off-by-(minus-)one
Peter Xu [Wed, 8 May 2019 09:15:46 +0000 (17:15 +0800)]
KVM: Fix kvm_clear_dirty_log_protect off-by-(minus-)one

Just imagine the case where num_pages < BITS_PER_LONG: the loop will be
skipped when it shouldn't be.
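
For instance (a sketch, assuming the loop bound is derived by dividing
num_pages by BITS_PER_LONG):

    n = log->num_pages / BITS_PER_LONG;              /* 32 / 64 == 0, loop never runs */
    n = DIV_ROUND_UP(log->num_pages, BITS_PER_LONG); /* 32 -> 1, pages get processed */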

Signed-off-by: Peter Xu <peterx@redhat.com>
Fixes: 2a31b9db153530df4aa02dac8c32837bf5f47019
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: Fix the bitmap range to copy during clear dirty
Peter Xu [Wed, 8 May 2019 09:15:45 +0000 (17:15 +0800)]
KVM: Fix the bitmap range to copy during clear dirty

kvm_dirty_bitmap_bytes() returns the size of the dirty bitmap of the
memslot rather than the size of the bitmap passed in from the ioctl.
For KVM_CLEAR_DIRTY_LOG we should copy exactly the size of the bitmap
that covers kvm_clear_dirty_log.num_pages.
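
A sketch of the intended sizing, assuming the usual one-bit-per-page bitmap
packed into longs:

    /* copy only the bytes that cover log->num_pages bits ... */
    n = ALIGN(log->num_pages, BITS_PER_LONG) / 8;
    /* ... instead of kvm_dirty_bitmap_bytes(memslot), which is sized for
     * the whole memslot and can overrun the user-supplied buffer. */
    if (copy_from_user(dirty_bitmap_buffer, log->dirty_bitmap, n))
        return -EFAULT;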

Signed-off-by: Peter Xu <peterx@redhat.com>
Cc: stable@vger.kernel.org
Fixes: 2a31b9db153530df4aa02dac8c32837bf5f47019
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: x86: use direct accessors for RIP and RSP
Paolo Bonzini [Tue, 30 Apr 2019 20:07:26 +0000 (22:07 +0200)]
KVM: x86: use direct accessors for RIP and RSP

Use specific inline functions for RIP and RSP instead of
going through kvm_register_read and kvm_register_write,
which are quite a mouthful.  kvm_rsp_read and kvm_rsp_write
did not exist, so add them.
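
Illustrative before/after of the shape of the change (call sites vary):

    /* before */
    unsigned long rsp = kvm_register_read(vcpu, VCPU_REGS_RSP);
    kvm_register_write(vcpu, VCPU_REGS_RIP, val);

    /* after */
    unsigned long rsp = kvm_rsp_read(vcpu);
    kvm_rip_write(vcpu, val);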

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: VMX: Use accessors for GPRs outside of dedicated caching logic
Sean Christopherson [Tue, 30 Apr 2019 17:36:19 +0000 (10:36 -0700)]
KVM: VMX: Use accessors for GPRs outside of dedicated caching logic

... now that there is no overhead when using dedicated accessors.

Opportunistically remove a bogus "FIXME" in handle_rdmsr() regarding
the upper 32 bits of RAX and RDX.  Zeroing the upper 32 bits is
architecturally correct as 32-bit writes in 64-bit mode unconditionally
clear the upper 32 bits.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: x86: Omit caching logic for always-available GPRs
Sean Christopherson [Tue, 30 Apr 2019 17:36:17 +0000 (10:36 -0700)]
KVM: x86: Omit caching logic for always-available GPRs

Except for RSP and RIP, which are held in VMX's VMCS, GPRs are always
treated "available and dirtly" on both VMX and SVM, i.e. are
unconditionally loaded/saved immediately before/after VM-Enter/VM-Exit.

Eliminating the unnecessary caching code reduces the size of KVM by a
non-trivial amount, much of which comes from the most common code paths.
E.g. on x86_64, kvm_emulate_cpuid() is reduced from 342 to 182 bytes and
kvm_emulate_hypercall() from 1362 to 1143, with the total size of KVM
dropping by ~1000 bytes.  With CONFIG_RETPOLINE=y, the numbers are even
more pronounced, e.g.: 353->182, 1418->1172 and well over 2000 bytes.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agokvm, x86: Properly check whether a pfn is an MMIO or not
KarimAllah Ahmed [Thu, 31 Jan 2019 20:24:44 +0000 (21:24 +0100)]
kvm, x86: Properly check whether a pfn is an MMIO or not

The pfn_valid check is not sufficient because it only checks whether a page
has a struct page or not; if "mem=" was passed to the kernel, some valid pages
won't have a struct page.  This means that if a guest is assigned valid memory
that lies after the mem= boundary, it will be passed uncached to the guest no
matter what the guest's caching attributes are for this memory.

Introduce a new function, e820__mapped_raw_any(), which is equivalent to
e820__mapped_any() but uses the original, unmodified e820 table, and use it
to identify real *RAM*.
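
Roughly, for a pfn without a struct page the MMIO check then becomes
(a sketch, not the literal diff):

    if (!pfn_valid(pfn))
        return !e820__mapped_raw_any(pfn_to_hpa(pfn),
                                     pfn_to_hpa(pfn + 1) - 1,
                                     E820_TYPE_RAM);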

Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM/nVMX: Use page_address_valid in a few more locations
KarimAllah Ahmed [Thu, 31 Jan 2019 20:24:43 +0000 (21:24 +0100)]
KVM/nVMX: Use page_address_valid in a few more locations

Use page_address_valid in a few more locations that are already checking for
a page-aligned address that does not cross the maximum physical address.

Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM/nVMX: Use kvm_vcpu_map for accessing the enlightened VMCS
KarimAllah Ahmed [Thu, 31 Jan 2019 20:24:42 +0000 (21:24 +0100)]
KVM/nVMX: Use kvm_vcpu_map for accessing the enlightened VMCS

Use kvm_vcpu_map for accessing the enlightened VMCS since using
kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that has
a "struct page".

Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM/nVMX: Use kvm_vcpu_map for accessing the shadow VMCS
KarimAllah Ahmed [Thu, 31 Jan 2019 20:24:41 +0000 (21:24 +0100)]
KVM/nVMX: Use kvm_vcpu_map for accessing the shadow VMCS

Use kvm_vcpu_map for accessing the shadow VMCS since using
kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that has
a "struct page".

Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM/nSVM: Use the new mapping API for mapping guest memory
KarimAllah Ahmed [Thu, 31 Jan 2019 20:24:40 +0000 (21:24 +0100)]
KVM/nSVM: Use the new mapping API for mapping guest memory

Use the new mapping API for mapping guest memory to avoid depending on
"struct page".

Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM/X86: Use kvm_vcpu_map in emulator_cmpxchg_emulated
KarimAllah Ahmed [Thu, 31 Jan 2019 20:24:39 +0000 (21:24 +0100)]
KVM/X86: Use kvm_vcpu_map in emulator_cmpxchg_emulated

Use kvm_vcpu_map in emulator_cmpxchg_emulated since using
kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that has
a "struct page".

Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM/nVMX: Use kvm_vcpu_map when mapping the posted interrupt descriptor table
KarimAllah Ahmed [Thu, 31 Jan 2019 20:24:38 +0000 (21:24 +0100)]
KVM/nVMX: Use kvm_vcpu_map when mapping the posted interrupt descriptor table

Use kvm_vcpu_map when mapping the posted interrupt descriptor table since
using kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory
that has a "struct page".

One additional semantic change is that the virtual host mapping lifecycle
has changed a bit. It now has the same lifetime of the pinning of the
interrupt descriptor table page on the host side.

Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM/nVMX: Use kvm_vcpu_map when mapping the virtual APIC page
KarimAllah Ahmed [Thu, 31 Jan 2019 20:24:37 +0000 (21:24 +0100)]
KVM/nVMX: Use kvm_vcpu_map when mapping the virtual APIC page

Use kvm_vcpu_map when mapping the virtual APIC page since using
kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that has
a "struct page".

One additional semantic change is that the virtual host mapping lifecycle
has changed a bit. It now has the same lifetime of the pinning of the
virtual APIC page on the host side.

Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM/nVMX: Use kvm_vcpu_map when mapping the L1 MSR bitmap
KarimAllah Ahmed [Thu, 31 Jan 2019 20:24:36 +0000 (21:24 +0100)]
KVM/nVMX: Use kvm_vcpu_map when mapping the L1 MSR bitmap

Use kvm_vcpu_map when mapping the L1 MSR bitmap since using
kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that has
a "struct page".

Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoX86/nVMX: handle_vmptrld: Use kvm_vcpu_map when copying VMCS12 from guest memory
KarimAllah Ahmed [Thu, 31 Jan 2019 20:24:35 +0000 (21:24 +0100)]
X86/nVMX: handle_vmptrld: Use kvm_vcpu_map when copying VMCS12 from guest memory

Use kvm_vcpu_map to map the VMCS12 from guest memory because
kvm_vcpu_gpa_to_page() and kmap() will only work for guest memory that has
a "struct page".

Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: Introduce a new guest mapping API
KarimAllah Ahmed [Thu, 31 Jan 2019 20:24:34 +0000 (21:24 +0100)]
KVM: Introduce a new guest mapping API

In KVM, especially for nested guests, there is a dominant pattern of:

=> map guest memory -> do_something -> unmap guest memory

In addition to all the unnecessary noise in the code due to boilerplate,
most of the time the mapping function does not properly handle memory that
is not backed by "struct page".  This new guest mapping API encapsulates
most of this boilerplate code and also handles guest memory that is not
backed by "struct page".

The current implementation of this API is using memremap for memory that is
not backed by a "struct page" which would lead to a huge slow-down if it
was used for high-frequency mapping operations. The API does not have any
effect on current setups where guest memory is backed by a "struct page".
Further patches are going to also introduce a pfn-cache which would
significantly improve the performance of the memremap case.
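
A minimal sketch of the usage pattern, assuming the new helpers take a gfn
plus a struct kvm_host_map and a dirty flag on unmap:

    struct kvm_host_map map;

    if (kvm_vcpu_map(vcpu, gpa_to_gfn(gpa), &map))
        return;                        /* mapping failed */

    /* ... access the guest page through map.hva ... */

    kvm_vcpu_unmap(vcpu, &map, true);  /* true: mark the page dirty */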

Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoX86/KVM: Handle PFNs outside of kernel reach when touching GPTEs
Filippo Sironi [Thu, 31 Jan 2019 20:24:33 +0000 (21:24 +0100)]
X86/KVM: Handle PFNs outside of kernel reach when touching GPTEs

cmpxchg_gpte() calls get_user_pages_fast() to retrieve the number of
pages and the respective struct page to map in the kernel virtual
address space.
This doesn't work if get_user_pages_fast() is invoked with a userspace
virtual address that's backed by PFNs outside of kernel reach (e.g., when
limiting the kernel memory with mem= in the command line and using
/dev/mem to map memory).

If get_user_pages_fast() fails, look up the VMA that backs the userspace
virtual address, compute the PFN and the physical address, and map it in
the kernel virtual address space with memremap().

Signed-off-by: Filippo Sironi <sironi@amazon.de>
Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoX86/nVMX: Update the PML table without mapping and unmapping the page
KarimAllah Ahmed [Thu, 31 Jan 2019 20:24:32 +0000 (21:24 +0100)]
X86/nVMX: Update the PML table without mapping and unmapping the page

Update the PML table without mapping and unmapping the page. This also
avoids using kvm_vcpu_gpa_to_page(..) which assumes that there is a "struct
page" for guest memory.

As a side-effect of using kvm_write_guest_page the page is also properly
marked as dirty.

Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoX86/nVMX: handle_vmon: Read 4 bytes from guest memory
KarimAllah Ahmed [Thu, 31 Jan 2019 20:24:31 +0000 (21:24 +0100)]
X86/nVMX: handle_vmon: Read 4 bytes from guest memory

Read the data directly from guest memory instead of the map->read->unmap
sequence. This also avoids using kvm_vcpu_gpa_to_page() and kmap() which
assumes that there is a "struct page" for guest memory.

Suggested-by: Jim Mattson <jmattson@google.com>
Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
Reviewed-by: Jim Mattson <jmattson@google.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agox86/kvm: Implement HWCR support
Borislav Petkov [Thu, 18 Apr 2019 16:32:50 +0000 (18:32 +0200)]
x86/kvm: Implement HWCR support

The hardware configuration register has some useful bits which can be
used by guests. Implement McStatusWrEn which can be used by guests when
injecting MCEs with the in-kernel mce-inject module.

For that, we need to set bit 18 - McStatusWrEn - first, before writing
the MCi_STATUS registers (otherwise we #GP).

Add the required machinery to do so.
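
A rough guest-side sketch of the sequence this enables (bank and status are
placeholders for whatever the injector wants to write):

    u64 hwcr;

    rdmsrl(MSR_K7_HWCR, hwcr);
    wrmsrl(MSR_K7_HWCR, hwcr | BIT_ULL(18));       /* set McStatusWrEn */
    wrmsrl(MSR_IA32_MCx_STATUS(bank), status);     /* no #GP with the bit set */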

Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: Jim Mattson <jmattson@google.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: KVM <kvm@vger.kernel.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Yazen Ghannam <Yazen.Ghannam@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: VMX: Include architectural defs header in capabilities.h
Sean Christopherson [Thu, 18 Apr 2019 15:07:40 +0000 (08:07 -0700)]
KVM: VMX: Include architectural defs header in capabilities.h

The capabilities header depends on asm/vmx.h but doesn't explicitly
include said file.  This currently doesn't cause problems as all users
of capabilities.h first include asm/vmx.h, but the issue often results in
build errors if someone starts moving things around the VMX files.

Fixes: 3077c1910882 ("KVM: VMX: Move capabilities structs and helpers to dedicated file")
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: vmx: clean up some debug output
Dan Carpenter [Wed, 24 Apr 2019 10:15:08 +0000 (13:15 +0300)]
KVM: vmx: clean up some debug output

Smatch complains about this:

    arch/x86/kvm/vmx/vmx.c:5730 dump_vmcs()
    warn: KERN_* level not at start of string

The code should be using pr_cont() instead of pr_err().

Fixes: 9d609649bb29 ("KVM: vmx: print more APICv fields in dump_vmcs")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agokvm_main: fix some comments
Jiang Biao [Tue, 23 Apr 2019 11:40:30 +0000 (19:40 +0800)]
kvm_main: fix some comments

is_dirty has been renamed to flush, but the comment for it is
outdated.  The description of the @flush parameter for
kvm_clear_dirty_log_protect() is also missing; add it in this patch
as well.

Signed-off-by: Jiang Biao <benbjiang@tencent.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: fix KVM_CLEAR_DIRTY_LOG for memory slots of unaligned size
Paolo Bonzini [Wed, 17 Apr 2019 13:28:44 +0000 (15:28 +0200)]
KVM: fix KVM_CLEAR_DIRTY_LOG for memory slots of unaligned size

If a memory slot's size is not a multiple of 64 pages (256K), then
the KVM_CLEAR_DIRTY_LOG API is unusable: clearing the final 64 pages
either requires the requested page range to go beyond memslot->npages,
or requires log->num_pages to be unaligned, and kvm_clear_dirty_log_protect
requires log->num_pages to be both in range and aligned.

To allow this case, allow log->num_pages not to be a multiple of 64 if
it ends exactly on the last page of the slot.
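
In other words, the validity check is relaxed roughly like this (a sketch,
not the literal diff):

    if (log->num_pages > memslot->npages - log->first_page ||
        ((log->num_pages & 63) &&
         log->first_page + log->num_pages != memslot->npages))
        return -EINVAL;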

Reported-by: Peter Xu <peterx@redhat.com>
Fixes: 98938aa8edd6 ("KVM: validate userspace input in kvm_clear_dirty_log_protect()", 2019-01-02)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: VMX: Skip delta_tsc shift-and-divide if the dividend is zero
Sean Christopherson [Tue, 16 Apr 2019 20:32:48 +0000 (13:32 -0700)]
KVM: VMX: Skip delta_tsc shift-and-divide if the dividend is zero

Ten percent of nothin' is... let me do the math here.  Nothin' into
nothin', carry the nothin'...

Cc: Wanpeng Li <wanpengli@tencent.com>
Reviewed-by: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: lapic: Check for a pending timer intr prior to start_hv_timer()
Sean Christopherson [Tue, 16 Apr 2019 20:32:47 +0000 (13:32 -0700)]
KVM: lapic: Check for a pending timer intr prior to start_hv_timer()

Checking for a pending non-periodic interrupt in start_hv_timer() leads
to restart_apic_timer() making an unnecessary call to start_sw_timer()
due to start_hv_timer() returning false.

Alternatively, start_hv_timer() could return %true when there is a
pending non-periodic interrupt, but that approach is less intuitive,
i.e. would require a beefy comment to explain an otherwise simple check.

Cc: Liran Alon <liran.alon@oracle.com>
Cc: Wanpeng Li <wanpengli@tencent.com>
Suggested-by: Liran Alon <liran.alon@oracle.com>
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: lapic: Refactor ->set_hv_timer to use an explicit expired param
Sean Christopherson [Tue, 16 Apr 2019 20:32:46 +0000 (13:32 -0700)]
KVM: lapic: Refactor ->set_hv_timer to use an explicit expired param

Refactor kvm_x86_ops->set_hv_timer to use an explicit parameter for
stating that the timer has expired.  Overloading the return value is
unnecessarily clever, e.g. can lead to confusion over the proper return
value from start_hv_timer() when r==1.

Cc: Wanpeng Li <wanpengli@tencent.com>
Cc: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: lapic: Explicitly cancel the hv timer if it's pre-expired
Sean Christopherson [Tue, 16 Apr 2019 20:32:45 +0000 (13:32 -0700)]
KVM: lapic: Explicitly cancel the hv timer if it's pre-expired

Explicitly call cancel_hv_timer() instead of returning %false to coerce
restart_apic_timer() into canceling it by way of start_sw_timer().

Functionally, the existing code is correct in the sense that it doesn't
do anything visibly wrong, e.g. generate spurious interrupts or miss
an interrupt.  But it's extremely confusing and inefficient, e.g. there
are multiple extraneous calls to apic_timer_expired() that effectively
get dropped due to @timer_pending being %true.

Cc: Wanpeng Li <wanpengli@tencent.com>
Cc: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: lapic: Busy wait for timer to expire when using hv_timer
Sean Christopherson [Tue, 16 Apr 2019 20:32:44 +0000 (13:32 -0700)]
KVM: lapic: Busy wait for timer to expire when using hv_timer

...now that VMX's preemption timer, i.e. the hv_timer, also adjusts its
programmed time based on lapic_timer_advance_ns.  Without the delay, a
guest can see a timer interrupt arrive before the requested time when
KVM is using the hv_timer to emulate the guest's interrupt.

Fixes: c5ce8235cffa0 ("KVM: VMX: Optimize tscdeadline timer latency")
Cc: <stable@vger.kernel.org>
Cc: Wanpeng Li <wanpengli@tencent.com>
Reviewed-by: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: VMX: Nop emulation of MSR_IA32_POWER_CTL
Liran Alon [Mon, 15 Apr 2019 15:45:26 +0000 (18:45 +0300)]
KVM: VMX: Nop emulation of MSR_IA32_POWER_CTL

Since commits 668fffa3f838 ("kvm: better MWAIT emulation for guests")
and 4d5422cea3b6 ("KVM: X86: Provide a capability to disable MWAIT intercepts"),
KVM was modified to allow an admin to configure certain guests to execute
MONITOR/MWAIT inside the guest without being intercepted by the host.

This is useful in case admin wishes to allocate a dedicated logical
processor for each vCPU thread. Thus, making it safe for guest to
completely control the power-state of the logical processor.

The ability to use this new KVM capability was introduced to QEMU by
commits 6f131f13e68d ("kvm: support -overcommit cpu-pm=on|off") and
2266d4431132 ("i386/cpu: make -cpu host support monitor/mwait").

However, exposing MONITOR/MWAIT to a Linux guest may cause its intel_idle
kernel module to execute c1e_promotion_disable(), which will attempt to
RDMSR/WRMSR from/to MSR_IA32_POWER_CTL to manipulate the "C1E Enable"
bit.  This behaviour was introduced by commit
32e9518005c8 ("intel_idle: export both C1 and C1E").

Because KVM doesn't emulate this MSR, running KVM with ignore_msrs=0
will cause the above guest behaviour to raise a #GP, which will cause
the guest to kernel panic.

Therefore, add support for nop emulation of MSR_IA32_POWER_CTL to
avoid #GP in guest in this scenario.

Future commits can optimise emulation further by reflecting guest
MSR changes to host MSR to provide guest with the ability to
fine-tune the dedicated logical processor power-state.

Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: x86: Add support of clear Trace_ToPA_PMI status
Luwei Kang [Tue, 19 Feb 2019 00:26:08 +0000 (19:26 -0500)]
KVM: x86: Add support of clear Trace_ToPA_PMI status

Let guests clear the Intel PT ToPA PMI status (bit 55 of
MSR_CORE_PERF_GLOBAL_OVF_CTRL).

Signed-off-by: Luwei Kang <luwei.kang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: x86: Inject PMI for KVM guest
Luwei Kang [Tue, 19 Feb 2019 00:26:07 +0000 (19:26 -0500)]
KVM: x86: Inject PMI for KVM guest

Inject a PMI for the KVM guest when Intel PT is working
in Host-Guest mode and the guest's ToPA entry memory buffer
is completely filled.

Signed-off-by: Luwei Kang <luwei.kang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoRevert "KVM: doc: Document the life cycle of a VM and its resources"
Radim Krčmář [Mon, 29 Apr 2019 13:25:35 +0000 (15:25 +0200)]
Revert "KVM: doc: Document the life cycle of a VM and its resources"

This reverts commit 919f6cd8bb2fe7151f8aecebc3b3d1ca2567396e.

The patch was applied twice.
The first commit is eca6be566d47029f945a5f8e1c94d374e31df2ca.

Reported-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoMerge tag 'kvm-s390-next-5.2-1' of git://git.kernel.org/pub/scm/linux/kernel/git...
Paolo Bonzini [Tue, 30 Apr 2019 19:29:14 +0000 (21:29 +0200)]
Merge tag 'kvm-s390-next-5.2-1' of git://git./linux/kernel/git/kvms390/linux into HEAD

KVM: s390: Features and fixes for 5.2

- VSIE crypto fixes
- new guest features for gen15
- disable halt polling for nested virtualization with overcommit

5 years agoKVM: s390: vsie: Return correct values for Invalid CRYCB format
Pierre Morel [Fri, 26 Apr 2019 09:00:01 +0000 (11:00 +0200)]
KVM: s390: vsie: Return correct values for Invalid CRYCB format

Let's use the correct validity number.

Fixes: 56019f9aca22 ("KVM: s390: vsie: Allow CRYCB FORMAT-2")

Signed-off-by: Pierre Morel <pmorel@linux.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-Id: <1556269201-22918-1-git-send-email-pmorel@linux.ibm.com>
Acked-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
5 years agoKVM: s390: vsie: Do not shadow CRYCB when no AP and no keys
Pierre Morel [Fri, 26 Apr 2019 08:56:50 +0000 (10:56 +0200)]
KVM: s390: vsie: Do not shadow CRYCB when no AP and no keys

When the guest has neither AP instructions nor key management,
we should return without shadowing the CRYCB.

We did not check this correctly in the past.

Fixes: b10bd9a256ae ("s390: vsie: Use effective CRYCBD.31 to check CRYCBD validity")
Fixes: 6ee74098201b ("KVM: s390: vsie: allow CRYCB FORMAT-0")

Signed-off-by: Pierre Morel <pmorel@linux.ibm.com>
Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-Id: <1556269010-22258-1-git-send-email-pmorel@linux.ibm.com>
Acked-by: David Hildenbrand <david@redhat.com>
Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
5 years agoKVM: s390: provide kvm_arch_no_poll function
Christian Borntraeger [Tue, 5 Mar 2019 10:30:02 +0000 (05:30 -0500)]
KVM: s390: provide kvm_arch_no_poll function

We do track the current steal time of the host CPUs. Let us use
this value to disable halt polling if the steal time goes beyond
a configured value.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
5 years agoKVM: polling: add architecture backend to disable polling
Christian Borntraeger [Tue, 5 Mar 2019 10:30:01 +0000 (05:30 -0500)]
KVM: polling: add architecture backend to disable polling

There are cases where halt polling is unwanted.  For example, when running
KVM on an overcommitted LPAR we would rather give the CPU back to
neighbouring LPARs instead of polling.  Let us provide a callback that
allows architectures to disable polling.
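
The shape of the hook (a sketch; common code provides a default that
architectures can override):

    bool __weak kvm_arch_no_poll(struct kvm_vcpu *vcpu)
    {
        return false;
    }

    /* and in the halt path, roughly: */
    if (vcpu->halt_poll_ns && !kvm_arch_no_poll(vcpu)) {
        /* ... do halt polling ... */
    }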

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
5 years agoKVM: s390: enable MSA9 keywrapping functions depending on cpu model
Christian Borntraeger [Wed, 3 Apr 2019 07:00:35 +0000 (03:00 -0400)]
KVM: s390: enable MSA9 keywrapping functions depending on cpu model

Instead of adding a new machine option to disable/enable the keywrapping
options of pckmo (like for AES and DEA) we can now use the CPU model to
decide. As ECC is also wrapped with the AES key we need that to be
enabled.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
5 years agoKVM: s390: add deflate conversion facility to cpu model
Christian Borntraeger [Fri, 28 Dec 2018 09:46:04 +0000 (10:46 +0100)]
KVM: s390: add deflate conversion facility to cpu model

This enables stfle.151 and adds the subfunctions for DFLTCC. Bit 151 is
added to the list of facilities that will be enabled when there is no
cpu model involved as DFLTCC requires no additional handling from
userspace, e.g. for migration.

Please note that a cpu model enabled user space can and will have the
final decision on the facility bits for a guest.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Collin Walling <walling@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
5 years agoKVM: s390: add enhanced sort facility to cpu model
Christian Borntraeger [Fri, 28 Dec 2018 09:59:06 +0000 (10:59 +0100)]
KVM: s390: add enhanced sort facility to cpu model

This enables stfle.150 and adds the subfunctions for SORTL. Bit 150 is
added to the list of facilities that will be enabled when there is no
cpu model involved as sortl requires no additional handling from
userspace, e.g. for migration.

Please note that a cpu model enabled user space can and will have the
final decision on the facility bits for a guest.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Collin Walling <walling@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
5 years agoKVM: s390: provide query function for instructions returning 32 byte
Christian Borntraeger [Wed, 20 Feb 2019 08:04:07 +0000 (03:04 -0500)]
KVM: s390: provide query function for instructions returning 32 byte

Some of the new features have a 32-byte response for the query function.
Provide a new wrapper similar to __cpacf_query.  We might want to factor
this out if other users come up, but as of today there are none, so let us
keep the function within KVM.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Collin Walling <walling@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
5 years agoKVM: s390: add MSA9 to cpumodel
Christian Borntraeger [Fri, 28 Dec 2018 08:33:35 +0000 (09:33 +0100)]
KVM: s390: add MSA9 to cpumodel

This enables stfle.155 and adds the subfunctions for KDSA. Bit 155 is
added to the list of facilities that will be enabled when there is no
cpu model involved as MSA9 requires no additional handling from
userspace, e.g. for migration.

Please note that a cpu model enabled user space can and will have the
final decision on the facility bits for a guest.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Collin Walling <walling@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
5 years agoKVM: s390: add vector BCD enhancements facility to cpumodel
Christian Borntraeger [Fri, 28 Dec 2018 08:45:58 +0000 (09:45 +0100)]
KVM: s390: add vector BCD enhancements facility to cpumodel

If vector support is enabled, the vector BCD enhancements facility
might also be enabled.
We can directly forward this facility to the guest if available
and VX is requested by user space.

Please note that user space can and will have the final decision
on the facility bits for a guest.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Collin Walling <walling@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
5 years agoKVM: s390: add vector enhancements facility 2 to cpumodel
Christian Borntraeger [Fri, 28 Dec 2018 08:43:37 +0000 (09:43 +0100)]
KVM: s390: add vector enhancements facility 2 to cpumodel

If vector support is enabled, the vector enhancements facility 2
might also be enabled.
We can directly forward this facility to the guest if available
and VX is requested by user space.

Please note that user space can and will have the final decision
on the facility bits for a guest.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Collin Walling <walling@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
5 years agoKVM: s390: Fix potential spectre warnings
Eric Farman [Wed, 17 Apr 2019 00:54:14 +0000 (02:54 +0200)]
KVM: s390: Fix potential spectre warnings

Fix some warnings from smatch:

arch/s390/kvm/interrupt.c:2310 get_io_adapter() warn: potential spectre issue 'kvm->arch.adapters' [r] (local cap)
arch/s390/kvm/interrupt.c:2341 register_io_adapter() warn: potential spectre issue 'dev->kvm->arch.adapters' [w]

Signed-off-by: Eric Farman <farman@linux.ibm.com>
Message-Id: <20190417005414.47801-1-farman@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
5 years agokvm: move KVM_CAP_NR_MEMSLOTS to common code
Paolo Bonzini [Thu, 28 Mar 2019 16:24:03 +0000 (17:24 +0100)]
kvm: move KVM_CAP_NR_MEMSLOTS to common code

All architectures except MIPS were defining it in the same way,
and memory slots are handled entirely by common code so there
is no point in keeping the definition per-architecture.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: x86: Inject #GP if guest attempts to set unsupported EFER bits
Sean Christopherson [Tue, 2 Apr 2019 15:19:16 +0000 (08:19 -0700)]
KVM: x86: Inject #GP if guest attempts to set unsupported EFER bits

EFER.LME and EFER.NX are considered reserved if their respective feature
bits are not advertised to the guest.
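
Roughly, the added checks look like this (a sketch):

    if ((efer & EFER_NX) && !guest_cpuid_has(vcpu, X86_FEATURE_NX))
        return false;

    if ((efer & EFER_LME) && !guest_cpuid_has(vcpu, X86_FEATURE_LM))
        return false;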

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: x86: Skip EFER vs. guest CPUID checks for host-initiated writes
Sean Christopherson [Tue, 2 Apr 2019 15:19:15 +0000 (08:19 -0700)]
KVM: x86: Skip EFER vs. guest CPUID checks for host-initiated writes

KVM allows userspace to violate consistency checks related to the
guest's CPUID model to some degree.  Generally speaking, userspace has
carte blanche when it comes to guest state so long as jamming invalid
state won't negatively affect the host.

Currently this seems to be a non-issue as most of the interesting
EFER checks are missing, e.g. NX and LME, but those will be added
shortly.  Proactively exempt userspace from the CPUID checks so as not
to break userspace.

Note, the efer_reserved_bits check still applies to userspace writes as
that mask reflects the host's capabilities, e.g. KVM shouldn't allow a
guest to run with NX=1 if it has been disabled in the host.

Fixes: d80174745ba39 ("KVM: SVM: Only allow setting of EFER_SVME when CPUID SVM is set")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: nVMX: Return -EINVAL when signaling failure in VM-Entry helpers
Sean Christopherson [Thu, 11 Apr 2019 19:18:09 +0000 (12:18 -0700)]
KVM: nVMX: Return -EINVAL when signaling failure in VM-Entry helpers

Most, but not all, helpers that are related to emulating consistency
checks for nested VM-Entry return -EINVAL when a check fails.  Convert
the holdouts to have consistency throughout and to make it clear that
the functions are signaling pass/fail as opposed to "resume guest" vs.
"exit to userspace".

Opportunistically fix bad indentation in nested_vmx_check_guest_state().

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: nVMX: Return -EINVAL when signaling failure in pre-VM-Entry helpers
Paolo Bonzini [Fri, 12 Apr 2019 08:19:57 +0000 (10:19 +0200)]
KVM: nVMX: Return -EINVAL when signaling failure in pre-VM-Entry helpers

Convert all top-level nested VM-Enter consistency check functions to
return 0/-EINVAL instead of failure codes, since now they can only
ever return one failure code.

This also does not give the false impression that failure information is
always consumed and/or relevant, e.g. vmx_set_nested_state() only
cares whether or not the checks were successful.

nested_check_host_control_regs() can also now be inlined into its caller,
nested_vmx_check_host_state, since the two have effectively become the
same function.

Based on a patch by Sean Christopherson.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: nVMX: Rename and split top-level consistency checks to match SDM
Sean Christopherson [Thu, 11 Apr 2019 19:18:06 +0000 (12:18 -0700)]
KVM: nVMX: Rename and split top-level consistency checks to match SDM

Rename the top-level consistency check functions to (loosely) align with
the SDM.  Historically, KVM has used the terms "prereq" and "postreq" to
differentiate between consistency checks that lead to VM-Fail and those
that lead to VM-Exit.  The terms are vague and potentially misleading,
e.g. "postreq" might be interpreted as occurring after VM-Entry.

Note, while the SDM lumps controls and host state into a single section,
"Checks on VMX Controls and Host-State Area", split them into separate
top-level functions as the two categories of checks result in different
VM instruction errors.  This split will allow for additional cleanup.

Note #2, "vmentry" is intentionally dropped from the new function names
to avoid confusion with nested_check_vm_entry_controls(), and to keep
the length of the functions names somewhat manageable.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: nVMX: Move guest non-reg state checks to VM-Exit path
Sean Christopherson [Thu, 11 Apr 2019 19:18:05 +0000 (12:18 -0700)]
KVM: nVMX: Move guest non-reg state checks to VM-Exit path

Per Intel's SDM, volume 3, section Checking and Loading Guest State:

  Because the checking and the loading occur concurrently, a failure may
  be discovered only after some state has been loaded. For this reason,
  the logical processor responds to such failures by loading state from
  the host-state area, as it would for a VM exit.

In other words, a failed non-register state consistency check results in
a VM-Exit, not VM-Fail.  Moving the non-reg state checks also paves the
way for renaming nested_vmx_check_vmentry_postreqs() to align with the
SDM, i.e. nested_vmx_check_vmentry_guest_state().

Fixes: 26539bd0e446a ("KVM: nVMX: check vmcs12 for valid activity state")
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agokvm: nVMX: Check "load IA32_PAT" VM-entry control on vmentry
Krish Sadhukhan [Mon, 8 Apr 2019 21:35:12 +0000 (17:35 -0400)]
kvm: nVMX: Check "load IA32_PAT" VM-entry control on vmentry

According to section "Checking and Loading Guest State" in Intel SDM vol
3C, the following check is performed on vmentry:

    If the "load IA32_PAT" VM-entry control is 1, the value of the field
    for the IA32_PAT MSR must be one that could be written by WRMSR
    without fault at CPL 0. Specifically, each of the 8 bytes in the
    field must have one of the values 0 (UC), 1 (WC), 4 (WT), 5 (WP),
    6 (WB), or 7 (UC-).

Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Reviewed-by: Karl Heubaum <karl.heubaum@oracle.com>
Suggested-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agokvm: nVMX: Check "load IA32_PAT" VM-exit control on vmentry
Krish Sadhukhan [Mon, 8 Apr 2019 21:35:11 +0000 (17:35 -0400)]
kvm: nVMX: Check "load IA32_PAT" VM-exit control on vmentry

According to section "Checks on Host Control Registers and MSRs" in Intel
SDM vol 3C, the following check is performed on vmentry:

    If the "load IA32_PAT" VM-exit control is 1, the value of the field
    for the IA32_PAT MSR must be one that could be written by WRMSR
    without fault at CPL 0. Specifically, each of the 8 bytes in the
    field must have one of the values 0 (UC), 1 (WC), 4 (WT), 5 (WP),
    6 (WB), or 7 (UC-).

Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Reviewed-by: Karl Heubaum <karl.heubaum@oracle.com>
Suggested-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: x86: optimize check for valid PAT value
Paolo Bonzini [Wed, 10 Apr 2019 09:41:40 +0000 (11:41 +0200)]
KVM: x86: optimize check for valid PAT value

This check will soon be done on every nested vmentry and vmexit,
"parallelize" it using bitwise operations.

Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: x86: clear VM_EXIT_SAVE_IA32_PAT
Paolo Bonzini [Wed, 10 Apr 2019 09:38:30 +0000 (11:38 +0200)]
KVM: x86: clear VM_EXIT_SAVE_IA32_PAT

This is not needed, PAT writes always take an MSR vmexit.

Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: vmx: print more APICv fields in dump_vmcs
Paolo Bonzini [Mon, 15 Apr 2019 13:14:32 +0000 (15:14 +0200)]
KVM: vmx: print more APICv fields in dump_vmcs

The SVI, RVI, virtual-APIC page address and APIC-access page address fields
were left out of dump_vmcs.  Add them.

KERN_CONT technically isn't SMP safe, but it's okay to use it here since
the whole of dump_vmcs() is a single huge multi-line piece of output
that isn't SMP-safe.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: x86: avoid misreporting level-triggered irqs as edge-triggered in tracing
Vitaly Kuznetsov [Wed, 27 Mar 2019 14:12:20 +0000 (15:12 +0100)]
KVM: x86: avoid misreporting level-triggered irqs as edge-triggered in tracing

In the __apic_accept_irq() interface, trig_mode is an int, and on some code
paths it is actually set to values above the u8 range:

kvm_apic_set_irq() extracts it from 'struct kvm_lapic_irq' where trig_mode
is u16. This is done on purpose as e.g. kvm_set_msi_irq() sets it to
(1 << 15) & e->msi.data

kvm_apic_local_deliver sets it to reg & (1 << 15).

Fix the immediate issue by making 'tm' into u16. We may also want to adjust
__apic_accept_irq() interface and use proper sizes for vector, level,
trig_mode but this is not urgent.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: fix spectrev1 gadgets
Paolo Bonzini [Thu, 11 Apr 2019 09:16:47 +0000 (11:16 +0200)]
KVM: fix spectrev1 gadgets

These were found with smatch, and then generalized when applicable.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: x86: fix warning Using plain integer as NULL pointer
Hariprasad Kelam [Sat, 6 Apr 2019 09:36:58 +0000 (15:06 +0530)]
KVM: x86: fix warning Using plain integer as NULL pointer

Change the argument passed from 0 to NULL, which resolves the sparse warning below:

arch/x86/kvm/x86.c:3096:61: warning: Using plain integer as NULL pointer

Signed-off-by: Hariprasad Kelam <hariprasad.kelam@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoselftests: kvm: add a selftest for SMM
Vitaly Kuznetsov [Wed, 10 Apr 2019 09:38:33 +0000 (11:38 +0200)]
selftests: kvm: add a selftest for SMM

Add a simple test for SMM, based on VMX.  The test implements its own
sync between the guest and the host, as using our ucall library seems to
be too cumbersome: the SMI handler runs in real-address mode.

This patch also fixes KVM_SET_NESTED_STATE to happen after
KVM_SET_VCPU_EVENTS, in fact it places it last.  This is because
KVM needs to know whether the processor is in SMM or not.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoselftests: kvm: fix for compilers that do not support -no-pie
Paolo Bonzini [Thu, 11 Apr 2019 13:51:19 +0000 (15:51 +0200)]
selftests: kvm: fix for compilers that do not support -no-pie

-no-pie was added to GCC at the same time as their configuration option
--enable-default-pie.  Compilers that were built before do not have
-no-pie, but they also do not need it.  Detect the option at build
time.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoselftests: kvm/evmcs_test: complete I/O before migrating guest state
Paolo Bonzini [Thu, 11 Apr 2019 13:57:14 +0000 (15:57 +0200)]
selftests: kvm/evmcs_test: complete I/O before migrating guest state

Starting state migration after an IO exit without first completing IO
may result in test failures.  We already have two tests that need this
(this patch in fact fixes evmcs_test, similar to what was fixed for
state_test in commit 0f73bbc851ed, "KVM: selftests: complete IO before
migrating guest state", 2019-03-13) and a third is coming.  So, move the
code to vcpu_save_state, and while at it do not access register state
until after I/O is complete.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: x86: Always use 32-bit SMRAM save state for 32-bit kernels
Sean Christopherson [Tue, 2 Apr 2019 15:10:48 +0000 (08:10 -0700)]
KVM: x86: Always use 32-bit SMRAM save state for 32-bit kernels

Invoking the 64-bit variation on a 32-bit kernel will crash the guest,
trigger a WARN, and/or lead to a buffer overrun in the host, e.g.
rsm_load_state_64() writes r8-r15 unconditionally, but enum kvm_reg and
thus x86_emulate_ctxt._regs only define r8-r15 for CONFIG_X86_64.

KVM allows userspace to report long mode support via CPUID, even though
the guest is all but guaranteed to crash if it actually tries to enable
long mode.  But, a pure 32-bit guest that is ignorant of long mode will
happily plod along.

SMM complicates things as 64-bit CPUs use a different SMRAM save state
area.  KVM handles this correctly for 64-bit kernels, e.g. uses the
legacy save state map if userspace has hid long mode from the guest,
but doesn't fare well when userspace reports long mode support on a
32-bit host kernel (32-bit KVM doesn't support 64-bit guests).

Since the alternative is to crash the guest, e.g. by not loading state
or explicitly requesting shutdown, unconditionally use the legacy SMRAM
save state map for 32-bit KVM.  If a guest has managed to get far enough
to handle SMIs when running under a weird/buggy userspace hypervisor,
then don't deliberately crash the guest since there are no downsides
(from KVM's perspective) to allowing it to continue running.

Fixes: 660a5d517aaab ("KVM: x86: save/load state on SMM switch")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: x86: Don't clear EFER during SMM transitions for 32-bit vCPU
Sean Christopherson [Tue, 2 Apr 2019 15:10:47 +0000 (08:10 -0700)]
KVM: x86: Don't clear EFER during SMM transitions for 32-bit vCPU

Neither AMD nor Intel CPUs have an EFER field in the legacy SMRAM save
state area, i.e. don't save/restore EFER across SMM transitions.  KVM
somewhat models this, e.g. doesn't clear EFER on entry to SMM if the
guest doesn't support long mode.  But during RSM, KVM unconditionally
clears EFER so that it can get back to pure 32-bit mode in order to
start loading CRs with their actual non-SMM values.

Clear EFER only when it will be written when loading the non-SMM state
so as to preserve bits that can theoretically be set on 32-bit vCPUs,
e.g. KVM always emulates EFER_SCE.

And because CR4.PAE is cleared only to play nice with EFER, wrap that
code in the long mode check as well.  Note, this may result in a
compiler warning about cr4 being consumed uninitialized.  Re-read CR4
even though it's technically unnecessary, as doing so allows for more
readable code and RSM emulation is not a performance critical path.

Fixes: 660a5d517aaab ("KVM: x86: save/load state on SMM switch")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: x86: clear SMM flags before loading state while leaving SMM
Sean Christopherson [Tue, 2 Apr 2019 15:03:11 +0000 (08:03 -0700)]
KVM: x86: clear SMM flags before loading state while leaving SMM

RSM emulation is currently broken on VMX when the interrupted guest has
CR4.VMXE=1.  Stop dancing around the issue of HF_SMM_MASK being set when
loading SMSTATE into architectural state, e.g. by toggling it for
problematic flows, and simply clear HF_SMM_MASK prior to loading
architectural state (from SMRAM save state area).

Reported-by: Jon Doron <arilou@gmail.com>
Cc: Jim Mattson <jmattson@google.com>
Cc: Liran Alon <liran.alon@oracle.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Fixes: 5bea5123cbf0 ("KVM: VMX: check nested state and CR4.VMXE against SMM")
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Tested-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: x86: Open code kvm_set_hflags
Sean Christopherson [Tue, 2 Apr 2019 15:03:10 +0000 (08:03 -0700)]
KVM: x86: Open code kvm_set_hflags

Prepare for clearing HF_SMM_MASK prior to loading state from the SMRAM
save state map, i.e. kvm_smm_changed() needs to be called after state
has been loaded and so cannot be done automatically when setting
hflags from RSM.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: x86: Load SMRAM in a single shot when leaving SMM
Sean Christopherson [Tue, 2 Apr 2019 15:03:09 +0000 (08:03 -0700)]
KVM: x86: Load SMRAM in a single shot when leaving SMM

RSM emulation is currently broken on VMX when the interrupted guest has
CR4.VMXE=1.  Rather than dance around the issue of HF_SMM_MASK being set
when loading SMSTATE into architectural state, ideally RSM emulation
itself would be reworked to clear HF_SMM_MASK prior to loading non-SMM
architectural state.

Ostensibly, the only motivation for having HF_SMM_MASK set throughout
the loading of state from the SMRAM save state area is so that the
memory accesses from GET_SMSTATE() are tagged with role.smm.  Load
all of the SMRAM save state area from guest memory at the beginning of
RSM emulation, and load state from the buffer instead of reading guest
memory one-by-one.

This paves the way for clearing HF_SMM_MASK prior to loading state,
and also aligns RSM with the enter_smm() behavior, which fills a
buffer and writes SMRAM save state in a single go.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: nVMX: Expose RDPMC-exiting only when guest supports PMU
Liran Alon [Mon, 25 Mar 2019 19:09:17 +0000 (21:09 +0200)]
KVM: nVMX: Expose RDPMC-exiting only when guest supports PMU

Issue was discovered when running kvm-unit-tests on KVM running as L1 on
top of Hyper-V.

When the vmx_instruction_intercept unit test attempts to run RDPMC to test
RDPMC-exiting, it is intercepted by L1 KVM, whose EXIT_REASON_RDPMC
handler raises #GP because the vCPU exposed by Hyper-V doesn't support PMU,
instead of the exit being reflected to L1 with EXIT_REASON_RDPMC as the
unit test expects.

The reason the vmx_instruction_intercept unit test attempts to run RDPMC
even though Hyper-V doesn't support PMU is that L1 exposes RDPMC-exiting
support to L2, which it is reasonable to assume is supported only when the
CPU supports a PMU to begin with.

The above issue can easily be simulated by modifying the
vmx_instruction_intercept config in x86/unittests.cfg to run QEMU with
"-cpu host,+vmx,-pmu" and running the unit test.

To handle the issue, change KVM to expose RDPMC-exiting only when the guest
supports a PMU.

Reported-by: Saar Amar <saaramar@microsoft.com>
Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: x86: Raise #GP when guest vCPU do not support PMU
Liran Alon [Mon, 25 Mar 2019 19:10:17 +0000 (21:10 +0200)]
KVM: x86: Raise #GP when guest vCPU do not support PMU

Before this change, reading a VMware pseudo PMC would succeed even when
the PMU is not supported by the guest.  This can easily be seen by running
the kvm-unit-test vmware_backdoors with the "-cpu host,-pmu" option.

Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
Signed-off-by: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agox86/kvm: move kvm_load/put_guest_xcr0 into atomic context
WANG Chao [Fri, 12 Apr 2019 07:55:39 +0000 (15:55 +0800)]
x86/kvm: move kvm_load/put_guest_xcr0 into atomic context

Guest xcr0 could leak into the host when an MCE happens in guest mode,
because do_machine_check() could schedule out at a few places.

For example:

kvm_load_guest_xcr0
...
kvm_x86_ops->run(vcpu) {
  vmx_vcpu_run
    vmx_complete_atomic_exit
      kvm_machine_check
        do_machine_check
          do_memory_failure
            memory_failure
              lock_page

In this case, host_xcr0 is 0x2ff and the guest vcpu xcr0 is 0xff.  After
scheduling out, the host cpu has the guest xcr0 loaded (0xff).

In __switch_to {
     switch_fpu_finish
       copy_kernel_to_fpregs
         XRSTORS

If any bit i in XSTATE_BV[i] == 1 and xcr0[i] == 0, XRSTORS will
generate #GP (In this case, bit 9). Then ex_handler_fprestore kicks in
and tries to reinitialize fpu by restoring init fpu state. Same story as
last #GP, except we get DOUBLE FAULT this time.
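
The fix, per the subject, is to load/put the guest xcr0 inside the
preemption-disabled run path instead of around it; roughly:

    /* e.g. in vmx_vcpu_run(), with preemption/interrupts disabled: */
    kvm_load_guest_xcr0(vcpu);
    /* ... VM entry/exit, including the atomic #MC handling ... */
    kvm_put_guest_xcr0(vcpu);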

Cc: stable@vger.kernel.org
Signed-off-by: WANG Chao <chao.wang@ucloud.cn>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: x86: svm: make sure NMI is injected after nmi_singlestep
Vitaly Kuznetsov [Wed, 3 Apr 2019 14:06:42 +0000 (16:06 +0200)]
KVM: x86: svm: make sure NMI is injected after nmi_singlestep

I noticed that the apic test from kvm-unit-tests always hangs on my EPYC
7401P: the hanging test, nmi-after-sti, tries to deliver 30000 NMIs, and
tracing shows that we're sometimes able to deliver a few but never all of
them.

When we're trying to inject an NMI we may fail to do so immediately for
various reasons, however, we still need to inject it so enable_nmi_window()
arms nmi_singlestep mode. #DB occurs as expected, but we're not checking
for pending NMIs before entering the guest and unless there's a different
event to process, the NMI will never get delivered.

Make KVM_REQ_EVENT request on the vCPU from db_interception() to make sure
pending NMIs are checked and possibly injected.
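
A sketch of the change in db_interception() (surrounding logic elided):

    if (svm->nmi_singlestep) {
        disable_nmi_singlestep(svm);
        /* re-evaluate pending events (the NMI) before the next VM entry */
        kvm_make_request(KVM_REQ_EVENT, &svm->vcpu);
    }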

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agosvm/avic: Fix invalidate logical APIC id entry
Suthikulpanit, Suravee [Tue, 26 Mar 2019 03:57:37 +0000 (03:57 +0000)]
svm/avic: Fix invalidate logical APIC id entry

Only clear the valid bit when invalidating a logical APIC id entry.
The current logic clears the valid bit, but also sets the rest of
the bits (including reserved bits) to 1.

Fixes: 98d90582be2e ('svm: Fix AVIC DFR and LDR handling')
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoRevert "svm: Fix AVIC incomplete IPI emulation"
Suthikulpanit, Suravee [Wed, 20 Mar 2019 08:12:28 +0000 (08:12 +0000)]
Revert "svm: Fix AVIC incomplete IPI emulation"

This reverts commit bb218fbcfaaa3b115d4cd7a43c0ca164f3a96e57.

As Oren Twaig pointed out in the old discussion:

  https://patchwork.kernel.org/patch/8292231/

the change could potentially cause an extra IPI to be sent to the
destination vcpu, because the AVIC hardware already set the IRR bit
before the incomplete IPI #VMEXIT with id=1 (target vcpu is not running),
and writing to ICR and ICR2 will also set the IRR.  If something triggers
the destination vcpu to get scheduled before the emulation finishes, then
this could result in an additional IPI.

Also, the issue mentioned in the commit bb218fbcfaaa was misdiagnosed.

Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Reported-by: Oren Twaig <oren@scalemp.com>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agokvm: mmu: Fix overflow on kvm mmu page limit calculation
Ben Gardon [Mon, 8 Apr 2019 18:07:30 +0000 (11:07 -0700)]
kvm: mmu: Fix overflow on kvm mmu page limit calculation

KVM bases its memory usage limits on the total number of guest pages
across all memslots. However, those limits, and the calculations to
produce them, use 32 bit unsigned integers. This can result in overflow
if a VM has more guest pages than can be represented by a u32. As a
result of this overflow, KVM can use a low limit on the number of MMU
pages it will allocate. This makes KVM unable to map all of guest memory
at once, prompting spurious faults.
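
For example, with the default ratio of 20 MMU pages per 1000 guest pages
(assuming KVM_PERMILLE_MMU_PAGES is still 20), a roughly 1 TiB guest has
about 268 million 4 KiB pages; 268M * 20 no longer fits in a u32, so the
intermediate product wraps and the computed limit ends up much smaller
than intended.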

Tested: Ran all kvm-unit-tests on an Intel Haswell machine. This patch
introduced no new failures.

Signed-off-by: Ben Gardon <bgardon@google.com>
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: nVMX: always use early vmcs check when EPT is disabled
Paolo Bonzini [Mon, 15 Apr 2019 13:57:19 +0000 (15:57 +0200)]
KVM: nVMX: always use early vmcs check when EPT is disabled

The remaining failures of vmx.flat when EPT is disabled are caused by
incorrectly reflecting VMfails to the L1 hypervisor.  What happens is
that nested_vmx_restore_host_state corrupts the guest CR3, reloading it
with the host's shadow CR3 instead, because it blindly loads GUEST_CR3
from the vmcs01.

For simplicity let's just always use hardware VMCS checks when EPT is
disabled.  This way, nested_vmx_restore_host_state is not reached at
all (or at least shouldn't be reached).

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: nVMX: allow tests to use bad virtual-APIC page address
Paolo Bonzini [Mon, 15 Apr 2019 13:16:17 +0000 (15:16 +0200)]
KVM: nVMX: allow tests to use bad virtual-APIC page address

As mentioned in the comment, there are some special cases where we can simply
clear the TPR shadow bit from the CPU-based execution controls in the vmcs02.
Handle them so that we can remove some XFAILs from vmx.flat.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: x86/mmu: Fix an inverted list_empty() check when zapping sptes
Sean Christopherson [Sat, 13 Apr 2019 02:55:41 +0000 (19:55 -0700)]
KVM: x86/mmu: Fix an inverted list_empty() check when zapping sptes

A recently introduced helper for handling zap vs. remote flush
incorrectly bails early, effectively leaking defunct shadow pages.
Manifests as a slab BUG when exiting KVM due to the shadow pages
being alive when their associated cache is destroyed.

==========================================================================
BUG kvm_mmu_page_header: Objects remaining in kvm_mmu_page_header on ...
--------------------------------------------------------------------------
Disabling lock debugging due to kernel taint
INFO: Slab 0x00000000fc436387 objects=26 used=23 fp=0x00000000d023caee ...
CPU: 6 PID: 4315 Comm: rmmod Tainted: G    B             5.1.0-rc2+ #19
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Call Trace:
 dump_stack+0x46/0x5b
 slab_err+0xad/0xd0
 ? on_each_cpu_mask+0x3c/0x50
 ? ksm_migrate_page+0x60/0x60
 ? on_each_cpu_cond_mask+0x7c/0xa0
 ? __kmalloc+0x1ca/0x1e0
 __kmem_cache_shutdown+0x13a/0x310
 shutdown_cache+0xf/0x130
 kmem_cache_destroy+0x1d5/0x200
 kvm_mmu_module_exit+0xa/0x30 [kvm]
 kvm_arch_exit+0x45/0x60 [kvm]
 kvm_exit+0x6f/0x80 [kvm]
 vmx_exit+0x1a/0x50 [kvm_intel]
 __x64_sys_delete_module+0x153/0x1f0
 ? exit_to_usermode_loop+0x88/0xc0
 do_syscall_64+0x4f/0x100
 entry_SYSCALL_64_after_hwframe+0x44/0xa9

Fixes: a21136345cb6f ("KVM: x86/mmu: Split remote_flush+zap case out of kvm_mmu_flush_or_zap()")
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoBluetooth: btusb: request wake pin with NOAUTOEN
Brian Norris [Tue, 9 Apr 2019 18:49:17 +0000 (11:49 -0700)]
Bluetooth: btusb: request wake pin with NOAUTOEN

Badly-designed systems might have (for example) active-high wake pins
that default to high (e.g., because of external pull-ups) until active
firmware starts driving them low.  This can cause an interrupt storm in
the time between request_irq() and disable_irq().

We don't support shared interrupts here, so let's just pre-configure the
interrupt to avoid auto-enabling it.
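
A sketch of the pattern (variable and handler names here are illustrative,
not the exact btusb code): mark the IRQ as NOAUTOEN before requesting it,
so it stays masked until the driver explicitly enables it once it is ready
to handle wake events.

    irq_set_status_flags(irq, IRQ_NOAUTOEN);
    ret = devm_request_irq(dev, irq, oob_wake_handler, 0, "oob-wake", data);
    if (ret)
        return ret;
    /* later, when wake interrupts can actually be handled: */
    enable_irq(irq);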

Fixes: fd913ef7ce61 ("Bluetooth: btusb: Add out-of-band wakeup support")
Fixes: 5364a0b4f4be ("arm64: dts: rockchip: move QCA6174A wakeup pin into its USB node")
Signed-off-by: Brian Norris <briannorris@chromium.org>
Reviewed-by: Matthias Kaehlcke <mka@chromium.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
5 years agoMerge tag 'mips_fixes_5.1_2' of git://git.kernel.org/pub/scm/linux/kernel/git/mips...
Linus Torvalds [Wed, 10 Apr 2019 02:27:18 +0000 (16:27 -1000)]
Merge tag 'mips_fixes_5.1_2' of git://git./linux/kernel/git/mips/linux

Pull MIPS fixes from Paul Burton:
 "A few minor MIPS fixes:

   - Provide struct pt_regs * from get_irq_regs() to kgdb_nmicallback()
     when handling an IPI triggered by kgdb_roundup_cpus(), matching the
     behavior of other architectures & resolving kgdb issues for SMP
     systems.

   - Defer a pointer dereference until after a NULL check in the
     irq_shutdown callback for SGI IP27 HUB interrupts.

   - A defconfig update for the MSCC Ocelot to enable some necessary
     drivers"

* tag 'mips_fixes_5.1_2' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux:
  MIPS: generic: Add switchdev, pinctrl and fit to ocelot_defconfig
  MIPS: SGI-IP27: Fix use of unchecked pointer in shutdown_bridge_irq
  MIPS: KGDB: fix kgdb support for SMP platforms.

5 years agoMerge branch 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
Linus Torvalds [Wed, 10 Apr 2019 02:20:59 +0000 (16:20 -1000)]
Merge branch 'fixes' of git://git./linux/kernel/git/viro/vfs

Pull misc fixes from Al Viro:
 "A few regression fixes from this cycle"

* 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  aio: use kmem_cache_free() instead of kfree()
  iov_iter: Fix build error without CONFIG_CRYPTO
  aio: Fix an error code in __io_submit_one()

5 years agoMerge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Linus Torvalds [Tue, 9 Apr 2019 03:10:46 +0000 (17:10 -1000)]
Merge git://git./linux/kernel/git/davem/net

Pull networking fixes from David Miller:

 1) Off by one and bounds checking fixes in NFC, from Dan Carpenter.

 2) There have been many weird regressions in r8169 since we turned ASPM
    support on; some are still neither understood nor completely resolved.
    Let's turn this back off for now. From Heiner Kallweit.

 3) Signedness fixes for ethtool speed value handling, from Michael
    Zhivich.

 4) Handle timestamps properly in macb driver, from Paul Thomas.

 5) Two erspan fixes, it's the usual "skb ->data potentially reallocated
    and we're holding a stale protocol header pointer". From Lorenzo
    Bianconi.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net:
  bnxt_en: Reset device on RX buffer errors.
  bnxt_en: Improve RX consumer index validity check.
  net: macb driver, check for SKBTX_HW_TSTAMP
  qlogic: qlcnic: fix use of SPEED_UNKNOWN ethtool constant
  broadcom: tg3: fix use of SPEED_UNKNOWN ethtool constant
  ethtool: avoid signed-unsigned comparison in ethtool_validate_speed()
  net: ip6_gre: fix possible use-after-free in ip6erspan_rcv
  net: ip_gre: fix possible use-after-free in erspan_rcv
  r8169: disable ASPM again
  MAINTAINERS: ieee802154: update documentation file pattern
  net: vrf: Fix ping failed when vrf mtu is set to 0
  selftests: add a tc matchall test case
  nfc: nci: Potential off by one in ->pipes[] array
  NFC: nci: Add some bounds checking in nci_hci_cmd_received()

5 years agoMerge branch 'fixes-v5.1' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris...
Linus Torvalds [Tue, 9 Apr 2019 03:06:43 +0000 (17:06 -1000)]
Merge branch 'fixes-v5.1' of git://git./linux/kernel/git/jmorris/linux-security

Pull TPM fixes from James Morris:
 "From Jarkko: These are critical fixes for v5.1. Contains also couple
  of new selftests for v5.1 features (partial reads in /dev/tpm0)"

* 'fixes-v5.1' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security:
  selftests/tpm2: Open tpm dev in unbuffered mode
  selftests/tpm2: Extend tests to cover partial reads
  KEYS: trusted: fix -Wvarags warning
  tpm: Fix the type of the return value in calc_tpm2_event_size()
  KEYS: trusted: allow trusted.ko to initialize w/o a TPM
  tpm: fix an invalid condition in tpm_common_poll
  tpm: turn on TPM on suspend for TPM 1.x

5 years agoMerge tag 'xtensa-20190408' of git://github.com/jcmvbkbc/linux-xtensa
Linus Torvalds [Tue, 9 Apr 2019 03:04:42 +0000 (17:04 -1000)]
Merge tag 'xtensa-20190408' of git://github.com/jcmvbkbc/linux-xtensa

Pull xtensa fixes from Max Filippov:

 - fix syscall number passed to trace_sys_exit

 - fix syscall number initialization in start_thread

 - fix level interpretation in the return_address

 - fix format string warning in init_pmd

* tag 'xtensa-20190408' of git://github.com/jcmvbkbc/linux-xtensa:
  xtensa: fix format string warning in init_pmd
  xtensa: fix return_address
  xtensa: fix initialization of pt_regs::syscall in start_thread
  xtensa: use actual syscall number in do_syscall_trace_leave

5 years agoMerge branch 'bnxt_en-fixes'
David S. Miller [Mon, 8 Apr 2019 23:39:41 +0000 (16:39 -0700)]
Merge branch 'bnxt_en-fixes'

Michael Chan says:

====================
bnxt_en: 2 bug fixes.

The first patch prevents a possible driver crash if we get a bad RX index
from the hardware.  The second patch resets the device when the hardware
reports a buffer error, in order to recover from the error.

Please queue these for -stable also.  Thanks.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agobnxt_en: Reset device on RX buffer errors.
Michael Chan [Mon, 8 Apr 2019 21:39:55 +0000 (17:39 -0400)]
bnxt_en: Reset device on RX buffer errors.

If the RX completion indicates RX buffer errors, the RX ring will be
disabled by firmware and no packets will be received on that ring from
that point on.  Recover by resetting the device.

Fixes: c0c050c58d84 ("bnxt_en: New Broadcom ethernet driver.")
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agobnxt_en: Improve RX consumer index validity check.
Michael Chan [Mon, 8 Apr 2019 21:39:54 +0000 (17:39 -0400)]
bnxt_en: Improve RX consumer index validity check.

There is logic to check that the RX/TPA consumer index is the expected
index, to work around a hardware problem.  However, the potentially bad
consumer index is first used to index into an array to reference an entry.
This can potentially crash if the bad consumer index is beyond the legal
range.  Improve the logic so that the consumer index is only used for
dereferencing after the validity check, and log an error message.
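
The general shape of the change (structure and field names below are
illustrative, not the driver's exact code): validate the index reported by
hardware before it is ever used to index the ring array.

    if (unlikely(cons != expected_cons)) {
        netdev_warn(bp->dev, "RX cons %u does not match expected cons %u\n",
                    cons, expected_cons);
        goto report_error;  /* trigger recovery instead of indexing the ring */
    }
    rx_buf = &rxr->rx_buf_ring[cons];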

Fixes: fa7e28127a5a ("bnxt_en: Add workaround to detect bad opaque in rx completion (part 2)")
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonet: macb driver, check for SKBTX_HW_TSTAMP
Paul Thomas [Mon, 8 Apr 2019 19:37:54 +0000 (15:37 -0400)]
net: macb driver, check for SKBTX_HW_TSTAMP

Make sure SKBTX_HW_TSTAMP (i.e. SOF_TIMESTAMPING_TX_HARDWARE) has been
enabled for this skb.  This fixes the issue where normal sockets that
aren't expecting a timestamp would not wake up on select(), while sockets
that do request SOF_TIMESTAMPING_TX_HARDWARE still get their timestamps.
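
A sketch of the check (simplified; the surrounding TX path and the
timestamping helper are assumed, not quoted from the driver): only hand the
skb to the hardware-timestamp code when the socket actually asked for it.

    if (skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) {
        /* socket requested SOF_TIMESTAMPING_TX_HARDWARE */
        do_hw_tx_timestamp(queue, skb);  /* hypothetical helper */
    }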

Signed-off-by: Paul Thomas <pthomas8589@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agoMerge branch 'ethtool-fix-use-of-SPEED_UNKNOWN-constant'
David S. Miller [Mon, 8 Apr 2019 23:30:43 +0000 (16:30 -0700)]
Merge branch 'ethtool-fix-use-of-SPEED_UNKNOWN-constant'

Michael Zhivich says:

====================
ethtool: fix use of SPEED_UNKNOWN constant

This patch series addresses 2 related issues:

1. ethtool_validate_speed() triggers a "signed-unsigned comparison"
warning due to the type difference between the SPEED_UNKNOWN constant
(int) and the argument to ethtool_validate_speed() (__u32).

2. some drivers use u16 storage for the SPEED_UNKNOWN constant,
resulting in value truncation and thus a failure to test against
SPEED_UNKNOWN correctly.

This revised series addresses several feedback comments:
- split up the patch into a series
- do not unnecessarily change drivers that use "int" storage
  for speed values
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agoqlogic: qlcnic: fix use of SPEED_UNKNOWN ethtool constant
Michael Zhivich [Mon, 8 Apr 2019 14:48:47 +0000 (10:48 -0400)]
qlogic: qlcnic: fix use of SPEED_UNKNOWN ethtool constant

The qlcnic driver uses a u16 to store the SPEED_UNKNOWN ethtool constant,
which is defined as -1, resulting in value truncation and thus
incorrect test results against SPEED_UNKNOWN.

For example, the following test will print "False":

    u16 speed = SPEED_UNKNOWN;

    if (speed == SPEED_UNKNOWN)
        printf("True");
    else
        printf("False");

Change storage of speed to use u32 to avoid this issue.

Signed-off-by: Michael Zhivich <mzhivich@akamai.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agobroadcom: tg3: fix use of SPEED_UNKNOWN ethtool constant
Michael Zhivich [Mon, 8 Apr 2019 14:48:46 +0000 (10:48 -0400)]
broadcom: tg3: fix use of SPEED_UNKNOWN ethtool constant

The tg3 driver uses a u16 to store the SPEED_UNKNOWN ethtool constant,
which is defined as -1, resulting in value truncation and thus
incorrect test results against SPEED_UNKNOWN.

For example, the following test will print "False":

    u16 speed = SPEED_UNKNOWN;

    if (speed == SPEED_UNKNOWN)
        printf("True");
    else
        printf("False");

Change storage of speed to use u32 to avoid this issue.

Signed-off-by: Michael Zhivich <mzhivich@akamai.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agoethtool: avoid signed-unsigned comparison in ethtool_validate_speed()
Michael Zhivich [Mon, 8 Apr 2019 14:48:45 +0000 (10:48 -0400)]
ethtool: avoid signed-unsigned comparison in ethtool_validate_speed()

When building C++ userspace code that includes ethtool.h
with "-Werror -Wall", g++ complains about a signed-unsigned comparison in
ethtool_validate_speed() due to the definition of SPEED_UNKNOWN as -1.

Explicitly cast SPEED_UNKNOWN to __u32 to match the type of the
ethtool_validate_speed() argument.
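
The resulting helper looks roughly like this (paraphrased, not copied
verbatim from ethtool.h):

    static inline int ethtool_validate_speed(__u32 speed)
    {
        /* The cast keeps the comparison unsigned on both sides, so g++ no
         * longer warns about comparing __u32 with an int defined as -1. */
        return speed <= INT_MAX || speed == (__u32)SPEED_UNKNOWN;
    }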

Signed-off-by: Michael Zhivich <mzhivich@akamai.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agoMerge branch 'erspan-use-after-free'
David S. Miller [Mon, 8 Apr 2019 23:16:47 +0000 (16:16 -0700)]
Merge branch 'erspan-use-after-free'

Lorenzo Bianconi says:

====================
fix possible use-after-free in erspan_v{4,6}

Similar to what I did in commit bb9bd814ebf0 ("ipv6: sit: reset ip
header pointer in ipip6_rcv"), fix a possible use-after-free in
erspan_rcv and ip6erspan_rcv when extracting tunnel metadata, since the
packet can be 'uncloned' while running __iptunnel_pull_header.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonet: ip6_gre: fix possible use-after-free in ip6erspan_rcv
Lorenzo Bianconi [Sat, 6 Apr 2019 15:16:53 +0000 (17:16 +0200)]
net: ip6_gre: fix possible use-after-free in ip6erspan_rcv

erspan_v6 tunnels run __iptunnel_pull_header on received skbs to remove
the erspan header. This can lead to a possible use-after-free when
accessing the pkt_md pointer in ip6erspan_rcv, since the packet will be
'uncloned' by pskb_expand_head if it is a cloned gso skb (e.g. if the
packet has been sent through a veth device). Fix it by resetting the
pkt_md pointer after __iptunnel_pull_header.
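
Roughly, the shape of the fix (simplified; header lengths, the metadata
offset, and surrounding context are assumed rather than quoted): any pointer
derived from skb->data before __iptunnel_pull_header() must be recomputed
afterwards, because pskb_expand_head() may have moved the packet data.

    if (__iptunnel_pull_header(skb, hdr_len, htons(ETH_P_TEB), false, false) < 0)
        goto drop;
    /* skb->data may have been reallocated; recompute the metadata pointer */
    pkt_md = (struct erspan_metadata *)(skb->data + md_offset);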

Fixes: 1d7e2ed22f8d ("net: erspan: refactor existing erspan code")
Signed-off-by: Lorenzo Bianconi <lorenzo.bianconi@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agonet: ip_gre: fix possible use-after-free in erspan_rcv
Lorenzo Bianconi [Sat, 6 Apr 2019 15:16:52 +0000 (17:16 +0200)]
net: ip_gre: fix possible use-after-free in erspan_rcv

erspan tunnels run __iptunnel_pull_header on received skbs to remove
the gre and erspan headers. This can lead to a possible use-after-free
when accessing the pkt_md pointer in erspan_rcv, since the packet will be
'uncloned' by pskb_expand_head if it is a cloned gso skb (e.g. if the
packet has been sent through a veth device). Fix it by resetting the
pkt_md pointer after __iptunnel_pull_header.

Fixes: 1d7e2ed22f8d ("net: erspan: refactor existing erspan code")
Signed-off-by: Lorenzo Bianconi <lorenzo.bianconi@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5 years agoselftests/tpm2: Open tpm dev in unbuffered mode
Tadeusz Struk [Tue, 12 Feb 2019 23:42:05 +0000 (15:42 -0800)]
selftests/tpm2: Open tpm dev in unbuffered mode

In order to have control over how many bytes are read or written, the
device needs to be opened in unbuffered mode.
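
For reference, the same idea sketched in C rather than the selftest code
(illustrative only, not part of the patch): disable stdio buffering so the
process reads exactly the bytes it asks for, with no read-ahead.

    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/dev/tpm0", "r+b");

        if (!f)
            return 1;
        /* unbuffered: each fread()/fwrite() maps directly to the device */
        setvbuf(f, NULL, _IONBF, 0);
        fclose(f);
        return 0;
    }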

Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Tested-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: James Morris <james.morris@microsoft.com>