KVM: PPC: Book3S HV: Check caller of H_SVM_* Hcalls
authorLaurent Dufour <ldufour@linux.ibm.com>
Fri, 20 Mar 2020 10:26:42 +0000 (11:26 +0100)
committerPaul Mackerras <paulus@ozlabs.org>
Tue, 24 Mar 2020 02:08:51 +0000 (13:08 +1100)
The hypercalls named H_SVM_* are reserved to the Ultravisor. However, nothing
prevents a malicious VM or SVM from calling them. This could lead to
unexpected results and such calls should be filtered out.

Checking the Secure bit of the calling MSR ensures that the call is coming
from either the Ultravisor or an SVM. Since any hypercall made from an SVM
goes through the Ultravisor, the Ultravisor is expected to filter out such
malicious calls. This way, only the Ultravisor is able to make these
hypercalls.
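
As an illustration only (the patch below open-codes this test in every
H_SVM_* case of kvmppc_pseries_do_hcall(), and the helper name here is
hypothetical), the guard reduces to:

    static bool hcall_is_from_secure_world(struct kvm_vcpu *vcpu)
    {
            /*
             * MSR_S set in SRR1 at hcall time means the vcpu was in
             * secure mode, i.e. the call was reflected through the
             * Ultravisor.
             */
            return kvmppc_get_srr1(vcpu) & MSR_S;
    }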

Cc: Bharata B Rao <bharata@linux.ibm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com>
Reviewed-by: Ram Pai <linuxram@us.ibm.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
arch/powerpc/kvm/book3s_hv.c

index 85e75b1..a308de6 100644 (file)
@@ -1073,25 +1073,35 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu)
                                         kvmppc_get_gpr(vcpu, 6));
                break;
        case H_SVM_PAGE_IN:
-               ret = kvmppc_h_svm_page_in(vcpu->kvm,
-                                          kvmppc_get_gpr(vcpu, 4),
-                                          kvmppc_get_gpr(vcpu, 5),
-                                          kvmppc_get_gpr(vcpu, 6));
+               ret = H_UNSUPPORTED;
+               if (kvmppc_get_srr1(vcpu) & MSR_S)
+                       ret = kvmppc_h_svm_page_in(vcpu->kvm,
+                                                  kvmppc_get_gpr(vcpu, 4),
+                                                  kvmppc_get_gpr(vcpu, 5),
+                                                  kvmppc_get_gpr(vcpu, 6));
                break;
        case H_SVM_PAGE_OUT:
-               ret = kvmppc_h_svm_page_out(vcpu->kvm,
-                                           kvmppc_get_gpr(vcpu, 4),
-                                           kvmppc_get_gpr(vcpu, 5),
-                                           kvmppc_get_gpr(vcpu, 6));
+               ret = H_UNSUPPORTED;
+               if (kvmppc_get_srr1(vcpu) & MSR_S)
+                       ret = kvmppc_h_svm_page_out(vcpu->kvm,
+                                                   kvmppc_get_gpr(vcpu, 4),
+                                                   kvmppc_get_gpr(vcpu, 5),
+                                                   kvmppc_get_gpr(vcpu, 6));
                break;
        case H_SVM_INIT_START:
-               ret = kvmppc_h_svm_init_start(vcpu->kvm);
+               ret = H_UNSUPPORTED;
+               if (kvmppc_get_srr1(vcpu) & MSR_S)
+                       ret = kvmppc_h_svm_init_start(vcpu->kvm);
                break;
        case H_SVM_INIT_DONE:
-               ret = kvmppc_h_svm_init_done(vcpu->kvm);
+               ret = H_UNSUPPORTED;
+               if (kvmppc_get_srr1(vcpu) & MSR_S)
+                       ret = kvmppc_h_svm_init_done(vcpu->kvm);
                break;
        case H_SVM_INIT_ABORT:
-               ret = kvmppc_h_svm_init_abort(vcpu->kvm);
+               ret = H_UNSUPPORTED;
+               if (kvmppc_get_srr1(vcpu) & MSR_S)
+                       ret = kvmppc_h_svm_init_abort(vcpu->kvm);
                break;
 
        default: