From: Peter Zijlstra
Date: Thu, 8 Dec 2016 15:42:14 +0000 (+0100)
Subject: x86/paravirt: Fix native_patch()
X-Git-Tag: v4.14-rc1~1963^2~1
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=45dbea5f55c05980cbb4c30047c71a820cd3f282;p=platform%2Fkernel%2Flinux-rpi3.git

x86/paravirt: Fix native_patch()

While chasing a regression I noticed we potentially patch the wrong
code in native_patch().

If we do not select the native code sequence, we must use the default
patcher, not fall through the switch case.

Signed-off-by: Peter Zijlstra (Intel)
Cc: Alok Kataria
Cc: Borislav Petkov
Cc: Chris Wright
Cc: Jeremy Fitzhardinge
Cc: Linus Torvalds
Cc: Pan Xinhui
Cc: Paolo Bonzini
Cc: Peter Anvin
Cc: Peter Zijlstra
Cc: Rusty Russell
Cc: Thomas Gleixner
Cc: kernel test robot
Fixes: 3cded4179481 ("x86/paravirt: Optimize native pv_lock_ops.vcpu_is_preempted()")
Link: http://lkml.kernel.org/r/20161208154349.270616999@infradead.org
Signed-off-by: Ingo Molnar
---

diff --git a/arch/x86/kernel/paravirt_patch_32.c b/arch/x86/kernel/paravirt_patch_32.c
index ff03dbd..33cdec2 100644
--- a/arch/x86/kernel/paravirt_patch_32.c
+++ b/arch/x86/kernel/paravirt_patch_32.c
@@ -58,15 +58,19 @@ unsigned native_patch(u8 type, u16 clobbers, void *ibuf,
 			end   = end_pv_lock_ops_queued_spin_unlock;
 			goto patch_site;
 		}
+		goto patch_default;
+
 	case PARAVIRT_PATCH(pv_lock_ops.vcpu_is_preempted):
 		if (pv_is_native_vcpu_is_preempted()) {
 			start = start_pv_lock_ops_vcpu_is_preempted;
 			end   = end_pv_lock_ops_vcpu_is_preempted;
 			goto patch_site;
 		}
+		goto patch_default;
 #endif
 
 	default:
+patch_default:
 		ret = paravirt_patch_default(type, clobbers, ibuf, addr, len);
 		break;

diff --git a/arch/x86/kernel/paravirt_patch_64.c b/arch/x86/kernel/paravirt_patch_64.c
index e61dd979..b0fceff 100644
--- a/arch/x86/kernel/paravirt_patch_64.c
+++ b/arch/x86/kernel/paravirt_patch_64.c
@@ -70,15 +70,19 @@ unsigned native_patch(u8 type, u16 clobbers, void *ibuf,
 			end   = end_pv_lock_ops_queued_spin_unlock;
 			goto patch_site;
 		}
+		goto patch_default;
+
 	case PARAVIRT_PATCH(pv_lock_ops.vcpu_is_preempted):
 		if (pv_is_native_vcpu_is_preempted()) {
 			start = start_pv_lock_ops_vcpu_is_preempted;
 			end   = end_pv_lock_ops_vcpu_is_preempted;
 			goto patch_site;
 		}
+		goto patch_default;
 #endif
 
 	default:
+patch_default:
 		ret = paravirt_patch_default(type, clobbers, ibuf, addr, len);
 		break;
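
[Editor's illustration, not part of the patch above.] The bug pattern the fix
addresses is a switch case that performs a runtime check and, when the check
fails, falls through into the *next* case instead of reaching the default
patcher. Below is a minimal standalone C sketch of the fixed control flow;
all names (patch_op, patch_native, patch_default_stub, native_available) are
hypothetical stand-ins, not the kernel's real helpers.

#include <stdio.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the real patch routines; illustrative only. */
static int patch_native(const char *what)
{
	printf("native patch: %s\n", what);
	return 1;
}

static int patch_default_stub(const char *what)
{
	printf("default patch: %s\n", what);
	return 0;
}

/* Pretend the native code sequence is only selectable for op 0. */
static bool native_available(int op)
{
	return op == 0;
}

/*
 * Fixed control flow: if the native sequence is not selected, jump to the
 * shared default path. Without the goto, execution would fall through into
 * the next case label and patch the wrong operation.
 */
static int patch_op(int op)
{
	int ret;

	switch (op) {
	case 0:
		if (native_available(op)) {
			ret = patch_native("queued_spin_unlock");
			break;
		}
		goto patch_default;	/* the fix: no fall-through */

	case 1:
		if (native_available(op)) {
			ret = patch_native("vcpu_is_preempted");
			break;
		}
		goto patch_default;

	default:
patch_default:
		ret = patch_default_stub("generic");
		break;
	}

	return ret;
}

int main(void)
{
	patch_op(0);	/* native sequence selected */
	patch_op(1);	/* check fails -> default patcher, not the next case */
	return 0;
}

In the sketch, calling patch_op(1) reaches the default patcher because of the
added goto; before the fix, the equivalent path in native_patch() would have
fallen through and emitted the replacement code for the following case.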