From: Dae R. Jeong
Date: Mon, 27 Mar 2023 12:01:53 +0000 (+0900)
Subject: vmci_host: fix a race condition in vmci_host_poll() causing GPF
X-Git-Tag: v6.6.7~2981^2~76
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=ae13381da5ff0e8e084c0323c3cc0a945e43e9c7;p=platform%2Fkernel%2Flinux-starfive.git

vmci_host: fix a race condition in vmci_host_poll() causing GPF

During fuzzing, a general protection fault is observed in
vmci_host_poll().

general protection fault, probably for non-canonical address
0xdffffc0000000019: 0000 [#1] PREEMPT SMP KASAN
KASAN: null-ptr-deref in range [0x00000000000000c8-0x00000000000000cf]
RIP: 0010:__lock_acquire+0xf3/0x5e00 kernel/locking/lockdep.c:4926
<- omitting registers ->
Call Trace:
 lock_acquire+0x1a4/0x4a0 kernel/locking/lockdep.c:5672
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
 _raw_spin_lock_irqsave+0xb3/0x100 kernel/locking/spinlock.c:162
 add_wait_queue+0x3d/0x260 kernel/sched/wait.c:22
 poll_wait include/linux/poll.h:49 [inline]
 vmci_host_poll+0xf8/0x2b0 drivers/misc/vmw_vmci/vmci_host.c:174
 vfs_poll include/linux/poll.h:88 [inline]
 do_pollfd fs/select.c:873 [inline]
 do_poll fs/select.c:921 [inline]
 do_sys_poll+0xc7c/0x1aa0 fs/select.c:1015
 __do_sys_ppoll fs/select.c:1121 [inline]
 __se_sys_ppoll+0x2cc/0x330 fs/select.c:1101
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x4e/0xa0 arch/x86/entry/common.c:82
 entry_SYSCALL_64_after_hwframe+0x46/0xb0

An example thread interleaving that causes the general protection fault
is as follows:

CPU1 (vmci_host_poll)                    CPU2 (vmci_host_do_init_context)
-----                                    -----
// Read uninitialized context
context = vmci_host_dev->context;

                                         // Initialize context
                                         vmci_host_dev->context = vmci_ctx_create();
                                         vmci_host_dev->ct_type = VMCIOBJ_CONTEXT;

if (vmci_host_dev->ct_type == VMCIOBJ_CONTEXT) {
    // Dereferencing the wrong pointer
    poll_wait(..., &context->host_context);
}

In this scenario, vmci_host_poll() reads vmci_host_dev->context first,
and then reads vmci_host_dev->ct_type to check that
vmci_host_dev->context is initialized. However, since these two reads
are not executed atomically, vmci_host_poll() can observe the stale,
uninitialized context pointer even though the ct_type check passes, as
described above.

To fix this race condition, read vmci_host_dev->context only after
checking the value of vmci_host_dev->ct_type, so that vmci_host_poll()
always reads an initialized context.

Reported-by: Dae R. Jeong
Fixes: 8bf503991f87 ("VMCI: host side driver implementation.")
Signed-off-by: Dae R. Jeong
Link: https://lore.kernel.org/r/ZCGFsdBAU4cYww5l@dragonet
Signed-off-by: Greg Kroah-Hartman
---

diff --git a/drivers/misc/vmw_vmci/vmci_host.c b/drivers/misc/vmw_vmci/vmci_host.c
index 857b985..abe79f6 100644
--- a/drivers/misc/vmw_vmci/vmci_host.c
+++ b/drivers/misc/vmw_vmci/vmci_host.c
@@ -165,10 +165,16 @@ static int vmci_host_close(struct inode *inode, struct file *filp)
 static __poll_t vmci_host_poll(struct file *filp, poll_table *wait)
 {
 	struct vmci_host_dev *vmci_host_dev = filp->private_data;
-	struct vmci_ctx *context = vmci_host_dev->context;
+	struct vmci_ctx *context;
 	__poll_t mask = 0;
 
 	if (vmci_host_dev->ct_type == VMCIOBJ_CONTEXT) {
+		/*
+		 * Read context only if ct_type == VMCIOBJ_CONTEXT to make
+		 * sure that context is initialized
+		 */
+		context = vmci_host_dev->context;
+
 		/* Check for VMCI calls to this VM context. */
 		if (wait)
 			poll_wait(filp, &context->host_context.wait_queue,
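
For readers unfamiliar with the publish-then-flag pattern the fix relies
on, below is a minimal, self-contained userspace sketch (not part of the
patch; fake_dev, fake_ctx and OBJ_CONTEXT are hypothetical stand-ins for
vmci_host_dev, vmci_ctx and VMCIOBJ_CONTEXT). It uses C11
acquire/release atomics to make the required ordering explicit, whereas
the patch itself simply moves the read of context after the ct_type
check:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

enum obj_type { OBJ_NOT_SET, OBJ_CONTEXT };

struct fake_ctx { int payload; };

struct fake_dev {
	struct fake_ctx *context;      /* written before ct_type */
	_Atomic enum obj_type ct_type; /* acts as the "initialized" flag */
};

static struct fake_dev dev;

/* Writer: mirrors vmci_host_do_init_context(): context first, flag second. */
static void *init_thread(void *arg)
{
	struct fake_ctx *ctx = malloc(sizeof(*ctx));

	ctx->payload = 42;
	dev.context = ctx;
	/* Release store: the context write is visible once ct_type is seen. */
	atomic_store_explicit(&dev.ct_type, OBJ_CONTEXT, memory_order_release);
	return NULL;
}

/* Reader: mirrors the fixed vmci_host_poll(): check the flag, THEN read context. */
static void *poll_thread(void *arg)
{
	if (atomic_load_explicit(&dev.ct_type, memory_order_acquire) == OBJ_CONTEXT) {
		/* Safe: context was published before the flag was set. */
		struct fake_ctx *ctx = dev.context;
		printf("payload = %d\n", ctx->payload);
	}
	return NULL;
}

int main(void)
{
	pthread_t w, r;

	pthread_create(&w, NULL, init_thread, NULL);
	pthread_create(&r, NULL, poll_thread, NULL);
	pthread_join(w, NULL);
	pthread_join(r, NULL);
	return 0;
}

Reading the pointer before the flag (the pre-patch order) can capture a
stale, uninitialized context that is then dereferenced after the flag
check passes; reading it only after observing the flag cannot.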