io_uring: fix unprotected iopoll overflow
author	Pavel Begunkov <asml.silence@gmail.com>
	Thu, 7 Sep 2023 12:50:08 +0000 (13:50 +0100)
committer	Jens Axboe <axboe@kernel.dk>
	Thu, 7 Sep 2023 15:02:29 +0000 (09:02 -0600)
[   71.490669] WARNING: CPU: 3 PID: 17070 at io_uring/io_uring.c:769
io_cqring_event_overflow+0x47b/0x6b0
[   71.498381] Call Trace:
[   71.498590]  <TASK>
[   71.501858]  io_req_cqe_overflow+0x105/0x1e0
[   71.502194]  __io_submit_flush_completions+0x9f9/0x1090
[   71.503537]  io_submit_sqes+0xebd/0x1f00
[   71.503879]  __do_sys_io_uring_enter+0x8c5/0x2380
[   71.507360]  do_syscall_64+0x39/0x80

We decoupled CQ locking from ->task_complete but haven't fixed up the
places that force locking for CQ overflows. IOPOLL rings run with a
lockless CQ even when ->task_complete is unset, so the overflow slow
path fell through to posting the CQE without ->completion_lock held,
triggering the warning above. Key the conditional locking on
->lockless_cq instead (see the sketch after the diff).

Fixes: ec26c225f06f5 ("io_uring: merge iopoll and normal completion paths")
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 58d8dd3..090913a 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -908,7 +908,7 @@ static void __io_flush_post_cqes(struct io_ring_ctx *ctx)
                struct io_uring_cqe *cqe = &ctx->completion_cqes[i];
 
                if (!io_fill_cqe_aux(ctx, cqe->user_data, cqe->res, cqe->flags)) {
-                       if (ctx->task_complete) {
+                       if (ctx->lockless_cq) {
                                spin_lock(&ctx->completion_lock);
                                io_cqring_event_overflow(ctx, cqe->user_data,
                                                        cqe->res, cqe->flags, 0, 0);
@@ -1566,7 +1566,7 @@ void __io_submit_flush_completions(struct io_ring_ctx *ctx)
 
                if (!(req->flags & REQ_F_CQE_SKIP) &&
                    unlikely(!io_fill_cqe_req(ctx, req))) {
-                       if (ctx->task_complete) {
+                       if (ctx->lockless_cq) {
                                spin_lock(&ctx->completion_lock);
                                io_req_cqe_overflow(req);
                                spin_unlock(&ctx->completion_lock);
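
For reference, a minimal sketch of the pattern both hunks converge on.
io_cqring_event_overflow() and the ctx fields are the real upstream
names; the post_overflow() wrapper itself is hypothetical and only
illustrates the locking rule:

	/*
	 * Sketch only, not kernel code. In lockless-CQ mode the
	 * completion path does not hold ->completion_lock, so the
	 * overflow slow path must take it itself. Otherwise the
	 * caller already holds the lock and the overflow CQE can
	 * be posted directly.
	 */
	static void post_overflow(struct io_ring_ctx *ctx, u64 user_data,
				  s32 res, u32 cflags)
	{
		if (ctx->lockless_cq) {
			spin_lock(&ctx->completion_lock);
			io_cqring_event_overflow(ctx, user_data, res,
						 cflags, 0, 0);
			spin_unlock(&ctx->completion_lock);
		} else {
			io_cqring_event_overflow(ctx, user_data, res,
						 cflags, 0, 0);
		}
	}

Gating on ->task_complete missed IOPOLL rings, which are lockless-CQ
without necessarily having ->task_complete set; ->lockless_cq covers
both cases.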