io_uring: use io_cq_lock consistently
author Pavel Begunkov <asml.silence@gmail.com>
Thu, 8 Sep 2022 12:20:28 +0000 (13:20 +0100)
committer Jens Axboe <axboe@kernel.dk>
Wed, 21 Sep 2022 16:30:43 +0000 (10:30 -0600)
There is one place where we forgot to replace hand-coded spin locking with
io_cq_lock(); change it to be consistent with the rest of the file. Note
that the unlock side already uses __io_cq_unlock_post().

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/91699b9a00a07128f7ca66136bdbbfc67a64659e.1662639236.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_uring/io_uring.c

index 339bc19..b5245c5 100644
@@ -1327,7 +1327,7 @@ static void __io_submit_flush_completions(struct io_ring_ctx *ctx)
        struct io_wq_work_node *node, *prev;
        struct io_submit_state *state = &ctx->submit_state;
 
-       spin_lock(&ctx->completion_lock);
+       io_cq_lock(ctx);
        wq_list_for_each(node, prev, &state->compl_reqs) {
                struct io_kiocb *req = container_of(node, struct io_kiocb,
                                            comp_list);