io_uring: kill poll linking optimisation
Author:     Pavel Begunkov <asml.silence@gmail.com>
AuthorDate: Mon, 29 Aug 2022 13:30:16 +0000 (14:30 +0100)
Commit:     Greg Kroah-Hartman <gregkh@linuxfoundation.org>
CommitDate: Mon, 5 Sep 2022 08:30:05 +0000 (10:30 +0200)
[ upstream commit ab1dab960b8352cee082db0f8a54dc92a948bfd7 ]

With IORING_FEAT_FAST_POLL in place, io_put_req_find_next() for poll
requests doesn't make much sense; in any case, re-adding it later
shouldn't be a problem considering the batching in tctx_task_work().
We can remove it.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/15699682bf81610ec901d4e79d6da64baa9f70be.1639605189.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
[pavel: backport]
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 0221056..2b33924 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -5460,7 +5460,6 @@ static inline bool io_poll_complete(struct io_kiocb *req, __poll_t mask)
 static void io_poll_task_func(struct io_kiocb *req, bool *locked)
 {
        struct io_ring_ctx *ctx = req->ctx;
-       struct io_kiocb *nxt;
 
        if (io_poll_rewait(req, &req->poll)) {
                spin_unlock(&ctx->completion_lock);
@@ -5484,11 +5483,8 @@ static void io_poll_task_func(struct io_kiocb *req, bool *locked)
                spin_unlock(&ctx->completion_lock);
                io_cqring_ev_posted(ctx);
 
-               if (done) {
-                       nxt = io_put_req_find_next(req);
-                       if (nxt)
-                               io_req_task_submit(nxt, locked);
-               }
+               if (done)
+                       io_put_req(req);
        }
 }