From: Jens Axboe
Date: Mon, 29 Jan 2024 18:57:11 +0000 (-0700)
Subject: io_uring/poll: add requeue return code from poll multishot handling
X-Git-Tag: v6.6.17~6
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=7cbd3aa59db5baa761fb7c401abe1f379e5115a0;p=platform%2Fkernel%2Flinux-rpi.git

io_uring/poll: add requeue return code from poll multishot handling

Commit 704ea888d646cb9d715662944cf389c823252ee0 upstream.

Since our poll handling is edge triggered, multishot handlers retry
internally until they know that no more data is available. In
preparation for limiting these retries, add an internal return code,
IOU_REQUEUE, which can be used to inform the poll backend about the
handler wanting to retry, but that this should happen through a normal
task_work requeue rather than keep hammering on the issue side for this
one request.

No functional changes in this patch, nobody is using this return code
just yet.

Signed-off-by: Jens Axboe
Signed-off-by: Greg Kroah-Hartman
---

diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index d2bad1d..c8cba7831 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -31,6 +31,13 @@ enum {
 	IOU_ISSUE_SKIP_COMPLETE	= -EIOCBQUEUED,
 
 	/*
+	 * Requeue the task_work to restart operations on this request. The
+	 * actual value isn't important, should just be not an otherwise
+	 * valid error code, yet less than -MAX_ERRNO and valid internally.
+	 */
+	IOU_REQUEUE		= -3072,
+
+	/*
 	 * Intended only when both IO_URING_F_MULTISHOT is passed
 	 * to indicate to the poll runner that multishot should be
 	 * removed and the result is set on req->cqe.res.
diff --git a/io_uring/poll.c b/io_uring/poll.c
index b0bc785..48ca081 100644
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -226,6 +226,7 @@ enum {
 	IOU_POLL_NO_ACTION = 1,
 	IOU_POLL_REMOVE_POLL_USE_RES = 2,
 	IOU_POLL_REISSUE = 3,
+	IOU_POLL_REQUEUE = 4,
 };
 
 static void __io_poll_execute(struct io_kiocb *req, int mask)
@@ -324,6 +325,8 @@ static int io_poll_check_events(struct io_kiocb *req, struct io_tw_state *ts)
 			int ret = io_poll_issue(req, ts);
 			if (ret == IOU_STOP_MULTISHOT)
 				return IOU_POLL_REMOVE_POLL_USE_RES;
+			else if (ret == IOU_REQUEUE)
+				return IOU_POLL_REQUEUE;
 			if (ret < 0)
 				return ret;
 		}
@@ -346,8 +349,12 @@ void io_poll_task_func(struct io_kiocb *req, struct io_tw_state *ts)
 	int ret;
 
 	ret = io_poll_check_events(req, ts);
-	if (ret == IOU_POLL_NO_ACTION)
+	if (ret == IOU_POLL_NO_ACTION) {
 		return;
+	} else if (ret == IOU_POLL_REQUEUE) {
+		__io_poll_execute(req, 0);
+		return;
+	}
 
 	io_poll_remove_entries(req);
 	io_poll_tw_hash_eject(req, ts);
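
Note (not part of the patch): for a feel of the control flow this change wires
up, below is a minimal userspace C sketch, not kernel code. It models a
multishot handler returning IOU_REQUEUE, the check-events step mapping that to
IOU_POLL_REQUEUE, and the task func responding with a task_work-style requeue
instead of retrying inline. Only the return-code names come from the patch;
the fake request struct, the stub handler, the stand-in numeric value for
IOU_STOP_MULTISHOT, and the driving loop are assumptions made purely so the
example compiles and runs on its own.

/* Illustrative sketch only; names reused from the patch for readability. */
#include <stdio.h>

enum {
	IOU_STOP_MULTISHOT = -1000,	/* arbitrary stand-in value for this sketch */
	IOU_REQUEUE        = -3072,	/* return code introduced by this patch */
};

enum {
	IOU_POLL_NO_ACTION = 1,
	IOU_POLL_REMOVE_POLL_USE_RES = 2,
	IOU_POLL_REQUEUE = 4,		/* internal poll action added by this patch */
};

struct fake_req { int retries; };

/* Stub multishot handler: pretend data may still be pending twice, then stop. */
static int fake_poll_issue(struct fake_req *req)
{
	if (req->retries < 2)
		return IOU_REQUEUE;	/* ask the backend to requeue via task_work */
	return IOU_STOP_MULTISHOT;
}

/* Mirrors the mapping added in io_poll_check_events(). */
static int fake_check_events(struct fake_req *req)
{
	int ret = fake_poll_issue(req);

	if (ret == IOU_STOP_MULTISHOT)
		return IOU_POLL_REMOVE_POLL_USE_RES;
	else if (ret == IOU_REQUEUE)
		return IOU_POLL_REQUEUE;
	return IOU_POLL_NO_ACTION;
}

/* Stand-in for __io_poll_execute(): just records that a requeue happened. */
static void fake_poll_execute(struct fake_req *req)
{
	req->retries++;
	printf("requeue via task_work (attempt %d)\n", req->retries);
}

int main(void)
{
	struct fake_req req = { .retries = 0 };
	int ret;

	/* The loop stands in for repeated task_work invocations of the poll func. */
	while ((ret = fake_check_events(&req)) == IOU_POLL_REQUEUE)
		fake_poll_execute(&req);

	printf("final action: %d (remove poll, use res)\n", ret);
	return 0;
}

The point of the indirection is visible in the loop: the handler never spins
on the issue side itself; it hands control back and gets re-invoked through
the normal requeue path, which is what later patches use to cap the retries.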