io_uring: optimise rsrc referencing
author	Pavel Begunkov <asml.silence@gmail.com>
	Sat, 9 Oct 2021 22:14:41 +0000 (23:14 +0100)
committer	Jens Axboe <axboe@kernel.dk>
	Tue, 19 Oct 2021 11:49:55 +0000 (05:49 -0600)
commit	ab409402478462b5da007bfc46d165587c3adfc3
tree	0f32078a415091abc25f0c091aa902f9e97acde8
parent	a46be971edb69fe4b1dcc4359c3ddf9127629dab

Apparently, percpu_ref_put/get() are expensive enough when done per
request, so take references in batches and cache them on the submission
side to avoid getting them over and over again. Also, if we're
completing under uring_lock, return refs back into the cache instead of
doing percpu_ref_put(). This is pretty similar to how we do
tctx->cached_refs accounting, but fall back to a normal put when the
rsrc node has already changed by the time of free.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/b40d8c5bc77d3c9550df8a319117a374ac85f8f4.1633817310.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
fs/io_uring.c