io_uring: make POLL_ADD/POLL_REMOVE scale better
author	Jens Axboe <axboe@kernel.dk>
	Thu, 14 Nov 2019 19:09:58 +0000 (12:09 -0700)
committer	Jens Axboe <axboe@kernel.dk>
	Thu, 14 Nov 2019 19:09:58 +0000 (12:09 -0700)
commit	eac406c61cd0ec8fe7970ca46ddf23e40a86b579
tree	89f5835a2a93c1a44da3a3679ca16b4954872f12
parent	021d1cdda3875bf35edac9133335f622d7910abc
io_uring: make POLL_ADD/POLL_REMOVE scale better

One of the obvious use cases for these commands is networking, where
it's not uncommon to have tons of sockets open and polled for. The
current implementation uses a list for insertion and lookup, which works
fine for file-based use cases where the count is usually low, but it
breaks down for higher numbers of files / sockets. A test case with
30k sockets being polled for and then cancelled takes:

real    0m6.968s
user    0m0.002s
sys     0m6.936s

with the patch it takes:

real    0m0.233s
user    0m0.010s
sys     0m0.176s

If you go to 50k sockets, it gets even more abysmal with the current
code:

real    0m40.602s
user    0m0.010s
sys     0m40.555s

with the patch it takes:

real    0m0.398s
user    0m0.000s
sys     0m0.341s

The change is pretty straightforward: just replace the cancel_list with
a red/black tree.
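
As a rough sketch of the approach (not the exact patch; the field and
function names below are illustrative), the poll requests are kept in an
rbtree keyed by user_data, so both insertion at POLL_ADD time and lookup
at POLL_REMOVE/cancel time become O(log n) instead of O(n):

    #include <linux/rbtree.h>

    /* Illustrative fields only -- the real structs live in fs/io_uring.c. */
    struct io_ring_ctx {
    	struct rb_root		cancel_tree;	/* was: struct list_head cancel_list */
    };

    struct io_kiocb {
    	struct rb_node		rb_node;	/* was: list membership */
    	u64			user_data;
    };

    /* O(log n) insert, ordered by user_data (the cancellation key). */
    static void io_poll_req_insert(struct io_ring_ctx *ctx, struct io_kiocb *req)
    {
    	struct rb_node **p = &ctx->cancel_tree.rb_node;
    	struct rb_node *parent = NULL;
    	struct io_kiocb *entry;

    	while (*p) {
    		parent = *p;
    		entry = rb_entry(parent, struct io_kiocb, rb_node);
    		if (req->user_data < entry->user_data)
    			p = &(*p)->rb_left;
    		else
    			p = &(*p)->rb_right;
    	}
    	rb_link_node(&req->rb_node, parent, p);
    	rb_insert_color(&req->rb_node, &ctx->cancel_tree);
    }

    /* O(log n) lookup for cancellation, instead of walking a list. */
    static struct io_kiocb *io_poll_find(struct io_ring_ctx *ctx, u64 user_data)
    {
    	struct rb_node *node = ctx->cancel_tree.rb_node;
    	struct io_kiocb *req;

    	while (node) {
    		req = rb_entry(node, struct io_kiocb, rb_node);
    		if (user_data < req->user_data)
    			node = node->rb_left;
    		else if (user_data > req->user_data)
    			node = node->rb_right;
    		else
    			return req;
    	}
    	return NULL;
    }

With a list, cancelling 30k-50k sockets means a linear scan per cancel,
which is where the quadratic-looking sys times above come from; the tree
keeps each lookup logarithmic.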

Signed-off-by: Jens Axboe <axboe@kernel.dk>
fs/io_uring.c