io_uring: limit the number of cancellation buckets
author     Pavel Begunkov <asml.silence@gmail.com>
           Thu, 16 Jun 2022 09:22:05 +0000 (10:22 +0100)
committer  Jens Axboe <axboe@kernel.dk>
           Mon, 25 Jul 2022 00:39:13 +0000 (18:39 -0600)
Don't allocate too many hash/cancellation buckets: clamp the number of
hash bits to 8, i.e. 256 buckets, or 256 * 64B = 16KB of memory. We
don't usually have that many in-flight requests, and 256 buckets should
be enough, especially since the hash is searched only in the
cancellation path.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/b9620c8072ba61a2d50eba894b89bd93a94a9abd.1655371007.git.asml.silence@gmail.com
Reviewed-by: Hao Xu <howeyxu@tencent.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index ac6946e..aafdf13 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -254,12 +254,12 @@ static __cold struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
 
        /*
         * Use 5 bits less than the max cq entries, that should give us around
-        * 32 entries per hash list if totally full and uniformly spread.
+        * 32 entries per hash list if totally full and uniformly spread, but
+        * don't keep too many buckets to not overconsume memory.
         */
-       hash_bits = ilog2(p->cq_entries);
-       hash_bits -= 5;
-       if (hash_bits <= 0)
-               hash_bits = 1;
+       hash_bits = ilog2(p->cq_entries) - 5;
+       hash_bits = clamp(hash_bits, 1, 8);
+
        ctx->cancel_hash_bits = hash_bits;
        ctx->cancel_hash =
                kmalloc((1U << hash_bits) * sizeof(struct io_hash_bucket),