mm: slub: add missing TID bump in kmem_cache_alloc_bulk()
author Jann Horn <jannh@google.com>
Tue, 17 Mar 2020 00:28:45 +0000 (01:28 +0100)
committer Linus Torvalds <torvalds@linux-foundation.org>
Wed, 18 Mar 2020 16:21:51 +0000 (09:21 -0700)
When kmem_cache_alloc_bulk() attempts to allocate N objects from a percpu
freelist of length M, and N > M > 0, it will first remove the M elements
from the percpu freelist, then call ___slab_alloc() to allocate the next
element and repopulate the percpu freelist. ___slab_alloc() can re-enable
IRQs via allocate_slab(), so the TID must be bumped before ___slab_alloc()
to commit the preceding freelist head changes.
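
For readers who don't have the SLUB fastpath paged in, the toy,
single-threaded user-space model below illustrates the invariant at
stake. It is a sketch only: struct kmem_cache_cpu, try_commit() and the
+1 next_tid() here are simplified stand-ins, not the kernel's code,
which uses this_cpu_cmpxchg_double() and TID_STEP increments. The point
it demonstrates: an edit to c->freelist that is not committed by a TID
bump stays invisible to a concurrent optimistic transaction, enabling a
classic ABA.

  #include <stdio.h>

  /*
   * Toy model of SLUB's (freelist, tid) commit protocol; illustrative
   * only.  The real fastpath in mm/slub.c uses
   * this_cpu_cmpxchg_double() on the per-CPU (freelist, tid) pair.
   */
  struct kmem_cache_cpu {
          void *freelist;         /* per-CPU freelist head */
          unsigned long tid;      /* bumped on every committed change */
  };

  static unsigned long next_tid(unsigned long tid)
  {
          return tid + 1;         /* kernel uses TID_STEP; +1 suffices */
  }

  /*
   * Optimistic commit: replace the freelist head only if neither the
   * head nor the TID changed since they were snapshotted.  An edit of
   * c->freelist that is not accompanied by a TID bump is invisible to
   * this check.
   */
  static int try_commit(struct kmem_cache_cpu *c, void *old_head,
                        unsigned long old_tid, void *new_head)
  {
          if (c->freelist != old_head || c->tid != old_tid)
                  return 0;
          c->freelist = new_head;
          c->tid = next_tid(c->tid);
          return 1;
  }

  int main(void)
  {
          int x, y;
          struct kmem_cache_cpu c = { .freelist = &x, .tid = 100 };
          void *old_head = c.freelist;
          unsigned long old_tid = c.tid;

          /*
           * Scenario 1: the freelist is edited WITHOUT a TID bump (as
           * the pre-fix bulk path did) and the head later happens to
           * hold the snapshotted value again: a classic ABA pattern.
           */
          c.freelist = &y;
          c.freelist = &x;
          printf("no bump:   stale commit %s\n",
                 try_commit(&c, old_head, old_tid, &y)
                         ? "succeeds (ABA bug)" : "fails");

          /*
           * Scenario 2: same interleaving, but the first edit bumps the
           * TID, so the stale snapshot can no longer commit.  This is
           * the invariant the fix restores on slowpath entry.
           */
          c = (struct kmem_cache_cpu){ .freelist = &x, .tid = 100 };
          old_head = c.freelist;
          old_tid = c.tid;
          c.freelist = &y;
          c.tid = next_tid(c.tid);
          c.freelist = &x;
          printf("with bump: stale commit %s\n",
                 try_commit(&c, old_head, old_tid, &y)
                         ? "succeeds" : "fails (as intended)");
          return 0;
  }

The toy shows the invariant rather than the exact kernel interleaving;
an unconditional bump on slowpath entry is a single increment and keeps
the invariant independent of whether any fastpath pops preceded it.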

Fix it by unconditionally bumping c->tid when entering the slowpath.

Cc: stable@vger.kernel.org
Fixes: ebe909e0fdb3 ("slub: improve bulk alloc strategy")
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/slub.c

index 17dc00e..eae5bb4 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3175,6 +3175,15 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 
                if (unlikely(!object)) {
                        /*
+                        * We may have removed an object from c->freelist using
+                        * the fastpath in the previous iteration; in that case,
+                        * c->tid has not been bumped yet.
+                        * Since ___slab_alloc() may reenable interrupts while
+                        * allocating memory, we should bump c->tid now.
+                        */
+                       c->tid = next_tid(c->tid);
+
+                       /*
                         * Invoking slow path likely have side-effect
                         * of re-populating per CPU c->freelist
                         */