slub: tidy up initialization ordering
author    Alexander Potapenko <glider@google.com>
          Wed, 6 Sep 2017 23:19:15 +0000 (16:19 -0700)
committer Linus Torvalds <torvalds@linux-foundation.org>
          Thu, 7 Sep 2017 00:27:24 +0000 (17:27 -0700)
 - free_kmem_cache_nodes() frees the cache node before nulling out a
   reference to it

 - init_kmem_cache_nodes() publishes the cache node before initializing
   it

Neither of these matters at runtime, because the cache nodes cannot yet be
looked up by any other thread.  But it's neater and more consistent to
reorder these operations.

Link: http://lkml.kernel.org/r/20170707083408.40410-1-glider@google.com
Signed-off-by: Alexander Potapenko <glider@google.com>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/slub.c

index e8b4e31..3e90d79 100644 (file)
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3358,8 +3358,8 @@ static void free_kmem_cache_nodes(struct kmem_cache *s)
        struct kmem_cache_node *n;
 
        for_each_kmem_cache_node(s, node, n) {
-               kmem_cache_free(kmem_cache_node, n);
                s->node[node] = NULL;
+               kmem_cache_free(kmem_cache_node, n);
        }
 }
 
@@ -3389,8 +3389,8 @@ static int init_kmem_cache_nodes(struct kmem_cache *s)
                        return 0;
                }
 
-               s->node[node] = n;
                init_kmem_cache_node(n);
+               s->node[node] = n;
        }
        return 1;
 }