Fernando found that we hit the regular OFF_SLAB 'recursion' before we
annotate the locks; cure this.

The relevant portion of the stack trace:
> [ 0.000000] [<c085e24f>] rt_spin_lock+0x50/0x56
> [ 0.000000] [<c04fb406>] __cache_free+0x43/0xc3
> [ 0.000000] [<c04fb23f>] kmem_cache_free+0x6c/0xdc
> [ 0.000000] [<c04fb2fe>] slab_destroy+0x4f/0x53
> [ 0.000000] [<c04fb396>] free_block+0x94/0xc1
> [ 0.000000] [<c04fc551>] do_tune_cpucache+0x10b/0x2bb
> [ 0.000000] [<c04fc8dc>] enable_cpucache+0x7b/0xa7
> [ 0.000000] [<c0bd9d3c>] kmem_cache_init_late+0x1f/0x61
> [ 0.000000] [<c0bba687>] start_kernel+0x24c/0x363
> [ 0.000000] [<c0bba0ba>] i386_start_kernel+0xa9/0xaf
Reported-by: Fernando Lopez-Lezcano <nando@ccrma.Stanford.EDU>
Acked-by: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1311888176.2617.379.camel@laptop
Signed-off-by: Ingo Molnar <mingo@elte.hu>
 {
 	struct kmem_cache *cachep;

+	/* Annotate slab for lockdep -- annotate the malloc caches */
+	init_lock_keys();
+
 	/* 6) resize the head arrays to their final sizes */
 	mutex_lock(&cache_chain_mutex);
 	list_for_each_entry(cachep, &cache_chain, next)

 	/* Done! */
 	g_cpucache_up = FULL;

-	/* Annotate slab for lockdep -- annotate the malloc caches */
-	init_lock_keys();
-
 	/*
 	 * Register a cpu startup notifier callback that initializes
 	 * cpu_cache_get for all new cpus