On -rt we were seeing spurious bad page states like:
Bad page state in process 'firefox'
page:c1bc2380 flags:0x40000000 mapping:c1bc2390 mapcount:0 count:0
Trying to fix it up, but a reboot is needed
Backtrace:
Pid: 503, comm: firefox Not tainted 2.6.26.8-rt13 #3
 [<c043d0f3>] ? printk+0x14/0x19
 [<c0272d4e>] bad_page+0x4e/0x79
 [<c0273831>] free_hot_cold_page+0x5b/0x1d3
 [<c02739f6>] free_hot_page+0xf/0x11
 [<c0273a18>] __free_pages+0x20/0x2b
 [<c027d170>] __pte_alloc+0x87/0x91
 [<c027d25e>] handle_mm_fault+0xe4/0x733
 [<c043f680>] ? rt_mutex_down_read_trylock+0x57/0x63
 [<c043f680>] ? rt_mutex_down_read_trylock+0x57/0x63
 [<c0218875>] do_page_fault+0x36f/0x88a
This is the case where a concurrent fault already installed the PTE and
we get to free the newly allocated one.
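To make the race concrete, the allocation path looks roughly like this (a
simplified sketch of __pte_alloc(), paraphrased from the mm/memory.c of that
era rather than quoted verbatim):

int __pte_alloc(struct mm_struct *mm, pmd_t *pmd, unsigned long address)
{
	pgtable_t new = pte_alloc_one(mm, address);	/* runs pgtable_page_ctor() */
	if (!new)
		return -ENOMEM;

	spin_lock(&mm->page_table_lock);
	if (!pmd_present(*pmd)) {	/* did another fault beat us to it? */
		pmd_populate(mm, pmd, new);
		new = NULL;
	}
	spin_unlock(&mm->page_table_lock);

	if (new)			/* lost the race: free the unused page */
		pte_free(mm, new);	/* x86's pte_free() never ran the dtor */
	return 0;
}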
This is due to pgtable_page_ctor() doing spin_lock_init(&page->ptl), where
ptl is overlaid with the {private, mapping} struct:
	union {
		struct {
			unsigned long private;
			struct address_space *mapping;
		};
		spinlock_t ptl;
		struct kmem_cache *slab;
		struct page *first_page;
	};
Normally the spinlock is small enough to not stomp on page->mapping, but
PREEMPT_RT=y has huge 'spin'locks.
But lockdep kernels should also be able to trigger this splat, as the
lock tracking code grows the spinlock to cover page->mapping.
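For reference, with split PTE locks enabled the ctor/dtor pair does roughly
the following (paraphrased from the include/linux/mm.h helpers of that era,
not a verbatim quote):

static inline void pgtable_page_ctor(struct page *page)
{
	spin_lock_init(&page->ptl);	/* scribbles over the {private, mapping} words */
	inc_zone_page_state(page, NR_PAGETABLE);
}

static inline void pgtable_page_dtor(struct page *page)
{
	page->mapping = NULL;		/* pte_lock_deinit(): make mapping look sane again */
	dec_zone_page_state(page, NR_PAGETABLE);
}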
The obvious fix is calling pgtable_page_dtor() like the regular pte free
path __pte_free_tlb() does.
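Concretely, that amounts to something like this for the x86 pte_free() (a
sketch of the intended change, not the literal diff):

static inline void pte_free(struct mm_struct *mm, struct page *pte)
{
	pgtable_page_dtor(pte);		/* clear page->mapping before handing the page back */
	__free_page(pte);
}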
It seems all architectures except x86 and mn10300 already do this, and
mn10300 doesn't seem to use pgtable_page_ctor(), which suggests it either
doesn't do SMP or doesn't use an MMU at all.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Cc: <stable@kernel.org>