xfs: explicitly specify cpu when forcing inodegc delayed work to run immediately
author	Darrick J. Wong <djwong@kernel.org>
Mon, 1 May 2023 23:16:05 +0000 (09:16 +1000)
committer	Dave Chinner <dchinner@redhat.com>
Mon, 1 May 2023 23:16:05 +0000 (09:16 +1000)
I've been noticing odd racing behavior in the inodegc code that could
only be explained by one cpu adding an inode to its inactivation llist
at the same time that another cpu is processing the first cpu's llist.
Preemption is disabled between get_cpu_ptr and put_cpu_ptr, so the only
explanation is scheduler mayhem.  I inserted the following debug code
into xfs_inodegc_worker (see the next patch):

ASSERT(gc->cpu == smp_processor_id());

This assertion tripped during overnight tests on the arm64 machines,
but curiously not on x86_64.  I think we haven't observed any resource
leaks here because the lockless llist code can handle simultaneous
llist_add and llist_del_all calls operating on the same list.  However,
the whole point of having percpu inodegc lists is to take advantage of
warm memory caches by inactivating inodes on the last processor to
touch the inode.
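
For reference, here is a minimal sketch (not the XFS code itself, and
assuming the i_gclist and gc->list fields from the surrounding code) of
the llist pattern that keeps this race benign: llist_add() and
llist_del_all() each reduce to a single atomic operation on the list
head, so a producer and a consumer can safely target the same list
concurrently:

	/* producer, on whichever cpu the scheduler picked: push the
	 * inode's lockless list node onto the percpu list */
	llist_add(&ip->i_gclist, &gc->list);

	/* consumer, possibly on a different cpu: atomically take
	 * every queued node in one exchange of the list head */
	struct llist_node *batch = llist_del_all(&gc->list);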

The incorrect scheduling seems to occur after an inodegc worker is
subjected to mod_delayed_work().  That function wraps
mod_delayed_work_on() with WORK_CPU_UNBOUND specified as the cpu
number.  An unbound work item may be scheduled on any cpu, not
necessarily the one that queued the work.
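
For context, mod_delayed_work() in include/linux/workqueue.h is (as of
the kernel this patch is based on) nothing more than this wrapper:

	static inline bool mod_delayed_work(struct workqueue_struct *wq,
					    struct delayed_work *dwork,
					    unsigned long delay)
	{
		/* WORK_CPU_UNBOUND lets the workqueue pick any cpu */
		return mod_delayed_work_on(WORK_CPU_UNBOUND, wq, dwork,
					   delay);
	}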

Because preemption is disabled for as long as we hold the gc pointer,
I think it's safe to use current_cpu() (aka smp_processor_id()) to
queue the delayed work item on the correct cpu.
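
Condensed, the queueing path then looks like this (a sketch, with most
of xfs_inodegc_queue() elided; the real change is in the diff below):

	gc = get_cpu_ptr(mp->m_inodegc);	/* disables preemption */
	llist_add(&ip->i_gclist, &gc->list);
	/*
	 * Preemption is still disabled here, so current_cpu() is
	 * stable and names the cpu that owns @gc; pinning the worker
	 * to it replaces the old WORK_CPU_UNBOUND behavior.
	 */
	mod_delayed_work_on(current_cpu(), mp->m_inodegc_wq, &gc->work,
			queue_delay);
	put_cpu_ptr(gc);			/* re-enables preemption */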

Fixes: 7cf2b0f9611b ("xfs: bound maximum wait time for inodegc work")
Signed-off-by: Darrick J. Wong <djwong@kernel.org>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
index 351849f..5871211 100644
--- a/fs/xfs/xfs_icache.c
+++ b/fs/xfs/xfs_icache.c
@@ -2069,7 +2069,8 @@ xfs_inodegc_queue(
                queue_delay = 0;
 
        trace_xfs_inodegc_queue(mp, __return_address);
-       mod_delayed_work(mp->m_inodegc_wq, &gc->work, queue_delay);
+       mod_delayed_work_on(current_cpu(), mp->m_inodegc_wq, &gc->work,
+                       queue_delay);
        put_cpu_ptr(gc);
 
        if (xfs_inodegc_want_flush_work(ip, items, shrinker_hits)) {
@@ -2113,7 +2114,8 @@ xfs_inodegc_cpu_dead(
 
        if (xfs_is_inodegc_enabled(mp)) {
                trace_xfs_inodegc_queue(mp, __return_address);
-               mod_delayed_work(mp->m_inodegc_wq, &gc->work, 0);
+               mod_delayed_work_on(current_cpu(), mp->m_inodegc_wq, &gc->work,
+                               0);
        }
        put_cpu_ptr(gc);
 }