mm: count time in drain_all_pages during direct reclaim as memory pressure
authorSuren Baghdasaryan <surenb@google.com>
Tue, 22 Mar 2022 21:43:54 +0000 (14:43 -0700)
committerLinus Torvalds <torvalds@linux-foundation.org>
Tue, 22 Mar 2022 22:57:06 +0000 (15:57 -0700)
When page allocation in the direct reclaim path fails, the system will
make one attempt to shrink per-cpu page lists and free pages from the
highatomic reserves.  Draining per-cpu pages into the buddy allocator
can be a very slow operation because it is done using workqueues and
the task in direct reclaim waits for all of them to finish before
proceeding.  Currently this time is not accounted as psi memory stall.

While testing mobile devices under extreme memory pressure, when
allocations were failing during direct reclaim, we noticed that psi
events which would be expected in such conditions were not triggered.
After profiling these cases it was determined that the reason for the
missing psi events was that a big chunk of the time spent in direct
reclaim was not accounted as memory stall, so psi would not reach the
levels at which an event is generated.  Further investigation revealed
that the bulk of that unaccounted time was spent inside the
drain_all_pages call.

A typical captured case when the drain_all_pages path gets activated:

__alloc_pages_slowpath  took 44,644,613ns
    __perform_reclaim   took    751,668ns (1.7%)
    drain_all_pages     took 43,887,167ns (98.3%)

PSI in this case records the time spent in __perform_reclaim but ignores
drain_all_pages, IOW it misses 98.3% of the time spent in
__alloc_pages_slowpath.
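
For reference, psi only counts a stall while the task sits between a
psi_memstall_enter()/psi_memstall_leave() pair; a minimal sketch of the
bracketing pattern (the stalled_section() function below is illustrative
only, not taken from this patch):

#include <linux/psi.h>

/* Sketch only: bracket the whole stall, not just one step of it. */
static void stalled_section(void)
{
	unsigned long pflags;

	psi_memstall_enter(&pflags);	/* task now counted as stalled on memory */
	/* ... the slow work: reclaim retries, drain_all_pages(), ... */
	psi_memstall_leave(&pflags);	/* elapsed time feeds psi memory pressure */
}

With that pair sitting inside __perform_reclaim, the drain_all_pages
call made from __alloc_pages_direct_reclaim falls outside the window,
so its latency is invisible to psi.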

Annotate __alloc_pages_direct_reclaim in its entirety so that delays
from handling page allocation failure in the direct reclaim path are
accounted as memory stall.

Link: https://lkml.kernel.org/r/20220223194812.1299646-1-surenb@google.com
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reported-by: Tim Murray <timmurray@google.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Minchan Kim <minchan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/page_alloc.c

index 7d24522..30d35f2 100644
@@ -4554,13 +4554,12 @@ __perform_reclaim(gfp_t gfp_mask, unsigned int order,
                                        const struct alloc_context *ac)
 {
        unsigned int noreclaim_flag;
-       unsigned long pflags, progress;
+       unsigned long progress;
 
        cond_resched();
 
        /* We now go into synchronous reclaim */
        cpuset_memory_pressure_bump();
-       psi_memstall_enter(&pflags);
        fs_reclaim_acquire(gfp_mask);
        noreclaim_flag = memalloc_noreclaim_save();
 
@@ -4569,7 +4568,6 @@ __perform_reclaim(gfp_t gfp_mask, unsigned int order,
 
        memalloc_noreclaim_restore(noreclaim_flag);
        fs_reclaim_release(gfp_mask);
-       psi_memstall_leave(&pflags);
 
        cond_resched();
 
@@ -4583,11 +4581,13 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
                unsigned long *did_some_progress)
 {
        struct page *page = NULL;
+       unsigned long pflags;
        bool drained = false;
 
+       psi_memstall_enter(&pflags);
        *did_some_progress = __perform_reclaim(gfp_mask, order, ac);
        if (unlikely(!(*did_some_progress)))
-               return NULL;
+               goto out;
 
 retry:
        page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
@@ -4603,6 +4603,8 @@ retry:
                drained = true;
                goto retry;
        }
+out:
+       psi_memstall_leave(&pflags);
 
        return page;
 }