mm/page_alloc: always remove pages from temporary list
author Mel Gorman <mgorman@techsingularity.net>
Fri, 18 Nov 2022 10:17:13 +0000 (10:17 +0000)
committer Andrew Morton <akpm@linux-foundation.org>
Wed, 30 Nov 2022 23:59:01 +0000 (15:59 -0800)
Patch series "Leave IRQs enabled for per-cpu page allocations", v3.

This patch (of 2):

free_unref_page_list() has neglected to properly remove pages from the
list of pages to free since forever.  It works by coincidence: list_add
happened to do the right thing because pages were only ever added to the
PCP lists.  However, a later patch adds pages to either the PCP list or
the zone list, but properly deletes a page from the input list in only
one of those paths, leading to list corruption and a subsequent failure.
As a preparation patch, always delete a page from one list properly
before adding it to another.  On its own this fixes nothing, and it adds
a fractional amount of overhead, but it is critical to the next patch.
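
The failure mode is easiest to see with list_head semantics in
isolation.  Below is a minimal userspace sketch (not kernel code;
INIT_LIST_HEAD(), list_add() and list_del() are reimplemented here to
mimic <linux/list.h>) showing that list_add() alone rewrites the moved
node's own links but leaves the source list still pointing at it:

#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h)
{
	h->next = h;
	h->prev = h;
}

/* Mimics list_add() from <linux/list.h>: insert new after head. */
static void list_add(struct list_head *new, struct list_head *head)
{
	new->next = head->next;
	new->prev = head;
	head->next->prev = new;
	head->next = new;
}

/* Mimics list_del(): unlink entry from whatever list it is on. */
static void list_del(struct list_head *entry)
{
	entry->prev->next = entry->next;
	entry->next->prev = entry->prev;
}

int main(void)
{
	struct list_head src, dst, node;

	INIT_LIST_HEAD(&src);
	INIT_LIST_HEAD(&dst);
	list_add(&node, &src);		/* node is on src */

	/*
	 * Buggy move: list_add() rewrites node's own links to point
	 * into dst, but src's head still points at node, so walking
	 * src now wanders into dst.
	 */
	list_add(&node, &dst);
	printf("src still references node: %s\n",
	       src.next == &node ? "yes (src is corrupt)" : "no");

	/*
	 * Safe move, as the patch enforces: unlink first.  Starting
	 * again from a clean state:
	 */
	INIT_LIST_HEAD(&src);
	INIT_LIST_HEAD(&dst);
	list_add(&node, &src);
	list_del(&node);		/* heals src's links */
	list_add(&node, &dst);
	printf("src still references node: %s\n",
	       src.next == &node ? "yes (src is corrupt)" : "no (src is intact)");
	return 0;
}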

Link: https://lkml.kernel.org/r/20221118101714.19590-1-mgorman@techsingularity.net
Link: https://lkml.kernel.org/r/20221118101714.19590-2-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Reported-by: Hugh Dickins <hughd@google.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Yu Zhao <yuzhao@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c33b696..ca889fb 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3553,6 +3553,8 @@ void free_unref_page_list(struct list_head *list)
        list_for_each_entry_safe(page, next, list, lru) {
                struct zone *zone = page_zone(page);
 
+               list_del(&page->lru);
+
                /* Different zone, different pcp lock. */
                if (zone != locked_zone) {
                        if (pcp)
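
Hoisting list_del() to the top of the loop means every page is unlinked
from the input list exactly once, before any destination is chosen, so a
later path that adds the page to a PCP list and one that frees it
directly to the zone are both safe.  The single unconditional unlink per
page is the fractional overhead referred to above.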