khugepaged: replace lru_cache_add() with folio_add_lru()
author	Vishal Moola (Oracle) <vishal.moola@gmail.com>
Tue, 1 Nov 2022 17:53:25 +0000 (10:53 -0700)
committer	Andrew Morton <akpm@linux-foundation.org>
Mon, 12 Dec 2022 02:12:13 +0000 (18:12 -0800)
Replace the SetPageUptodate(), page_ref_add(), set_page_dirty(), and
lru_cache_add() calls in collapse_file() with their folio equivalents,
resolving the folio once with page_folio().  This is in preparation for
the removal of lru_cache_add().  This replaces 3 calls to
compound_head() with 1.
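
For context, the win comes from resolving the folio once up front rather
than letting each page-based helper derive it internally.  Below is a
minimal kernel-style sketch of the resulting pattern; finalize_hpage() is
a hypothetical helper used only for illustration, not a function in this
patch, and the comment about hidden lookups relies on the 3-to-1
compound_head() claim above.

	#include <linux/mm.h>		/* page_folio(), folio_mark_dirty() */
	#include <linux/pagemap.h>	/* folio_mark_uptodate() */
	#include <linux/swap.h>		/* folio_add_lru() */
	#include <linux/huge_mm.h>	/* HPAGE_PMD_NR */

	/* Hypothetical helper, illustration only: publish a collapsed huge page. */
	static void finalize_hpage(struct page *hpage, bool is_shmem)
	{
		/*
		 * Resolve the folio once.  Per the commit message, the
		 * page-based wrappers (SetPageUptodate(), set_page_dirty(),
		 * lru_cache_add()) each performed an equivalent
		 * compound_head() lookup on every call.
		 */
		struct folio *folio = page_folio(hpage);

		folio_mark_uptodate(folio);
		folio_ref_add(folio, HPAGE_PMD_NR - 1);

		if (is_shmem)
			folio_mark_dirty(folio);
		folio_add_lru(folio);
	}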

Link: https://lkml.kernel.org/r/20221101175326.13265-5-vishal.moola@gmail.com
Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Miklos Szeredi <mszeredi@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 78ec277..5a7d2d5 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2013,6 +2013,7 @@ xa_unlocked:
 
        if (result == SCAN_SUCCEED) {
                struct page *page, *tmp;
+               struct folio *folio;
 
                /*
                 * Replacing old pages with new one has succeeded, now we
@@ -2040,11 +2041,13 @@ xa_unlocked:
                        index++;
                }
 
-               SetPageUptodate(hpage);
-               page_ref_add(hpage, HPAGE_PMD_NR - 1);
+               folio = page_folio(hpage);
+               folio_mark_uptodate(folio);
+               folio_ref_add(folio, HPAGE_PMD_NR - 1);
+
                if (is_shmem)
-                       set_page_dirty(hpage);
-               lru_cache_add(hpage);
+                       folio_mark_dirty(folio);
+               folio_add_lru(folio);
 
                /*
                 * Remove pte page tables, so we can re-fault the page as huge.