platform/kernel/linux-starfive.git
Matthew Wilcox (Oracle) [Sun, 25 Jul 2021 03:26:14 +0000 (23:26 -0400)]
mm/readahead: Switch to page_cache_ra_order

do_page_cache_ra() was being exposed for the benefit of
do_sync_mmap_readahead().  Switch it over to page_cache_ra_order()
partly because it's a better interface but mostly for the benefit of
the next patch.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
William Kucharski [Sun, 22 Sep 2019 12:43:15 +0000 (08:43 -0400)]
mm/readahead: Align file mappings for non-DAX

When we have the opportunity to use PMDs to map a file, we want to follow
the same rules as DAX.

Signed-off-by: William Kucharski <william.kucharski@oracle.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Matthew Wilcox (Oracle) [Wed, 5 Feb 2020 16:27:01 +0000 (11:27 -0500)]
mm/readahead: Add large folio readahead

Allocate large folios in the readahead code when the filesystem supports
them and it seems worth doing.  The heuristic for choosing which folio
sizes will surely need some tuning, but this aggressive ramp-up has been
good for testing.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Matthew Wilcox (Oracle) [Sat, 30 May 2020 00:54:38 +0000 (20:54 -0400)]
mm: Support arbitrary THP sizes

For code which has not yet been converted from THP to folios, use the
compound size of the page instead of assuming PTE or PMD size.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Matthew Wilcox (Oracle) [Sun, 16 Jan 2022 04:27:08 +0000 (23:27 -0500)]
mm: Make large folios depend on THP

Some parts of the VM still depend on THP to handle large folios
correctly.  Until those are fixed, prevent creating large folios
if THP are disabled.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Matthew Wilcox (Oracle) [Sat, 10 Oct 2020 15:47:55 +0000 (11:47 -0400)]
mm: Fix READ_ONLY_THP warning

These counters only exist if CONFIG_READ_ONLY_THP_FOR_FS is defined,
but we do not need to warn if the filesystem natively supports large
folios.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Matthew Wilcox (Oracle) [Thu, 5 Sep 2019 18:03:12 +0000 (14:03 -0400)]
mm/filemap: Allow large folios to be added to the page cache

We return -EEXIST if there are any non-shadow entries in the page
cache in the range covered by the folio.  If there are multiple
shadow entries in the range, we set *shadowp to one of them (currently
the one at the highest index).  If that turns out to be the wrong
answer, we can implement something more complex.  This is mostly
modelled after the equivalent function in the shmem code.
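
A minimal sketch of that rule, using the XArray conflict iterator
(illustrative, not the exact kernel diff):

    void *entry, *old = NULL;

    xas_for_each_conflict(&xas, entry) {
            old = entry;                    /* remember a shadow entry */
            if (!xa_is_value(entry)) {      /* a real page conflicts */
                    xas_set_err(&xas, -EEXIST);
                    goto unlock;
            }
    }
    if (old && shadowp)
            *shadowp = old;                 /* report one of the shadows */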

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Matthew Wilcox (Oracle) [Fri, 4 Feb 2022 19:13:31 +0000 (14:13 -0500)]
mm: Turn can_split_huge_page() into can_split_folio()

This function already required a head page to be passed, so this
just adds type-safety and removes a few implicit calls to
compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Matthew Wilcox (Oracle) [Tue, 18 Jan 2022 04:35:57 +0000 (23:35 -0500)]
mm/vmscan: Convert pageout() to take a folio

We always write out an entire folio at once.  This conversion removes
a few calls to compound_head() and gets the NR_VMSCAN_WRITE statistic
right when writing out a large folio.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Matthew Wilcox (Oracle) [Tue, 15 Feb 2022 18:44:40 +0000 (13:44 -0500)]
mm/vmscan: Turn page_check_references() into folio_check_references()

This function only has one caller, and it already has a folio.  This
removes a number of calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Matthew Wilcox (Oracle) [Mon, 17 Jan 2022 19:46:55 +0000 (14:46 -0500)]
mm/vmscan: Account large folios correctly

The statistics we gather should count the number of pages, not the
number of folios.  The logic in this function is somewhat convoluted,
but even if we split the folio, I think the accounting is now correct.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Matthew Wilcox (Oracle) [Sat, 15 Aug 2020 23:28:51 +0000 (19:28 -0400)]
mm/vmscan: Optimise shrink_page_list for non-PMD-sized folios

A large folio which is smaller than a PMD does not need to do the extra
work in try_to_unmap() of trying to split a PMD entry.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Matthew Wilcox (Oracle) [Wed, 30 Sep 2020 20:15:46 +0000 (16:15 -0400)]
mm/vmscan: Free non-shmem folios without splitting them

We have to allocate memory in order to split a file-backed folio, so
it's not a good idea to split them in the memory freeing path.  It also
doesn't work for XFS because pages have an extra reference count from
page_has_private() and split_huge_page() expects that reference to have
already been removed.  Unfortunately, we still have to split shmem THPs
because we can't handle swapping out an entire THP yet.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Matthew Wilcox (Oracle) [Sat, 29 Jan 2022 21:16:54 +0000 (16:16 -0500)]
mm/rmap: Constify the rmap_walk_control argument

The rmap walking functions do not modify the rmap_walk_control, and
page_idle_clear_pte_refs() takes advantage of that to move construction
of the rmap_walk_control to compile time.  This lets us remove an
unclean cast.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Matthew Wilcox (Oracle) [Sat, 29 Jan 2022 21:06:53 +0000 (16:06 -0500)]
mm/rmap: Convert rmap_walk() to take a folio

This ripples all the way through to every calling and called function
from rmap.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Matthew Wilcox (Oracle) [Sat, 29 Jan 2022 16:52:52 +0000 (11:52 -0500)]
mm: Turn page_anon_vma() into folio_anon_vma()

Move the prototype from mm.h to mm/internal.h and convert all callers
to pass a folio.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Matthew Wilcox (Oracle) [Wed, 2 Feb 2022 04:33:08 +0000 (23:33 -0500)]
mm/rmap: Turn page_lock_anon_vma_read() into folio_lock_anon_vma_read()

Add back page_lock_anon_vma_read() as a wrapper.  This saves a few calls
to compound_head().  If any callers were passing a tail page before,
this would have failed to lock the anon VMA as page->mapping is not
valid for tail pages.
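
The wrapper itself is a one-liner; a sketch of what "add back as a
wrapper" means here:

    struct anon_vma *page_lock_anon_vma_read(struct page *page)
    {
            return folio_lock_anon_vma_read(page_folio(page));
    }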

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Matthew Wilcox (Oracle) [Sat, 29 Jan 2022 15:46:04 +0000 (10:46 -0500)]
mm/damon: Convert damon_pa_young() to use a folio

Ensure that we're passing the entire folio to rmap_walk().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Matthew Wilcox (Oracle) [Sat, 29 Jan 2022 15:46:04 +0000 (10:46 -0500)]
mm/damon: Convert damon_pa_mkold() to use a folio

Ensure that we're passing the entire folio to rmap_walk().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Matthew Wilcox (Oracle) [Sat, 29 Jan 2022 04:32:59 +0000 (23:32 -0500)]
mm/migrate: Convert remove_migration_ptes() to folios

Convert the implementation and all callers.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Matthew Wilcox (Oracle) [Fri, 28 Jan 2022 21:03:42 +0000 (16:03 -0500)]
mm/rmap: Convert make_device_exclusive_range() to use folios

Move the PageTail check earlier so we can avoid even taking the folio
lock on tail pages.  Otherwise, this is a straightforward use of
folios throughout.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Matthew Wilcox (Oracle) [Fri, 28 Jan 2022 19:29:43 +0000 (14:29 -0500)]
mm/rmap: Convert try_to_migrate() to folios

Convert the callers to pass a folio and the try_to_migrate_one()
worker to use a folio throughout.  Fixes an assumption that a
folio must be <= PMD size.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Matthew Wilcox (Oracle) [Tue, 15 Feb 2022 14:28:49 +0000 (09:28 -0500)]
mm/rmap: Convert try_to_unmap() to take a folio

Change all three callers and the worker function try_to_unmap_one().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Matthew Wilcox (Oracle) [Fri, 21 Jan 2022 15:44:52 +0000 (10:44 -0500)]
mm/huge_memory: Convert __split_huge_pmd() to take a folio

Convert split_huge_pmd_address() at the same time since it only passes
the folio through, and its two callers already have a folio on hand.
Removes numerous calls to compound_head() and removes an assumption
that a page cannot be larger than a PMD.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Matthew Wilcox (Oracle) [Fri, 21 Jan 2022 16:27:31 +0000 (11:27 -0500)]
mm/rmap: Turn page_referenced() into folio_referenced()

Both its callers pass a page which was previously on an LRU list,
so they were passing a folio by definition.  Use the type system to enforce
that and remove a few calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Matthew Wilcox (Oracle) [Tue, 15 Feb 2022 18:33:59 +0000 (13:33 -0500)]
mm/mlock: Add mlock_vma_folio()

Convert mlock_page() into mlock_folio() and convert the callers.  Keep
mlock_vma_page() as a wrapper.
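
A sketch of the wrapper; the vma/compound parameters assume the
signature mlock_vma_page() has at this point in the series:

    static inline void mlock_vma_page(struct page *page,
                    struct vm_area_struct *vma, bool compound)
    {
            mlock_vma_folio(page_folio(page), vma, compound);
    }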

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Matthew Wilcox (Oracle) [Thu, 20 Jan 2022 23:20:07 +0000 (18:20 -0500)]
mm/rmap: Use a folio in page_mkclean_one()

folio_mkclean() already passes down a head page, so convert it
back to a folio.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Matthew Wilcox (Oracle) [Sat, 29 Jan 2022 20:53:59 +0000 (15:53 -0500)]
mm/page_idle: Convert page_idle_clear_pte_refs() to use a folio

The PG_idle and PG_young bits are ignored if they're set on tail
pages, so ensure we're passing a folio around.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Matthew Wilcox (Oracle) [Thu, 3 Feb 2022 16:40:17 +0000 (11:40 -0500)]
mm: Convert page_vma_mapped_walk to work on PFNs

page_mapped_in_vma() really just wants to walk one page, but as the
code stands, if passed the head page of a compound page, it will
walk every page in the compound page.  Extract pfn/nr_pages/pgoff
from the struct page early, so they can be overridden by
page_mapped_in_vma().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Matthew Wilcox (Oracle) [Tue, 15 Feb 2022 22:50:17 +0000 (17:50 -0500)]
sparc32: Add pmd_pfn()

We need to use this function in common code; pull it out of pmd_page().
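
Roughly, the extraction looks like this (SRMMU constants as in the
sparc32 headers; a sketch rather than the exact diff):

    static inline unsigned long pmd_pfn(pmd_t pmd)
    {
            return (pmd_val(pmd) & SRMMU_PTD_PMASK) >> (PAGE_SHIFT - 4);
    }

    #define pmd_page(pmd)   pfn_to_page(pmd_pfn(pmd))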

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Matthew Wilcox (Oracle) [Mon, 14 Feb 2022 15:30:07 +0000 (10:30 -0500)]
powerpc: Add pmd_pfn()

This is straightforward for everything except nohash64 where we
indirect through pmd_page().  There must be a better way to do this.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Matthew Wilcox (Oracle) [Mon, 14 Feb 2022 14:59:11 +0000 (09:59 -0500)]
mips: Make pmd_pfn() available in all configurations

Whether or not the platform supports PMD sized pages, we need to
provide pmd_pfn() for an upcoming patch.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Mike Rapoport [Fri, 4 Feb 2022 18:49:20 +0000 (13:49 -0500)]
arch: Add pmd_pfn() where it is missing

We need to use this function in common code, so define it for
architectures and/or configurations that miss it.  The result of
pmd_pfn() will only be used if TRANSPARENT_HUGEPAGE is enabled,
but a function or macro called pmd_pfn() must be defined, even
on machines with two level page tables.
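
On a typical two-level architecture the fallback is just a shift out
of the pmd value; illustrative only, since the exact masking differs
per architecture:

    #define pmd_pfn(pmd)    (pmd_val(pmd) >> PAGE_SHIFT)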

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Matthew Wilcox (Oracle) [Thu, 3 Feb 2022 14:06:08 +0000 (09:06 -0500)]
mm: Add DEFINE_PAGE_VMA_WALK and DEFINE_FOLIO_VMA_WALK

Instead of declaring a struct page_vma_mapped_walk directly,
use these helpers to allow us to transition to a PFN approach in the
following patches.
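
A sketch of the two initialisers (field names per the existing
struct page_vma_mapped_walk):

    #define DEFINE_PAGE_VMA_WALK(name, _page, _vma, _address, _flags) \
            struct page_vma_mapped_walk name = {                      \
                    .page = _page,                                    \
                    .vma = _vma,                                      \
                    .address = _address,                              \
                    .flags = _flags,                                  \
            }

    #define DEFINE_FOLIO_VMA_WALK(name, _folio, _vma, _address, _flags) \
            struct page_vma_mapped_walk name = {                         \
                    .page = &(_folio)->page,                             \
                    .vma = _vma,                                         \
                    .address = _address,                                 \
                    .flags = _flags,                                     \
            }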

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Matthew Wilcox (Oracle) [Thu, 3 Feb 2022 04:29:45 +0000 (23:29 -0500)]
mm: Add folio_pgoff()

This is the folio equivalent of page_to_pgoff().
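
A sketch; like page_to_pgoff(), hugetlb folios report their index in
units of base pages:

    static inline pgoff_t folio_pgoff(struct folio *folio)
    {
            if (unlikely(folio_test_hugetlb(folio)))
                    return hugetlb_basepage_index(&folio->page);
            return folio->index;
    }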

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Matthew Wilcox (Oracle) [Tue, 18 Jan 2022 13:56:47 +0000 (08:56 -0500)]
mm: Add split_folio_to_list()

This is a convenience function; split_huge_page_to_list() can take
any page in a folio (and does so on purpose because that page will
be the one which keeps the refcount).  But it's convenient for the
callers to pass the folio instead of the first page in the folio.
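
A sketch of the convenience wrapper:

    static inline int split_folio_to_list(struct folio *folio,
                    struct list_head *list)
    {
            return split_huge_page_to_list(&folio->page, list);
    }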

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Matthew Wilcox (Oracle) [Mon, 17 Jan 2022 21:33:26 +0000 (16:33 -0500)]
mm: Add folio_mapcount()

This implements the same algorithm as total_mapcount(), which is
transformed into a wrapper function.
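
The wrapper side of the transformation is a one-liner; sketch:

    static inline int total_mapcount(struct page *page)
    {
            return folio_mapcount(page_folio(page));
    }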

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Matthew Wilcox (Oracle) [Tue, 18 Jan 2022 15:50:48 +0000 (10:50 -0500)]
mm: Turn head_compound_mapcount() into folio_entire_mapcount()

Adjust documentation to be more clear.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Matthew Wilcox (Oracle) [Mon, 17 Jan 2022 19:35:22 +0000 (14:35 -0500)]
mm/vmscan: Turn page_check_dirty_writeback() into folio_check_dirty_writeback()

Saves a few calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Matthew Wilcox (Oracle) [Sun, 13 Feb 2022 22:23:58 +0000 (17:23 -0500)]
fs: Move many prototypes to pagemap.h

These functions are page cache functionality and don't need to be
declared in fs.h.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Matthew Wilcox (Oracle) [Sun, 13 Feb 2022 22:22:10 +0000 (17:22 -0500)]
mm/truncate: Combine invalidate_mapping_pagevec() and __invalidate_mapping_pages()

We can save a function call by combining these two functions, which
are identical except for the return value.  Also move the prototype
to mm/internal.h.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Matthew Wilcox (Oracle) [Sun, 13 Feb 2022 21:40:24 +0000 (16:40 -0500)]
mm: Turn deactivate_file_page() into deactivate_file_folio()

This function has one caller which already has a reference to the
page, so we don't need to use get_page_unless_zero().  Also move the
prototype to mm/internal.h.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Matthew Wilcox (Oracle) [Sun, 13 Feb 2022 21:38:07 +0000 (16:38 -0500)]
mm/truncate: Convert __invalidate_mapping_pages() to use a folio

Now we can call mapping_evict_folio() instead of invalidate_inode_page()
and save a few calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Matthew Wilcox (Oracle) [Sun, 13 Feb 2022 20:22:28 +0000 (15:22 -0500)]
mm/truncate: Split invalidate_inode_page() into mapping_evict_folio()

Some of the callers already have the address_space and can avoid calling
folio_mapping() and checking if the folio was already truncated.  Also
add kernel-doc and fix the return type (in case we ever support folios
larger than 4TB).

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Matthew Wilcox (Oracle) [Sun, 13 Feb 2022 03:48:55 +0000 (22:48 -0500)]
mm: Convert remove_mapping() to take a folio

Add kernel-doc and return the number of pages removed in order to
get the statistics right in __invalidate_mapping_pages().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Matthew Wilcox (Oracle) [Sat, 12 Feb 2022 22:43:16 +0000 (17:43 -0500)]
mm/truncate: Replace page_mapped() call in invalidate_inode_page()

folio_mapped() is expensive because it has to check each page's mapcount
field.  A cheaper check is whether there are any extra references to
the page, other than the one we own, one from the page private data and
the ones held by the page cache.

The call to remove_mapping() will fail in any case if it cannot freeze
the refcount, but failing here avoids cycling the i_pages spinlock.
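
A sketch of the comparison described above (one reference per page,
one for private data, plus the one we own):

    if (folio_ref_count(folio) >
                    folio_nr_pages(folio) + folio_has_private(folio) + 1)
            return 0;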

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Matthew Wilcox (Oracle) [Sat, 12 Feb 2022 22:39:10 +0000 (17:39 -0500)]
mm/truncate: Convert invalidate_inode_page() to use a folio

This saves a number of calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Matthew Wilcox (Oracle) [Sat, 12 Feb 2022 20:27:42 +0000 (15:27 -0500)]
mm/truncate: Inline invalidate_complete_page() into its one caller

invalidate_inode_page() is the only caller of invalidate_complete_page()
and inlining it reveals that the first check is unnecessary (because we
hold the page locked, and we just retrieved the mapping from the page).
Actually, it does make a difference, in that tail pages no longer fail
at this check, so it's now possible to remove a tail page from a mapping.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Matthew Wilcox (Oracle) [Sat, 12 Feb 2022 04:39:03 +0000 (23:39 -0500)]
splice: Use a folio in page_cache_pipe_buf_try_steal()

This saves a lot of calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Matthew Wilcox (Oracle) [Thu, 23 Dec 2021 21:50:23 +0000 (16:50 -0500)]
mm/vmscan: Convert __remove_mapping() to take a folio

This removes a few hidden calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Matthew Wilcox (Oracle) [Fri, 21 Jan 2022 13:41:46 +0000 (08:41 -0500)]
mm: Turn putback_lru_page() into folio_putback_lru()

Add a putback_lru_page() wrapper.  Removes a couple of compound_head()
calls.
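
A sketch of the wrapper:

    void putback_lru_page(struct page *page)
    {
            folio_putback_lru(page_folio(page));
    }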

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Matthew Wilcox (Oracle) [Mon, 17 Jan 2022 19:40:12 +0000 (14:40 -0500)]
mm: Add lru_to_folio()

Since page->lru occupies the same bytes as compound_head, any page
on the LRU list must be a folio.
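
Analogous to lru_to_page(); a sketch:

    #define lru_to_folio(head) (list_entry((head)->prev, struct folio, lru))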

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Matthew Wilcox (Oracle) [Tue, 28 Dec 2021 02:11:34 +0000 (21:11 -0500)]
mm/memcg: Convert mem_cgroup_swapout() to take a folio

This removes an assumption that THPs are the only kind of compound
pages and removes a couple of hidden calls to compound_head.  It
also documents that you can't pass a tail page to mem_cgroup_swapout().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Matthew Wilcox (Oracle) [Thu, 23 Dec 2021 21:39:05 +0000 (16:39 -0500)]
mm/workingset: Convert workingset_eviction() to take a folio

This removes an assumption that THPs are the only kind of compound
pages and removes a few hidden calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Matthew Wilcox (Oracle) [Thu, 17 Feb 2022 17:46:35 +0000 (12:46 -0500)]
mm/gup: Convert check_and_migrate_movable_pages() to use a folio

Switch from head pages to folios.  This removes an assumption that
THPs are the only way to have a high-order page.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Matthew Wilcox (Oracle) [Mon, 21 Mar 2022 16:57:38 +0000 (12:57 -0400)]
mm: Add three folio wrappers

folio_is_zone_device() is equivalent to is_zone_device_page(),
folio_is_device_private() is equivalent to is_device_private_page(),
and folio_is_pinnable() is equivalent to is_pinnable_page().

All of these tests return the same result for every page in the folio,
so we can just pass the head page of the folio to the page variant of
the function.
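
The same pattern serves all three; sketched for one of them:

    static inline bool folio_is_zone_device(const struct folio *folio)
    {
            return is_zone_device_page(&folio->page);
    }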

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Matthew Wilcox (Oracle) [Fri, 24 Dec 2021 18:26:22 +0000 (13:26 -0500)]
mm: Turn isolate_lru_page() into folio_isolate_lru()

Add isolate_lru_page() as a wrapper around isolate_lru_folio().
TestClearPageLRU() would have always failed on a tail page, so
returning -EBUSY is the same behaviour.
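
A sketch of the wrapper, with the tail-page case kept as an explicit
warning rather than a silent failure:

    int isolate_lru_page(struct page *page)
    {
            if (WARN_RATELIMIT(PageTail(page),
                               "trying to isolate tail page"))
                    return -EBUSY;
            return folio_isolate_lru(page_folio(page));
    }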

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Matthew Wilcox (Oracle) [Thu, 23 Dec 2021 15:20:12 +0000 (10:20 -0500)]
mm/gup: Turn compound_range_next() into gup_folio_range_next()

Convert the only caller to work on folios instead of pages.
This removes the last caller of put_compound_head(), so delete it.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Matthew Wilcox (Oracle) [Thu, 23 Dec 2021 04:43:16 +0000 (23:43 -0500)]
mm/gup: Turn compound_next() into gup_folio_next()

Convert both callers to work on folios instead of pages.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Matthew Wilcox (Oracle) [Thu, 23 Dec 2021 03:30:29 +0000 (22:30 -0500)]
mm/gup: Convert gup_huge_pgd() to use a folio

Use the new folio-based APIs.  This was the last user of
try_grab_compound_head(), so remove it.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Matthew Wilcox (Oracle) [Wed, 22 Dec 2021 23:07:47 +0000 (18:07 -0500)]
mm/gup: Convert gup_huge_pud() to use a folio

Use the new folio-based APIs.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Matthew Wilcox (Oracle) [Wed, 22 Dec 2021 21:57:23 +0000 (16:57 -0500)]
mm/gup: Convert gup_huge_pmd() to use a folio

Use the new folio-based APIs.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Matthew Wilcox (Oracle) [Wed, 22 Dec 2021 21:38:30 +0000 (16:38 -0500)]
mm/gup: Convert gup_hugepte() to use a folio

There should be little to no effect from this patch; just removing
uses of some old APIs.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Matthew Wilcox (Oracle) [Fri, 10 Dec 2021 20:54:11 +0000 (15:54 -0500)]
mm/gup: Convert gup_pte_range() to use a folio

We still call try_grab_folio() once per PTE; a future patch could
optimise to just adjust the reference count for each page within
the folio.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Matthew Wilcox (Oracle) [Sat, 8 Jan 2022 05:15:04 +0000 (00:15 -0500)]
mm/hugetlb: Use try_grab_folio() instead of try_grab_compound_head()

follow_hugetlb_page() only cares about success or failure, so it doesn't
need to know the type of the returned pointer, only whether it's NULL
or not.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Matthew Wilcox (Oracle) [Fri, 10 Dec 2021 20:39:04 +0000 (15:39 -0500)]
mm/gup: Add gup_put_folio()

Convert put_compound_head() to gup_put_folio() and hpage_pincount_sub()
to folio_pincount_sub().  This removes the last call to put_page_refs(),
so delete it.  Add a temporary put_compound_head() wrapper which will
be deleted by the end of this series.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Matthew Wilcox (Oracle) [Wed, 29 Dec 2021 17:23:55 +0000 (12:23 -0500)]
mm: Remove page_cache_add_speculative() and page_cache_get_speculative()

These wrappers have no more callers, so delete them.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Matthew Wilcox (Oracle) [Fri, 4 Feb 2022 15:32:01 +0000 (10:32 -0500)]
mm/gup: Convert try_grab_page() to use a folio

Hoist the folio conversion and the folio_ref_count() check to the
top of the function instead of using the one buried in try_get_page().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Matthew Wilcox (Oracle) [Fri, 4 Feb 2022 15:27:40 +0000 (10:27 -0500)]
mm/gup: Add try_get_folio() and try_grab_folio()

Convert try_get_compound_head() into try_get_folio() and convert
try_grab_compound_head() into try_grab_folio().  Add a temporary
try_grab_compound_head() wrapper around try_grab_folio() to let us
convert callers individually.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Matthew Wilcox (Oracle) [Mon, 27 Dec 2021 23:40:41 +0000 (18:40 -0500)]
mm: Turn page_maybe_dma_pinned() into folio_maybe_dma_pinned()

Replace three calls to compound_head() with one.  This removes the last
user of compound_pincount(), so remove that helper too.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Matthew Wilcox (Oracle) [Mon, 27 Dec 2021 23:28:58 +0000 (18:28 -0500)]
mm: Add folio_pincount_ptr()

This is the folio equivalent of compound_pincount_ptr().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Matthew Wilcox (Oracle) [Thu, 6 Jan 2022 21:46:43 +0000 (16:46 -0500)]
mm: Make compound_pincount always available

Move compound_pincount from the third page to the second page, which
means it's available for all compound pages.  That lets us delete
hpage_pincount_available().

On 32-bit systems, there isn't enough space for both compound_pincount
and compound_nr in the second page (it would collide with page->private,
which is in use for pages in the swap cache), so revert the optimisation
of storing both compound_order and compound_nr on 32-bit systems.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Matthew Wilcox (Oracle) [Fri, 7 Jan 2022 19:19:39 +0000 (14:19 -0500)]
mm/gup: Remove hpage_pincount_sub()

Move the assertion (and correct it to be a cheaper variant),
and inline the atomic_sub() operation.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Matthew Wilcox (Oracle) [Fri, 7 Jan 2022 19:15:11 +0000 (14:15 -0500)]
mm/gup: Remove hpage_pincount_add()

It's clearer to call atomic_add() in the callers; the assertions clearly
can't fire there because they're part of the condition for calling
atomic_add().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Matthew Wilcox (Oracle) [Fri, 7 Jan 2022 19:04:55 +0000 (14:04 -0500)]
mm/gup: Handle page split race more efficiently

If we hit the page split race, the current code returns NULL which will
presumably trigger a retry under the mmap_lock.  This isn't necessary;
we can just retry the compound_head() lookup.  This is a very minor
optimisation of an unlikely path, but conceptually it matches (eg)
the page cache RCU-protected lookup.
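
The retry pattern, sketched in the folio terms this series adopts:

    retry:
            folio = page_folio(page);
            if (unlikely(!folio_ref_try_add_rcu(folio, refs)))
                    return NULL;
            /* The page may have been split under us: recheck and retry */
            if (unlikely(page_folio(page) != folio)) {
                    folio_put_refs(folio, refs);
                    goto retry;
            }
            return folio;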

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Matthew Wilcox (Oracle) [Fri, 7 Jan 2022 18:45:25 +0000 (13:45 -0500)]
mm/gup: Remove an assumption of a contiguous memmap

This assumption needs the inverse of nth_page(), which is temporarily
named page_nth() until it's renamed later in this series.
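
A sketch of the helper, sitting next to nth_page() in mm.h:

    #if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
    #define page_nth(head, tail)    (page_to_pfn(tail) - page_to_pfn(head))
    #else
    #define page_nth(head, tail)    ((tail) - (head))
    #endif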

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Matthew Wilcox (Oracle) [Fri, 7 Jan 2022 18:25:55 +0000 (13:25 -0500)]
mm/gup: Fix some contiguous memmap assumptions

Several functions in gup.c assume that a compound page has virtually
contiguous page structs.  This isn't true for SPARSEMEM configs unless
SPARSEMEM_VMEMMAP is also set.  Fix them by using nth_page() instead of
plain pointer arithmetic.
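
A representative hunk (approximate), from the gup_huge_pmd() path:

    -       page = pmd_page(orig) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
    +       page = nth_page(pmd_page(orig), (addr & ~PMD_MASK) >> PAGE_SHIFT);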

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Matthew Wilcox (Oracle) [Mon, 10 Jan 2022 02:03:47 +0000 (21:03 -0500)]
mm/gup: Change the calling convention for compound_next()

Return the head page instead of storing it to a passed parameter.
Reorder the arguments to match the calling function's arguments.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Matthew Wilcox (Oracle) [Sun, 9 Jan 2022 21:21:23 +0000 (16:21 -0500)]
mm/gup: Optimise compound_range_next()

By definition, a compound page has an order >= 1, so the second half
of the test was redundant.  Also, this cannot be a tail page since
it's the result of calling compound_head(), so use PageHead() instead
of PageCompound().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Matthew Wilcox (Oracle) [Sun, 9 Jan 2022 21:05:11 +0000 (16:05 -0500)]
mm/gup: Change the calling convention for compound_range_next()

Return the head page instead of storing it to a passed parameter.
Pass the start page directly instead of passing a pointer to it.
Reorder the arguments to match the calling function's arguments.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Matthew Wilcox (Oracle) [Sun, 9 Jan 2022 01:23:46 +0000 (20:23 -0500)]
mm/gup: Remove for_each_compound_head()

This macro doesn't simplify the users; it's easier to just call
compound_next() inside a standard loop.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Matthew Wilcox (Oracle) [Sun, 9 Jan 2022 01:23:46 +0000 (20:23 -0500)]
mm/gup: Remove for_each_compound_range()

This macro doesn't simplify the users; it's easier to just call
compound_range_next() inside the loop.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Matthew Wilcox (Oracle) [Fri, 4 Feb 2022 14:24:26 +0000 (09:24 -0500)]
mm/gup: Increment the page refcount before the pincount

We should always increase the refcount before doing anything else to
the page so that other page users see the elevated refcount first.
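
The resulting order in try_grab_compound_head(), roughly:

    page = try_get_compound_head(page, refs);       /* refcount first */
    if (!page)
            return NULL;
    if (hpage_pincount_available(page))
            hpage_pincount_add(page, refs);         /* then the pincount */
    else
            page_ref_add(page, refs * (GUP_PIN_COUNTING_BIAS - 1));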

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Christoph Hellwig [Wed, 16 Feb 2022 04:31:38 +0000 (15:31 +1100)]
mm: build migrate_vma_* for all configs with ZONE_DEVICE support

This code will be used for device coherent memory as well in a bit,
so relax the ifdef a bit.

Link: https://lkml.kernel.org/r/20220210072828.2930359-15-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Alistair Popple <apopple@nvidia.com>
Tested-by: "Sierra Guiza, Alejandro (Alex)" <alex.sierra@amd.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Chaitanya Kulkarni <kch@nvidia.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Felix Kuehling <Felix.Kuehling@amd.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Karol Herbst <kherbst@redhat.com>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: "Pan, Xinhui" <Xinhui.Pan@amd.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Christoph Hellwig [Wed, 16 Feb 2022 04:31:38 +0000 (15:31 +1100)]
mm: move the migrate_vma_* device migration code into its own file

Split the code used to migrate to and from ZONE_DEVICE memory from
migrate.c into a new file.

Link: https://lkml.kernel.org/r/20220210072828.2930359-14-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: "Sierra Guiza, Alejandro (Alex)" <alex.sierra@amd.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Chaitanya Kulkarni <kch@nvidia.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Felix Kuehling <Felix.Kuehling@amd.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Karol Herbst <kherbst@redhat.com>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: "Pan, Xinhui" <Xinhui.Pan@amd.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Christoph Hellwig [Wed, 16 Feb 2022 04:31:37 +0000 (15:31 +1100)]
mm: refactor the ZONE_DEVICE handling in migrate_vma_pages

Make the flow a little more clear and prepare for adding a new
ZONE_DEVICE memory type.

Link: https://lkml.kernel.org/r/20220210072828.2930359-13-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Alistair Popple <apopple@nvidia.com>
Tested-by: "Sierra Guiza, Alejandro (Alex)" <alex.sierra@amd.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Chaitanya Kulkarni <kch@nvidia.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Felix Kuehling <Felix.Kuehling@amd.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Karol Herbst <kherbst@redhat.com>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: "Pan, Xinhui" <Xinhui.Pan@amd.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Christoph Hellwig [Wed, 16 Feb 2022 04:31:37 +0000 (15:31 +1100)]
mm: refactor the ZONE_DEVICE handling in migrate_vma_insert_page

Make the flow a little more clear and prepare for adding a new
ZONE_DEVICE memory type.

Link: https://lkml.kernel.org/r/20220210072828.2930359-12-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Alistair Popple <apopple@nvidia.com>
Tested-by: "Sierra Guiza, Alejandro (Alex)" <alex.sierra@amd.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Chaitanya Kulkarni <kch@nvidia.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Felix Kuehling <Felix.Kuehling@amd.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Karol Herbst <kherbst@redhat.com>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: "Pan, Xinhui" <Xinhui.Pan@amd.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Christoph Hellwig [Wed, 16 Feb 2022 04:31:37 +0000 (15:31 +1100)]
mm: refactor check_and_migrate_movable_pages

Remove up to two levels of indentation by using continue statements
and move variables to local scope where possible.

Link: https://lkml.kernel.org/r/20220210072828.2930359-11-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: "Sierra Guiza, Alejandro (Alex)" <alex.sierra@amd.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Chaitanya Kulkarni <kch@nvidia.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Felix Kuehling <Felix.Kuehling@amd.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Karol Herbst <kherbst@redhat.com>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: "Pan, Xinhui" <Xinhui.Pan@amd.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Christoph Hellwig [Wed, 16 Feb 2022 04:31:37 +0000 (15:31 +1100)]
mm: generalize the pgmap based page_free infrastructure

Key off on the existence of ->page_free to prepare for adding support for
more pgmap types that are device managed and thus need the free callback.

Link: https://lkml.kernel.org/r/20220210072828.2930359-10-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: "Sierra Guiza, Alejandro (Alex)" <alex.sierra@amd.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Chaitanya Kulkarni <kch@nvidia.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Felix Kuehling <Felix.Kuehling@amd.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Karol Herbst <kherbst@redhat.com>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: "Pan, Xinhui" <Xinhui.Pan@amd.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Christoph Hellwig [Wed, 16 Feb 2022 04:31:37 +0000 (15:31 +1100)]
fsdax: depend on ZONE_DEVICE || FS_DAX_LIMITED

Add a Kconfig dependency on ZONE_DEVICE support or the s390-specific limited DAX
support, as one of the two is required at runtime for fsdax code to
actually work.

Link: https://lkml.kernel.org/r/20220210072828.2930359-9-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Tested-by: "Sierra Guiza, Alejandro (Alex)" <alex.sierra@amd.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Chaitanya Kulkarni <kch@nvidia.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Felix Kuehling <Felix.Kuehling@amd.com>
Cc: Karol Herbst <kherbst@redhat.com>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: "Pan, Xinhui" <Xinhui.Pan@amd.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Christoph Hellwig [Wed, 16 Feb 2022 04:31:36 +0000 (15:31 +1100)]
mm: remove the extra ZONE_DEVICE struct page refcount

ZONE_DEVICE struct pages have an extra reference count that complicates
the code for put_page() and several places in the kernel that need to
check the reference count to see that a page is not being used (gup,
compaction, migration, etc.). Clean up the code so the reference count
doesn't need to be treated specially for ZONE_DEVICE pages.

Note that this excludes the special idle page wakeup for fsdax pages,
which still happens at refcount 1.  This is a separate issue and will
be sorted out later.  Given that only fsdax pages require the
notification when the refcount hits 1 now, the PAGEMAP_OPS Kconfig
symbol can go away and be replaced with a FS_DAX check for this hook
in the put_page fastpath.

Based on an earlier patch from Ralph Campbell <rcampbell@nvidia.com>.

Link: https://lkml.kernel.org/r/20220210072828.2930359-8-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Felix Kuehling <Felix.Kuehling@amd.com>
Tested-by: "Sierra Guiza, Alejandro (Alex)" <alex.sierra@amd.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Chaitanya Kulkarni <kch@nvidia.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Karol Herbst <kherbst@redhat.com>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: "Pan, Xinhui" <Xinhui.Pan@amd.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Christoph Hellwig [Wed, 16 Feb 2022 04:31:36 +0000 (15:31 +1100)]
mm: don't include <linux/memremap.h> in <linux/mm.h>

Move the check for the actual pgmap types that need the free at refcount
one behavior into the out of line helper, and thus avoid the need to
pull memremap.h into mm.h.

Link: https://lkml.kernel.org/r/20220210072828.2930359-7-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Felix Kuehling <Felix.Kuehling@amd.com>
Tested-by: "Sierra Guiza, Alejandro (Alex)" <alex.sierra@amd.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Chaitanya Kulkarni <kch@nvidia.com>
Cc: Karol Herbst <kherbst@redhat.com>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: "Pan, Xinhui" <Xinhui.Pan@amd.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Christoph Hellwig [Wed, 16 Feb 2022 04:31:35 +0000 (15:31 +1100)]
mm: simplify freeing of devmap managed pages

Make put_devmap_managed_page return if it took charge of the page
or not and remove the separate page_is_devmap_managed helper.
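
Callers can then be written as a simple early-out; a sketch of the
new shape in put_page():

    if (put_devmap_managed_page(page))
            return;         /* the devmap code took charge of the page */
    /* otherwise fall through to the normal folio_put() path */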

Link: https://lkml.kernel.org/r/20220210072828.2930359-6-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Tested-by: "Sierra Guiza, Alejandro (Alex)" <alex.sierra@amd.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Felix Kuehling <Felix.Kuehling@amd.com>
Cc: Karol Herbst <kherbst@redhat.com>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: "Pan, Xinhui" <Xinhui.Pan@amd.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Christoph Hellwig [Wed, 16 Feb 2022 04:31:35 +0000 (15:31 +1100)]
mm: move free_devmap_managed_page to memremap.c

free_devmap_managed_page has nothing to do with the code in swap.c, so
move it to live with the rest of the code for devmap handling.

Link: https://lkml.kernel.org/r/20220210072828.2930359-5-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Tested-by: "Sierra Guiza, Alejandro (Alex)" <alex.sierra@amd.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Felix Kuehling <Felix.Kuehling@amd.com>
Cc: Karol Herbst <kherbst@redhat.com>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: "Pan, Xinhui" <Xinhui.Pan@amd.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Christoph Hellwig [Wed, 16 Feb 2022 04:31:35 +0000 (15:31 +1100)]
mm: remove pointless includes from <linux/hmm.h>

hmm.h pulls in the world for no good reason at all.  Remove the
includes and push a few of them into the users instead.

Link: https://lkml.kernel.org/r/20220210072828.2930359-4-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
Tested-by: "Sierra Guiza, Alejandro (Alex)" <alex.sierra@amd.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Felix Kuehling <Felix.Kuehling@amd.com>
Cc: Karol Herbst <kherbst@redhat.com>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: "Pan, Xinhui" <Xinhui.Pan@amd.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Christoph Hellwig [Wed, 16 Feb 2022 04:31:35 +0000 (15:31 +1100)]
mm: remove the __KERNEL__ guard from <linux/mm.h>

__KERNEL__ ifdefs don't make sense outside of include/uapi/.

Link: https://lkml.kernel.org/r/20220210072828.2930359-3-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Tested-by: "Sierra Guiza, Alejandro (Alex)" <alex.sierra@amd.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Felix Kuehling <Felix.Kuehling@amd.com>
Cc: Karol Herbst <kherbst@redhat.com>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: "Pan, Xinhui" <Xinhui.Pan@amd.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Christoph Hellwig [Wed, 16 Feb 2022 04:31:35 +0000 (15:31 +1100)]
mm: remove a pointless CONFIG_ZONE_DEVICE check in memremap_pages

Patch series "start sorting out the ZONE_DEVICE refcount mess", v2.

This series removes the offset by one refcount for ZONE_DEVICE pages that
are freed back to the driver owning them, which is just device private
ones for now, but also the planned device coherent pages and the enhanced
p2p ones pending.

It does not address the fsdax pages yet, which will be attacked in a
follow on series.

This patch (of 27):

memremap.c is only built when CONFIG_ZONE_DEVICE is set, so remove
the superfluous extra check.

Link: https://lkml.kernel.org/r/20220210072828.2930359-1-hch@lst.de
Link: https://lkml.kernel.org/r/20220210072828.2930359-2-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Tested-by: "Sierra Guiza, Alejandro (Alex)" <alex.sierra@amd.com>
Cc: Felix Kuehling <Felix.Kuehling@amd.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: "Pan, Xinhui" <Xinhui.Pan@amd.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Karol Herbst <kherbst@redhat.com>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Hugh Dickins [Thu, 3 Mar 2022 01:35:30 +0000 (17:35 -0800)]
mm/munlock: mlock_vma_page() check against VM_SPECIAL

Although mmap_region() and mlock_fixup() take care that VM_LOCKED
is never left set on a VM_SPECIAL vma, there is an interval while
file->f_op->mmap() is using vm_insert_page(s), when VM_LOCKED may
still be set while VM_SPECIAL bits are added: so mlock_vma_page()
should ignore VM_LOCKED while any VM_SPECIAL bits are set.

This showed up as a "Bad page" still mlocked, when vfree()ing pages
which had been vm_inserted by remap_vmalloc_range_partial(): while
release_pages() and __page_cache_release(), and so put_page(), catch
pages still mlocked when freeing (and clear_page_mlock() caught them
when unmapping), the vfree() path is unprepared for them: fix it?
but these pages should not have been mlocked in the first place.

I assume that an mlockall(MCL_FUTURE) had been done in the past; or
maybe the user got to specify MAP_LOCKED on a vmalloc'ing driver mmap.

Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Hugh Dickins [Tue, 15 Feb 2022 02:42:33 +0000 (18:42 -0800)]
mm/thp: shrink_page_list() avoid splitting VM_LOCKED THP

4.8 commit 7751b2da6be0 ("vmscan: split file huge pages before paging
them out") inserted a split_huge_page_to_list() into shrink_page_list()
without considering the mlock case: no problem if the page has already
been marked as Mlocked (the !page_evictable check much higher up will
have skipped all this), but it has always been the case that races or
omissions in setting Mlocked can rely on page reclaim to detect this
and correct it before actually reclaiming - and that remains so, but
what a shame if a hugepage is needlessly split before discovering it.

It is surprising that page_check_references() returns PAGEREF_RECLAIM
when VM_LOCKED, but there was a good reason for that: try_to_unmap_one()
is where the condition is detected and corrected; and until now it could
not be done in page_referenced_one(), because that does not always have
the page locked.  Now that mlock's requirement for page lock has gone,
copy try_to_unmap_one()'s mlock restoration into page_referenced_one(),
and let page_check_references() return PAGEREF_ACTIVATE in this case.

But page_referenced_one() may find a pte mapping one part of a hugepage:
what hold should a pte mapped in a VM_LOCKED area exert over the entire
huge page?  That's debatable.  The approach taken here is to treat that
pte mapping in page_referenced_one() as if not VM_LOCKED, and if no
VM_LOCKED pmd mapping is found later in the walk, and lack of reference
permits, then PAGEREF_RECLAIM take it to attempted splitting as before.

Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Hugh Dickins [Tue, 15 Feb 2022 02:40:55 +0000 (18:40 -0800)]
mm/thp: collapse_file() do try_to_unmap(TTU_BATCH_FLUSH)

collapse_file() is using unmap_mapping_pages(1) on each small page found
mapped, unlike others (reclaim, migration, splitting, memory-failure) who
use try_to_unmap().  There are four advantages to try_to_unmap(): first,
its TTU_IGNORE_MLOCK option now avoids leaving mlocked page in pagevec;
second, its vma lookup uses i_mmap_lock_read() not i_mmap_lock_write();
third, it breaks out early if page is not mapped everywhere it might be;
fourth, its TTU_BATCH_FLUSH option can be used, as in page reclaim, to
save up all the TLB flushing until all of the pages have been unmapped.

Wild guess: perhaps it was originally written to use try_to_unmap(),
but hit the VM_BUG_ON_PAGE(page_mapped) after unmapping, because without
TTU_SYNC it may skip page table locks; but unmap_mapping_pages() never
skips them, so fixed the issue.  I did once hit that VM_BUG_ON_PAGE()
since making this change: we could pass TTU_SYNC here, but I think just
delete the check - the race is very rare, this is an ordinary small page
so we don't need to be so paranoid about mapcount surprises, and the
page_ref_freeze() just below already handles the case adequately.

Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>