f2fs: convert f2fs_write_cache_pages() to use filemap_get_folios_tag()
author	Vishal Moola (Oracle) <vishal.moola@gmail.com>
	Wed, 4 Jan 2023 21:14:39 +0000 (13:14 -0800)
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
	Mon, 20 Nov 2023 10:52:09 +0000 (11:52 +0100)
commit	ec67c83dd59bfb2d347a0a4d8fe4ab5c1292bce1
tree	9662da813e513b1b008914d34047dcac739bc97e
parent	599befdd799604b70a5f688f98df054dc14b8d0a
f2fs: convert f2fs_write_cache_pages() to use filemap_get_folios_tag()

[ Upstream commit 1cd98ee747cff120ee9b93988ddb7315d8d8f8e7 ]

Convert the function to use a folio_batch instead of a pagevec.  This is
in preparation for the removal of find_get_pages_range_tag().
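
As an illustrative sketch only (not the exact diff: "mapping" stands for
the address_space being written back, and the comments stand in for the
real per-page writeback work), the lookup loop changes roughly from the
find_get_pages_range_tag() pattern to the filemap_get_folios_tag()
pattern:

  /*
   * Before (sketch): on-stack page array filled by
   * find_get_pages_range_tag(); each returned page carries a reference
   * that the caller must drop.
   */
  struct page *pages[F2FS_ONSTACK_PAGES];
  xa_mark_t tag = PAGECACHE_TAG_DIRTY;
  pgoff_t index = 0, end = (pgoff_t)-1;
  int nr_pages, i;

  nr_pages = find_get_pages_range_tag(mapping, &index, end, tag,
                                      F2FS_ONSTACK_PAGES, pages);
  for (i = 0; i < nr_pages; i++) {
          /* ... write back pages[i] ... */
          put_page(pages[i]);
  }

  /*
   * After (sketch): folio_batch filled by filemap_get_folios_tag();
   * folio_batch_release() drops the references taken by the lookup.
   */
  struct folio_batch fbatch;
  int nr_folios;

  folio_batch_init(&fbatch);
  nr_folios = filemap_get_folios_tag(mapping, &index, end, tag, &fbatch);
  for (i = 0; i < nr_folios; i++) {
          /* ... write back fbatch.folios[i] ... */
  }
  folio_batch_release(&fbatch);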

Also modify f2fs_all_cluster_page_ready() to take in a folio_batch instead
of a pagevec.  This does NOT support large folios.  The function currently
only utilizes folios of size 1, so this shouldn't cause any issues right
now.
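
Roughly, the helper's prototype changes along these lines (an
approximation rather than a quote of the patch; the exact parameter names
may differ):

  bool f2fs_all_cluster_page_ready(struct compress_ctx *cc,
                                   struct page **pages, int index,
                                   int nr_pages, bool uptodate);

  /* becomes: callers hand over the folio_batch itself, and the helper
   * indexes fbatch->folios[] instead of a bare page array */
  bool f2fs_all_cluster_page_ready(struct compress_ctx *cc,
                                   struct folio_batch *fbatch, int index,
                                   int nr_folios, bool uptodate);

Treating each batch entry as a single page is what ties this to order-0
folios, hence the note above about large folios.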

This version of the patch limits the number of pages fetched to
F2FS_ONSTACK_PAGES.  If that limit is ever hit, the start index is updated
by hand, since filemap_get_folios_tag() advances the index to just after
the last found folio, not necessarily after the last used page.
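
In code, the fix-up looks roughly like the following (a sketch of the idea
rather than a verbatim quote of the patch; "pages[]", "nr_pages",
"nr_folios", "index", "fbatch", "i" and the "write" label are the
surrounding function's locals):

  for (i = 0; i < nr_folios; i++) {
          struct folio *folio = fbatch.folios[i];
          long idx;

          for (idx = 0; idx < folio_nr_pages(folio); idx++) {
                  /* each page placed in pages[] takes its own reference */
                  pages[nr_pages] = folio_page(folio, idx);
                  folio_get(folio);
                  if (++nr_pages == F2FS_ONSTACK_PAGES) {
                          /*
                           * filemap_get_folios_tag() left "index" just
                           * after the last *found* folio; reset it so the
                           * next lookup restarts right after the last
                           * *used* page instead.
                           */
                          index = folio->index + idx + 1;
                          folio_batch_release(&fbatch);
                          goto write;
                  }
          }
  }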

Link: https://lkml.kernel.org/r/20230104211448.4804-15-vishal.moola@gmail.com
Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Acked-by: Chao Yu <chao@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Stable-dep-of: c5d3f9b7649a ("f2fs: compress: fix deadloop in f2fs_write_cache_pages()")
Signed-off-by: Sasha Levin <sashal@kernel.org>
fs/f2fs/data.c