fs: use nth_page() in place of direct struct page manipulation
author     Zi Yan <ziy@nvidia.com>
           Wed, 13 Sep 2023 20:12:47 +0000 (16:12 -0400)
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
           Tue, 28 Nov 2023 17:20:05 +0000 (17:20 +0000)
commit 8db0ec791f7788cd21e7f91ee5ff42c1c458d0e7 upstream.

When dealing with hugetlb pages, struct page is not guaranteed to be
contiguous on SPARSEMEM without VMEMMAP.  Use nth_page() to handle it
properly.

Without the fix, the wrong subpage might be checked for HWPoison, causing the
wrong number of bytes of a page to be copied to user space. No bug has been
reported; the fix comes from code inspection.
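
For reference, a sketch of why the two differ (not part of this patch): in
trees of this era, nth_page() in include/linux/mm.h is defined roughly along
these lines, translating through the pfn when the memmap is section-split:

    #if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
    /*
     * The memmap is split into per-section struct page arrays, so go
     * through the pfn instead of doing pointer arithmetic across what
     * may be discontiguous arrays.
     */
    #define nth_page(page, n)	pfn_to_page(page_to_pfn((page)) + (n))
    #else
    /* The memmap is (virtually) contiguous: plain arithmetic is fine. */
    #define nth_page(page, n)	((page) + (n))
    #endif

With SPARSEMEM and no VMEMMAP, "page + n" can step past the end of one
per-section array instead of reaching the n-th subpage of the hugetlb page,
whereas nth_page(page, n) always resolves to the correct struct page.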

Link: https://lkml.kernel.org/r/20230913201248.452081-5-zi.yan@sent.com
Fixes: 38c1ddbde6c6 ("hugetlbfs: improve read HWPOISON hugepage")
Signed-off-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Mike Rapoport (IBM) <rppt@kernel.org>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
fs/hugetlbfs/inode.c

index 316c4ce..60fce26 100644
@@ -295,7 +295,7 @@ static size_t adjust_range_hwpoison(struct page *page, size_t offset, size_t byt
        size_t res = 0;
 
        /* First subpage to start the loop. */
-       page += offset / PAGE_SIZE;
+       page = nth_page(page, offset / PAGE_SIZE);
        offset %= PAGE_SIZE;
        while (1) {
                if (is_raw_hwpoison_page_in_hugepage(page))
@@ -309,7 +309,7 @@ static size_t adjust_range_hwpoison(struct page *page, size_t offset, size_t byt
                        break;
                offset += n;
                if (offset == PAGE_SIZE) {
-                       page++;
+                       page = nth_page(page, 1);
                        offset = 0;
                }
        }