shmem: fix smaps BUG sleeping while atomic
author	Hugh Dickins <hughd@google.com>
	Wed, 23 Aug 2023 05:14:47 +0000 (22:14 -0700)
committer	Andrew Morton <akpm@linux-foundation.org>
	Thu, 24 Aug 2023 21:59:47 +0000 (14:59 -0700)
smaps_pte_hole_lookup() is calling shmem_partial_swap_usage() with page
table lock held: but shmem_partial_swap_usage() does cond_resched_rcu() if
need_resched(): "BUG: sleeping function called from invalid context".

Since shmem_partial_swap_usage() is designed to count across a range, but
smaps_pte_hole_lookup() only calls it for a single page slot, just break
out of the loop on the last or only page, before checking need_resched().

Link: https://lkml.kernel.org/r/6fe3b3ec-abdf-332f-5c23-6a3b3a3b11a9@google.com
Fixes: 230100321518 ("mm/smaps: simplify shmem handling of pte holes")
Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Peter Xu <peterx@redhat.com>
Cc: <stable@vger.kernel.org> [5.16+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/shmem.c

index f5af4b943e4286e3b414d40c0ebb09d3a6ec0d41..d963c747dabca5e075482c257aad6efcd8e03eaa 100644 (file)
@@ -806,14 +806,16 @@ unsigned long shmem_partial_swap_usage(struct address_space *mapping,
        XA_STATE(xas, &mapping->i_pages, start);
        struct page *page;
        unsigned long swapped = 0;
+       unsigned long max = end - 1;
 
        rcu_read_lock();
-       xas_for_each(&xas, page, end - 1) {
+       xas_for_each(&xas, page, max) {
                if (xas_retry(&xas, page))
                        continue;
                if (xa_is_value(page))
                        swapped++;
-
+               if (xas.xa_index == max)
+                       break;
                if (need_resched()) {
                        xas_pause(&xas);
                        cond_resched_rcu();