mm/vmalloc: use batched page requests in bulk-allocator
author Uladzislau Rezki (Sony) <urezki@gmail.com>
Thu, 2 Sep 2021 21:57:16 +0000 (14:57 -0700)
committer Linus Torvalds <torvalds@linux-foundation.org>
Fri, 3 Sep 2021 16:58:14 +0000 (09:58 -0700)
With simultaneous vmalloc allocations, for example a 1GB allocation on a
12-CPU system, my machine is able to hit "BUG: soft lockup" on a
!CONFIG_PREEMPT kernel.

  RIP: 0010:__alloc_pages_bulk+0xa9f/0xbb0
  Call Trace:
   __vmalloc_node_range+0x11c/0x2d0
   __vmalloc_node+0x4b/0x70
   fix_size_alloc_test+0x44/0x60 [test_vmalloc]
   test_func+0xe7/0x1f0 [test_vmalloc]
   kthread+0x11a/0x140
   ret_from_fork+0x22/0x30

To address this issue, invoke the bulk-allocator repeatedly until all pages
are obtained, i.e. do batched page requests, calling cond_resched() between
batches to allow rescheduling.  The batch size is hard-coded at 100 pages
per call.

Link: https://lkml.kernel.org/r/20210707182639.31282-1-urezki@gmail.com
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Hillf Danton <hdanton@sina.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Oleksiy Avramchenko <oleksiy.avramchenko@sonymobile.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/vmalloc.c

index d5cd528..24bc65f 100644 (file)
@@ -2779,7 +2779,7 @@ EXPORT_SYMBOL_GPL(vmap_pfn);
 
 static inline unsigned int
 vm_area_alloc_pages(gfp_t gfp, int nid,
-               unsigned int order, unsigned long nr_pages, struct page **pages)
+               unsigned int order, unsigned int nr_pages, struct page **pages)
 {
        unsigned int nr_allocated = 0;
 
@@ -2789,10 +2789,32 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
         * to fails, fallback to a single page allocator that is
         * more permissive.
         */
-       if (!order)
-               nr_allocated = alloc_pages_bulk_array_node(
-                       gfp, nid, nr_pages, pages);
-       else
+       if (!order) {
+               while (nr_allocated < nr_pages) {
+                       unsigned int nr, nr_pages_request;
+
+                       /*
+                        * A maximum allowed request is hard-coded and is 100
+                        * pages per call. That is done in order to prevent a
+                        * long preemption off scenario in the bulk-allocator
+                        * so the range is [1:100].
+                        */
+                       nr_pages_request = min(100U, nr_pages - nr_allocated);
+
+                       nr = alloc_pages_bulk_array_node(gfp, nid,
+                               nr_pages_request, pages + nr_allocated);
+
+                       nr_allocated += nr;
+                       cond_resched();
+
+                       /*
+                        * If zero or pages were obtained partly,
+                        * fallback to a single page allocator.
+                        */
+                       if (nr != nr_pages_request)
+                               break;
+               }
+       } else
                /*
                 * Compound pages required for remap_vmalloc_page if
                 * high-order pages.