mm/memory_hotplug: don't check for "all holes" in shrink_zone_span()
author David Hildenbrand <david@redhat.com>
Tue, 4 Feb 2020 01:34:16 +0000 (17:34 -0800)
committer Linus Torvalds <torvalds@linux-foundation.org>
Tue, 4 Feb 2020 03:05:23 +0000 (03:05 +0000)
If we have holes, the holes will automatically get detected and removed
once we remove the next bigger/smaller section.  The extra checks can go.

Link: http://lkml.kernel.org/r/20191006085646.5768-9-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Pankaj Gupta <pagupta@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/memory_hotplug.c

index 77cb164a2d9664e25f37a09a997affc77b6d647b..61bd62d15fff06259c6229edc6efa9c5b14ab5d8 100644 (file)
@@ -411,6 +411,9 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
                if (pfn) {
                        zone->zone_start_pfn = pfn;
                        zone->spanned_pages = zone_end_pfn - pfn;
+               } else {
+                       zone->zone_start_pfn = 0;
+                       zone->spanned_pages = 0;
                }
        } else if (zone_end_pfn == end_pfn) {
                /*
@@ -423,34 +426,11 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
                pfn = find_biggest_section_pfn(nid, zone, zone_start_pfn,
                                               start_pfn);
                if (pfn)
                        zone->spanned_pages = pfn - zone_start_pfn + 1;
+               else {
+                       zone->zone_start_pfn = 0;
+                       zone->spanned_pages = 0;
+               }
        }
-
-       /*
-        * The section is not biggest or smallest mem_section in the zone, it
-        * only creates a hole in the zone. So in this case, we need not
-        * change the zone. But perhaps, the zone has only hole data. Thus
-        * it check the zone has only hole or not.
-        */
-       pfn = zone_start_pfn;
-       for (; pfn < zone_end_pfn; pfn += PAGES_PER_SUBSECTION) {
-               if (unlikely(!pfn_to_online_page(pfn)))
-                       continue;
-
-               if (page_zone(pfn_to_page(pfn)) != zone)
-                       continue;
-
-               /* Skip range to be removed */
-               if (pfn >= start_pfn && pfn < end_pfn)
-                       continue;
-
-               /* If we find valid section, we have nothing to do */
-               zone_span_writeunlock(zone);
-               return;
-       }
-
-       /* The zone has no valid section */
-       zone->zone_start_pfn = 0;
-       zone->spanned_pages = 0;
        zone_span_writeunlock(zone);
 }
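
For illustration, a small user-space sketch of the resulting shrink_zone_span() flow follows. Every name in it (toy_zone, page_online[], find_smallest_online_pfn(), find_biggest_online_pfn(), toy_shrink_zone_span(), NR_PFNS) is invented for the sketch, and it is heavily simplified (no locking, no subsection stride, no node or zone checks), so it is not the kernel implementation. It only demonstrates the point of the patch: when the removed range was the lowest or highest part of the zone and the find helper comes back empty, the zone is already known to consist only of holes, so the new else branches can zero the span and the deleted open-coded scan is unnecessary.

/*
 * User-space toy model of the patched shrink_zone_span() flow.  All names
 * (toy_zone, page_online[], find_*_online_pfn(), NR_PFNS) are invented for
 * illustration and heavily simplified; this is not the kernel code.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_PFNS 16UL			/* pretend the zone covers pfns 0..15 */

static bool page_online[NR_PFNS];	/* stand-in for pfn_to_online_page() */

struct toy_zone {
	unsigned long zone_start_pfn;
	unsigned long spanned_pages;
};

/* Lowest online pfn in [start, end), or 0 if that range is all holes. */
static unsigned long find_smallest_online_pfn(unsigned long start,
					      unsigned long end)
{
	for (unsigned long pfn = start; pfn < end; pfn++)
		if (page_online[pfn])
			return pfn;
	return 0;
}

/* Highest online pfn in [low, high], or 0 if that range is all holes. */
static unsigned long find_biggest_online_pfn(unsigned long high,
					     unsigned long low)
{
	for (unsigned long pfn = high; pfn >= low; pfn--) {
		if (page_online[pfn])
			return pfn;
		if (pfn == 0)
			break;		/* avoid unsigned wrap-around */
	}
	return 0;
}

/* Mirrors the patched logic: shrink from either end, or zero the span. */
static void toy_shrink_zone_span(struct toy_zone *zone,
				 unsigned long start_pfn,
				 unsigned long end_pfn)
{
	unsigned long zone_start_pfn = zone->zone_start_pfn;
	unsigned long zone_end_pfn = zone_start_pfn + zone->spanned_pages;
	unsigned long pfn;

	if (zone_start_pfn == start_pfn) {
		/* Removing the lowest range: look for the new lowest pfn. */
		pfn = find_smallest_online_pfn(end_pfn, zone_end_pfn);
		if (pfn) {
			zone->zone_start_pfn = pfn;
			zone->spanned_pages = zone_end_pfn - pfn;
		} else {
			zone->zone_start_pfn = 0;
			zone->spanned_pages = 0;
		}
	} else if (zone_end_pfn == end_pfn) {
		/* Removing the highest range: look for the new highest pfn. */
		pfn = find_biggest_online_pfn(start_pfn - 1, zone_start_pfn);
		if (pfn) {
			zone->spanned_pages = pfn - zone_start_pfn + 1;
		} else {
			zone->zone_start_pfn = 0;
			zone->spanned_pages = 0;
		}
	}
	/* Removing an interior range only punches a hole; the span stays. */
}

int main(void)
{
	struct toy_zone zone = { .zone_start_pfn = 0, .spanned_pages = NR_PFNS };
	unsigned long pfn;

	for (pfn = 0; pfn < NR_PFNS; pfn++)
		page_online[pfn] = true;

	/* Offline + remove the upper half: the span shrinks from the top. */
	for (pfn = 8; pfn < NR_PFNS; pfn++)
		page_online[pfn] = false;
	toy_shrink_zone_span(&zone, 8, NR_PFNS);
	printf("after [8,16): start=%lu spanned=%lu\n",
	       zone.zone_start_pfn, zone.spanned_pages);

	/* Offline + remove the rest: find_smallest_online_pfn() finds nothing,
	 * so the span is zeroed without a separate "all holes" walk. */
	for (pfn = 0; pfn < 8; pfn++)
		page_online[pfn] = false;
	toy_shrink_zone_span(&zone, 0, 8);
	printf("after [0,8):  start=%lu spanned=%lu\n",
	       zone.zone_start_pfn, zone.spanned_pages);
	return 0;
}

Built with a plain cc, the second toy_shrink_zone_span() call should print start=0 spanned=0, which is exactly the "zone has only holes" case the deleted loop used to detect.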