mm: vmscan: simplify lruvec_lru_size()
author:    Johannes Weiner <hannes@cmpxchg.org>
           Sun, 1 Dec 2019 01:55:31 +0000 (17:55 -0800)
committer: Marek Szyprowski <m.szyprowski@samsung.com>
           Wed, 17 Jan 2024 17:15:52 +0000 (18:15 +0100)
Patch series "mm: vmscan: cgroup-related cleanups".

Here are 8 patches that clean up the reclaim code's interaction with
cgroups a bit. They're not supposed to change any behavior, just make
the implementation easier to understand and work with.

This patch (of 8):

This function currently takes the node or lruvec size and subtracts the
zones that are excluded by the classzone index of the allocation.  It uses
four different types of counters to do this.

Just add up the eligible zones.

[cai@lca.pw: fix an undefined behavior for zone id]
Link: http://lkml.kernel.org/r/20191108204407.1435-1-cai@lca.pw
[akpm@linux-foundation.org: deal with the MAX_NR_ZONES special case. per Qian Cai]
Link: http://lkml.kernel.org/r/64E60F6F-7582-427B-8DD5-EF97B1656F5A@lca.pw
Link: http://lkml.kernel.org/r/20191022144803.302233-2-hannes@cmpxchg.org
Change-Id: I2d00ef6cd83336d904ac58cea129404c6ba5451c
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Roman Gushchin <guro@fb.com>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
[backport of the commit 78a3ee9c29c881feeb716fe70218c4fd2a7c342c from mainline]
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
diff --git a/mm/vmscan.c b/mm/vmscan.c
index adacd65..cb2ff39 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -351,32 +351,21 @@ unsigned long zone_reclaimable_pages(struct zone *zone)
  */
 unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru, int zone_idx)
 {
-       unsigned long lru_size = 0;
+       unsigned long size = 0;
        int zid;
 
-       if (!mem_cgroup_disabled()) {
-               for (zid = 0; zid < MAX_NR_ZONES; zid++)
-                       lru_size += mem_cgroup_get_zone_lru_size(lruvec, lru, zid);
-       } else
-               lru_size = node_page_state(lruvec_pgdat(lruvec), NR_LRU_BASE + lru);
-
-       for (zid = zone_idx + 1; zid < MAX_NR_ZONES; zid++) {
+       for (zid = 0; zid <= zone_idx && zid < MAX_NR_ZONES; zid++) {
                struct zone *zone = &lruvec_pgdat(lruvec)->node_zones[zid];
-               unsigned long size;
 
                if (!managed_zone(zone))
                        continue;
 
                if (!mem_cgroup_disabled())
-                       size = mem_cgroup_get_zone_lru_size(lruvec, lru, zid);
+                       size += mem_cgroup_get_zone_lru_size(lruvec, lru, zid);
                else
-                       size = zone_page_state(&lruvec_pgdat(lruvec)->node_zones[zid],
-                                      NR_ZONE_LRU_BASE + lru);
-               lru_size -= min(size, lru_size);
+                       size += zone_page_state(zone, NR_ZONE_LRU_BASE + lru);
        }
-
-       return lru_size;
-
+       return size;
 }
 
 /*