SLOB already allocates smaller objects from their own lists in order to reduce
overall external fragmentation and increase repeatability; free objects back to
their own lists as well.
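
For context, SLOB keeps three lists of partially free pages split at
SLOB_BREAK1 (256 bytes) and SLOB_BREAK2 (1024 bytes), and slob_alloc() already
picks a list by object size. The sketch below is a minimal userspace
illustration of that selection rule, not kernel code: slob_list_for() is a
hypothetical helper (the kernel does this inline) and the list type is a stub.

#include <stdio.h>

/* Size-class break points, as defined in mm/slob.c. */
#define SLOB_BREAK1 256
#define SLOB_BREAK2 1024

/* Stand-in for the kernel's struct list_head; only identity matters here. */
struct list_head { struct list_head *next, *prev; };

static struct list_head free_slob_small;   /* objects < SLOB_BREAK1 bytes */
static struct list_head free_slob_medium;  /* objects < SLOB_BREAK2 bytes */
static struct list_head free_slob_large;   /* larger objects, up to a page */

/*
 * Hypothetical helper mirroring the inline list selection in slob_alloc();
 * after this patch, slob_free() applies the identical rule so a page goes
 * back onto the list of its own size class.
 */
static struct list_head *slob_list_for(size_t size)
{
	if (size < SLOB_BREAK1)
		return &free_slob_small;
	else if (size < SLOB_BREAK2)
		return &free_slob_medium;
	else
		return &free_slob_large;
}

int main(void)
{
	printf("128  -> %s\n", slob_list_for(128)  == &free_slob_small  ? "small"  : "?");
	printf("512  -> %s\n", slob_list_for(512)  == &free_slob_medium ? "medium" : "?");
	printf("2048 -> %s\n", slob_list_for(2048) == &free_slob_large  ? "large"  : "?");
	return 0;
}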
Here is the /proc/meminfo output on my test machine:
without this patch:
===
MemTotal:       1030720 kB
MemFree:         750012 kB
Buffers:          15496 kB
Cached:          160396 kB
SwapCached:           0 kB
Active:          105024 kB
Inactive:        145604 kB
Active(anon):     74816 kB
Inactive(anon):    2180 kB
Active(file):     30208 kB
Inactive(file):  143424 kB
Unevictable:         16 kB
...
with this patch:
===
MemTotal:       1030720 kB
MemFree:         751908 kB
Buffers:          15492 kB
Cached:          160280 kB
SwapCached:           0 kB
Active:          102720 kB
Inactive:        146140 kB
Active(anon):     73168 kB
Inactive(anon):    2180 kB
Active(file):     29552 kB
Inactive(file):  143960 kB
Unevictable:         16 kB
...
The result shows MemFree improved by 1896 kB (751908 kB minus 750012 kB), almost 2 MB!
And when I tested it on an embedded system with 64 MB of RAM, I found this path
was never called during kernel bootup.
Acked-by: Matt Mackall <mpm@selenic.com>
Signed-off-by: Bob Liu <lliubbo@gmail.com>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
--- a/mm/slob.c
+++ b/mm/slob.c
 	slob_t *prev, *next, *b = (slob_t *)block;
 	slobidx_t units;
 	unsigned long flags;
+	struct list_head *slob_list;
 
 	if (unlikely(ZERO_OR_NULL_PTR(block)))
 		return;
...
 		set_slob(b, units,
 			 (void *)((unsigned long)(b +
 					SLOB_UNITS(PAGE_SIZE)) & PAGE_MASK));
-		set_slob_page_free(sp, &free_slob_small);
+		if (size < SLOB_BREAK1)
+			slob_list = &free_slob_small;
+		else if (size < SLOB_BREAK2)
+			slob_list = &free_slob_medium;
+		else
+			slob_list = &free_slob_large;
+		set_slob_page_free(sp, slob_list);
 		goto out;
 	}
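
With this hunk, slob_free() selects a list with the same SLOB_BREAK1/SLOB_BREAK2
break points slob_alloc() uses, so a page that becomes partially free returns
to the list of its own size class instead of always landing on free_slob_small.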