memcg: no uncharged pages reach page_cgroup_zoneinfo
author Johannes Weiner <hannes@cmpxchg.org>
Wed, 23 Mar 2011 23:42:26 +0000 (16:42 -0700)
committer Linus Torvalds <torvalds@linux-foundation.org>
Thu, 24 Mar 2011 02:46:26 +0000 (19:46 -0700)
commit ad324e94475a04cfcdfdb11ad20f8ea81268e411
tree 4326bb602a3528071ffd6f3030c3a82c76a3454e
parent f212ad7cf9c73f8a7fa160e223dcb3f074441a72
memcg: no uncharged pages reach page_cgroup_zoneinfo

This patch series removes the direct page pointer from struct page_cgroup,
which saves 20% of per-page memcg memory overhead (Fedora and Ubuntu
enable memcg by default, and apparently openSUSE does too).

The node id or section number is encoded in the remaining free bits of
pc->flags, which allows the corresponding page to be calculated without
the extra pointer.
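
A minimal sketch of the idea, for the flatmem case only (this is not the
code added by the series; PCG_ARRAYID_SHIFT, PCG_ARRAYID_MASK and
page_cgroup_array_id() are illustrative names):

static int page_cgroup_array_id(struct page_cgroup *pc)
{
	/* illustrative: node id assumed to sit above the PCG_* state bits */
	return (pc->flags >> PCG_ARRAYID_SHIFT) & PCG_ARRAYID_MASK;
}

struct page *lookup_cgroup_page(struct page_cgroup *pc)
{
	pg_data_t *pgdat = NODE_DATA(page_cgroup_array_id(pc));
	unsigned long pfn;

	/* the offset into the node's page_cgroup array maps 1:1 to a pfn */
	pfn = pc - pgdat->node_page_cgroup + pgdat->node_start_pfn;
	return pfn_to_page(pfn);
}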

I ran what I think is a worst-case microbenchmark that just cats a large
sparse file to /dev/null, because it means that walking the LRU list on
behalf of per-cgroup reclaim and looking up pages from page_cgroups is
happening constantly and at a high rate.  But it made no measurable
difference.  A profile reported a 0.11% share of the new
lookup_cgroup_page() function in this benchmark.

This patch:

All call sites check PCG_USED before using pc->mem_cgroup, so the latter
is never NULL by the time page_cgroup_zoneinfo() looks at it.
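
The pattern at the call sites looks roughly like this (a simplified
sketch, not the literal diff):

	lock_page_cgroup(pc);
	if (PageCgroupUsed(pc))
		mz = page_cgroup_zoneinfo(pc);
	unlock_page_cgroup(pc);

so a defensive NULL check inside the lookup is dead code and can go:

static struct mem_cgroup_per_zone *
page_cgroup_zoneinfo(struct page_cgroup *pc)
{
	struct mem_cgroup *mem = pc->mem_cgroup;	/* never NULL here */

	return mem_cgroup_zoneinfo(mem, page_cgroup_nid(pc),
				   page_cgroup_zid(pc));
}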

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Minchan Kim <minchan.kim@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/memcontrol.c