From: Hillf Danton
Date: Tue, 10 Jan 2012 23:08:30 +0000 (-0800)
Subject: mm/hugetlb.c: avoid bogus counter of surplus huge page
X-Git-Tag: v3.3-rc1~113^2~81
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=ea5768c74b8e0d6a866508fc6399d5ff958da5e3;p=profile%2Fcommon%2Fkernel-common.git

mm/hugetlb.c: avoid bogus counter of surplus huge page

If we have to hand the newly allocated huge page back to the page
allocator for any reason, the counters changed beforehand should be
restored.  This affects only s390 at present.

Signed-off-by: Hillf Danton
Reviewed-by: Michal Hocko
Acked-by: KAMEZAWA Hiroyuki
Cc: Martin Schwidefsky
Cc: Heiko Carstens
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index bb7dc40..ea8c3a4 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -800,7 +800,7 @@ static struct page *alloc_buddy_huge_page(struct hstate *h, int nid)
 
 	if (page && arch_prepare_hugepage(page)) {
 		__free_pages(page, huge_page_order(h));
-		return NULL;
+		page = NULL;
 	}
 
 	spin_lock(&hugetlb_lock);
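
For context, the one-line change only makes sense with the surrounding
accounting in view: the surplus counters are bumped optimistically before
the allocation, and the rollback for the failure case lives in the locked
section after the hunk. Below is a condensed, non-verbatim sketch of
alloc_buddy_huge_page() as it looked around v3.3; the NUMA, overcommit,
and gfp-flag details are trimmed, so treat the exact calls as
illustrative rather than the kernel source:

	/*
	 * Condensed sketch, not verbatim kernel code: overcommit
	 * checks, NUMA placement, and most gfp flags are elided.
	 */
	static struct page *alloc_buddy_huge_page(struct hstate *h, int nid)
	{
		struct page *page;

		spin_lock(&hugetlb_lock);
		/* Counters are bumped optimistically before allocating. */
		h->nr_huge_pages++;
		h->surplus_huge_pages++;
		spin_unlock(&hugetlb_lock);

		page = alloc_pages(__GFP_COMP | __GFP_NOWARN,
				   huge_page_order(h));

		if (page && arch_prepare_hugepage(page)) {
			__free_pages(page, huge_page_order(h));
			/*
			 * The old code returned NULL here, skipping the
			 * rollback below and leaving both counters inflated.
			 */
			page = NULL;
		}

		spin_lock(&hugetlb_lock);
		if (page) {
			/* Success: account the page to its node. */
		} else {
			/* Failure: undo the optimistic increments. */
			h->nr_huge_pages--;
			h->surplus_huge_pages--;
		}
		spin_unlock(&hugetlb_lock);

		return page;
	}

With page = NULL instead of return NULL, the arch_prepare_hugepage()
failure path falls through to the locked section, where the existing
else branch restores nr_huge_pages and surplus_huge_pages. Since
arch_prepare_hugepage() is a no-op on every architecture except s390,
only s390 could hit the leaked counters.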