mm, thp: do not set PTE_SPECIAL for huge zero page
authorSung-hun Kim <sfoon.kim@samsung.com>
Fri, 1 Oct 2021 04:06:07 +0000 (13:06 +0900)
committerHoegeun Kwon <hoegeun.kwon@samsung.com>
Mon, 7 Feb 2022 08:01:41 +0000 (17:01 +0900)
In the previous version of the kernel, the huge zero page is remapped
to normal PTE mappings with the PTE_SPECIAL flag set when a split of
the hugepage is requested. This creates a buggy situation when the
kernel later tries to look up the page with vm_normal_page().

This patch resolves the problem by adding a check for the huge zero
page to the if-statement, so the huge zero PMD no longer takes the
special-mapping split path.
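
For illustration only, a minimal user-space sketch of the decision this
patch changes. The struct fields and helper names below are simplified
stand-ins for pmd_trans_huge(), a NULL return from vm_normal_page_pmd(),
and is_huge_zero_pmd(); they are not kernel APIs.

	/* build: cc -o split_sketch split_sketch.c */
	#include <stdbool.h>
	#include <stdio.h>

	struct fake_pmd {
		bool trans_huge;      /* pmd_trans_huge(*pmd)              */
		bool has_normal_page; /* vm_normal_page_pmd() != NULL      */
		bool huge_zero;       /* is_huge_zero_pmd(*pmd)            */
	};

	/* Before the patch: the huge zero page also matches, so it is
	 * remapped with PTE_SPECIAL and vm_normal_page() later fails. */
	static bool takes_special_path_old(const struct fake_pmd *pmd)
	{
		return pmd->trans_huge && !pmd->has_normal_page;
	}

	/* After the patch: the huge zero page is excluded and is split
	 * by the regular path instead. */
	static bool takes_special_path_new(const struct fake_pmd *pmd)
	{
		return pmd->trans_huge && !pmd->has_normal_page &&
		       !pmd->huge_zero;
	}

	int main(void)
	{
		struct fake_pmd huge_zero = { true, false, true };

		printf("huge zero page: old path %d, new path %d\n",
		       takes_special_path_old(&huge_zero),
		       takes_special_path_new(&huge_zero));
		return 0;
	}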

Change-Id: I62946d3c3e92be309ccbe987f24a33503a7e23dc
Signed-off-by: Sung-hun Kim <sfoon.kim@samsung.com>
mm/huge_memory.c

index aedff5b..2ded085 100644 (file)
@@ -2299,7 +2299,7 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 
 repeat:
 #ifdef CONFIG_FINEGRAINED_THP
-       if (pmd_trans_huge(*pmd) && !vm_normal_page_pmd(vma, address, *pmd)) {
+       if (pmd_trans_huge(*pmd) && !vm_normal_page_pmd(vma, address, *pmd) && !is_huge_zero_pmd(*pmd)) {
                struct mm_struct *mm = vma->vm_mm;
                unsigned long haddr = address & HPAGE_PMD_MASK;
                pmd_t orig_pmd;