s390/mm: fix CMMA vs KSM vs others
author Christian Borntraeger <borntraeger@de.ibm.com>
Sun, 9 Apr 2017 20:09:38 +0000 (22:09 +0200)
committer Martin Schwidefsky <schwidefsky@de.ibm.com>
Wed, 12 Apr 2017 05:40:24 +0000 (07:40 +0200)
On heavy paging with KSM I see guest data corruption. It turns out that
KSM will add pages to its tree for which the mapping returns true for
pte_unused (or might become unused later). KSM will then unmap such pages
and reinstantiate them with different attributes (e.g. write protected or
special, as in replace_page or write_protect_page). This uncovered a bug
in our pagetable handling: we must remove the unused flag as soon as an
entry becomes present again.
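
For illustration only, here is a minimal standalone C sketch of the logic
the hunk below adds to set_pte_at(). It is not the kernel code itself and
the bit values are made up; it only models the rule that a present entry
must never keep the software "unused" hint, since otherwise the freshly
reinstated page could still be treated as discardable.

    #include <stdio.h>

    #define PAGE_PRESENT 0x001UL /* hypothetical: entry maps a valid page */
    #define PAGE_UNUSED  0x080UL /* hypothetical: content may be discarded */

    static unsigned long set_pte(unsigned long entry)
    {
            /* The fix: clear the unused hint once the entry is present. */
            if (entry & PAGE_PRESENT)
                    entry &= ~PAGE_UNUSED;
            return entry;
    }

    int main(void)
    {
            /* KSM reinstates a mapping of a previously unused page. */
            unsigned long pte = PAGE_PRESENT | PAGE_UNUSED;

            printf("before: %#lx after: %#lx\n", pte, set_pte(pte));
            return 0;
    }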

Cc: stable@vger.kernel.org
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
arch/s390/include/asm/pgtable.h

index 93e37b12e88237766821369e19827e5e2d844a1b..ecec682bb5166a41d9c86025fd3f4e9461f6ec71 100644
@@ -1051,6 +1051,8 @@ static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
 {
        if (!MACHINE_HAS_NX)
                pte_val(entry) &= ~_PAGE_NOEXEC;
+       if (pte_present(entry))
+               pte_val(entry) &= ~_PAGE_UNUSED;
        if (mm_has_pgste(mm))
                ptep_set_pte_at(mm, addr, ptep, entry);
        else