mm/huge_memory.c: fix modifying of page protection by insert_pfn_pmd()
author Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Sat, 6 Apr 2019 01:39:10 +0000 (18:39 -0700)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Wed, 17 Apr 2019 06:38:49 +0000 (08:38 +0200)
commit 9a62d69114d73e706fc03420314d8e9622cd580a
tree 883889564012e21a149da57b73a5a3563acd8d50
parent b3a8a3728d7e88c11aec68322223a26636afc86f
mm/huge_memory.c: fix modifying of page protection by insert_pfn_pmd()

commit c6f3c5ee40c10bb65725047a220570f718507001 upstream.

On some architectures, such as ppc64, set_pmd_at() cannot cope with a
situation where a (different) valid entry is already present.

Use pmdp_set_access_flags() instead, which is built to deal with
modifying existing PMD entries.
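
A minimal sketch of the idea in insert_pfn_pmd() (details may differ
slightly from the final upstream diff; variable names follow the
existing insert_pfn_pmd() arguments): when the PMD slot already holds a
valid entry, only update the access flags of that entry via
pmdp_set_access_flags() rather than overwriting it with set_pmd_at():

	/*
	 * Sketch only: ptl is held and pmd points at the (possibly
	 * already populated) PMD slot for addr.
	 */
	if (!pmd_none(*pmd)) {
		if (write) {
			/* Never change the pfn of an existing entry. */
			if (pmd_pfn(*pmd) != pfn_t_to_pfn(pfn)) {
				WARN_ON_ONCE(!is_huge_zero_pmd(*pmd));
				goto out_unlock;
			}
			entry = pmd_mkyoung(*pmd);
			entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
			/* Modify the existing entry in place. */
			if (pmdp_set_access_flags(vma, addr, pmd, entry, 1))
				update_mmu_cache_pmd(vma, addr, pmd);
		}
		goto out_unlock;
	}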

This is similar to commit cae85cb8add3 ("mm/memory.c: fix modifying of
page protection by insert_pfn()")

We also make a similar update for insert_pfn_pud(), even though ppc64
does not support PUD pfn entries yet.

Without this patch we also see the following message in the kernel log:
"BUG: non-zero pgtables_bytes on freeing mm:"

Link: http://lkml.kernel.org/r/20190402115125.18803-1-aneesh.kumar@linux.ibm.com
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Reported-by: Chandan Rajendra <chandan@linux.ibm.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
mm/huge_memory.c