mm, drm/ttm: Fix vm page protection handling
author Thomas Hellstrom <thellstrom@vmware.com>
Fri, 22 Nov 2019 08:34:35 +0000 (09:34 +0100)
committer Thomas Hellstrom <thellstrom@vmware.com>
Thu, 16 Jan 2020 09:32:41 +0000 (10:32 +0100)
commit 5379e4dd3220e23f68ce70b76b3a52a9a68cee05
tree 36945924b0141ad317be66d73676423a17be6c1b
parent 574c5b3d0e4c0803d3094fd27f83e161345ebe2f
mm, drm/ttm: Fix vm page protection handling

TTM graphics buffer objects may, transparently to user-space, move
between IO and system memory. When that happens, all PTEs pointing to the
old location are zapped before the move and then faulted in again if
needed. At fault time, the page protection caching- and encryption bits
may have changed and may differ from those of
struct vm_area_struct::vm_page_prot.

We were using an ugly hack to set the page protection correctly.
Fix that and instead export and use vmf_insert_mixed_prot(), or use
vmf_insert_pfn_prot().
Also get the default page protection from
struct vm_area_struct::vm_page_prot rather than from vm_get_page_prot().
This way we catch modifications done by the vm system for drivers that
want write-notification.
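The approach described above might be sketched as follows. This is an
illustrative, simplified kernel-style fragment, not the actual patch; the
helper name ttm_fault_insert_sketch and its exact parameters are
hypothetical, while vmf_insert_mixed_prot(), vmf_insert_pfn_prot(),
pgprot_writecombine() and pgprot_noncached() are real kernel interfaces.

```c
/* Illustrative sketch only -- not the actual patch. The idea: derive the
 * insertion protection from vma->vm_page_prot (preserving write-notify
 * modifications made by the vm system), adjust the caching bits for the
 * buffer object's current placement, and pass the result to the
 * vmf_insert_*_prot() helpers instead of hacking vma->vm_page_prot.
 */
static vm_fault_t ttm_fault_insert_sketch(struct vm_fault *vmf,
					  unsigned long pfn,
					  enum ttm_caching_state caching)
{
	struct vm_area_struct *vma = vmf->vma;

	/* Start from the vma's protection, not vm_get_page_prot(). */
	pgprot_t prot = vma->vm_page_prot;

	/* Caching mode may have changed after a BO move. */
	if (caching == tt_wc)
		prot = pgprot_writecombine(prot);
	else if (caching == tt_uncached)
		prot = pgprot_noncached(prot);

	/* Insert the PTE with the adjusted protection. */
	if (vma->vm_flags & VM_MIXEDMAP)
		return vmf_insert_mixed_prot(vma, vmf->address,
					     __pfn_to_pfn_t(pfn, PFN_DEV),
					     prot);
	return vmf_insert_pfn_prot(vma, vmf->address, pfn, prot);
}
```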

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: "Christian König" <christian.koenig@amd.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
drivers/gpu/drm/ttm/ttm_bo_vm.c
mm/memory.c