mm/hmm: Fix error flows in hmm_invalidate_range_start
author Jason Gunthorpe <jgg@mellanox.com>
Fri, 7 Jun 2019 15:10:33 +0000 (12:10 -0300)
committer Jason Gunthorpe <jgg@mellanox.com>
Thu, 27 Jun 2019 16:05:02 +0000 (13:05 -0300)
commit 5a136b4ae327e7f6be9c984a010df8d7ea5a4f83
tree f4de2700081df24f82818149ecc4c1cd14709785
parent 14331726a3c47bb1649dab155a84610f509d414e

If the trylock on hmm->mirrors_sem fails, the function returns without
decrementing the notifiers count that was previously incremented. Since
the caller will not call invalidate_range_end() on EAGAIN, this results
in the count becoming permanently incremented, and deadlock.
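The corrected error flow can be shown as a user-space sketch. Kernel
types are stubbed out here and the names only echo mm/hmm.c; the point is
that a failed trylock must undo the notifier increment before returning
-EAGAIN, because invalidate_range_end() will never run for that call:

```c
#include <assert.h>
#include <stdbool.h>

#define EAGAIN 11

/* Simplified stand-in for struct hmm; mirrors_locked models a
 * contended mirrors_sem. */
struct hmm {
	int notifiers;       /* count of in-flight invalidations */
	bool mirrors_locked; /* stands in for mirrors_sem */
};

/* Simulated down_read_trylock(): fails while the lock is held. */
static bool trylock_mirrors(struct hmm *hmm)
{
	if (hmm->mirrors_locked)
		return false;
	hmm->mirrors_locked = true;
	return true;
}

static int invalidate_range_start(struct hmm *hmm, bool blockable)
{
	hmm->notifiers++;		/* mark ranges invalid */
	if (!blockable && !trylock_mirrors(hmm)) {
		hmm->notifiers--;	/* the fix: unwind before EAGAIN */
		return -EAGAIN;
	}
	/* ... sync_cpu_device_pagetables() on each mirror ... */
	hmm->mirrors_locked = false;	/* up_read() */
	return 0;
}
```

On success the count intentionally stays incremented; the matching
invalidate_range_end() decrements it.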

If sync_cpu_device_pagetables() required blocking, the function would
not return EAGAIN even though the device continues to touch the
pages. This is a violation of the mmu notifier contract.

Switch, and rename, the ranges_lock to a spin lock so we can reliably
obtain it without blocking during error unwind.

The error unwind is necessary since the notifiers count must be held
incremented across the call to sync_cpu_device_pagetables() as we cannot
allow the range to become marked valid by a parallel
invalidate_start/end() pair while doing sync_cpu_device_pagetables().
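Why the count must stay elevated can be sketched the same way. In this
simplified model (names illustrative, not the kernel API), a range may
only be marked valid again when the last in-flight invalidation ends, so
an end() from one pair cannot re-validate a range while another pair is
still between start() and end():

```c
#include <assert.h>
#include <stdbool.h>

struct hmm { int notifiers; };
struct hmm_range { bool valid; };

static void invalidate_range_start(struct hmm *hmm, struct hmm_range *range)
{
	hmm->notifiers++;	/* held across sync_cpu_device_pagetables() */
	range->valid = false;
}

static void invalidate_range_end(struct hmm *hmm, struct hmm_range *range)
{
	if (--hmm->notifiers == 0)
		range->valid = true; /* only once nothing is in flight */
}
```

If the count were dropped early, the first end() would see zero and mark
the range valid while the second invalidation was still running.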

Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Philip Yang <Philip.Yang@amd.com>
include/linux/hmm.h
mm/hmm.c