IB/mlx5: Reset access mask when looping inside page fault handler
author Moni Shoua <monis@mellanox.com>
Mon, 2 Sep 2019 14:16:07 +0000 (10:16 -0400)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mon, 16 Sep 2019 06:22:10 +0000 (08:22 +0200)
[ Upstream commit 1abe186ed8a6593069bc122da55fc684383fdc1c ]

If the page-fault handler spans multiple MRs, the access mask needs to
be reset before handling each MR; otherwise write access will be
granted to mapped pages instead of read-only access.

Cc: <stable@vger.kernel.org> # 3.19
Fixes: 7bdf65d411c1 ("IB/mlx5: Handle page faults")
Reported-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Moni Shoua <monis@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
drivers/infiniband/hw/mlx5/odp.c

index 9e1cac8..453e5c4 100644
@@ -497,7 +497,7 @@ void mlx5_ib_free_implicit_mr(struct mlx5_ib_mr *imr)
 static int pagefault_mr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr,
                        u64 io_virt, size_t bcnt, u32 *bytes_mapped)
 {
-       u64 access_mask = ODP_READ_ALLOWED_BIT;
+       u64 access_mask;
        int npages = 0, page_shift, np;
        u64 start_idx, page_mask;
        struct ib_umem_odp *odp;
@@ -522,6 +522,7 @@ next_mr:
        page_shift = mr->umem->page_shift;
        page_mask = ~(BIT(page_shift) - 1);
        start_idx = (io_virt - (mr->mmkey.iova & page_mask)) >> page_shift;
+       access_mask = ODP_READ_ALLOWED_BIT;
 
        if (mr->umem->writable)
                access_mask |= ODP_WRITE_ALLOWED_BIT;