RDMA/umem: Use ib_dma_max_seg_size instead of dma_get_max_seg_size
Author:     Christoph Hellwig <hch@lst.de>
AuthorDate: Fri, 6 Nov 2020 18:19:33 +0000 (19:19 +0100)
Commit:     Jason Gunthorpe <jgg@nvidia.com>
CommitDate: Thu, 12 Nov 2020 17:33:43 +0000 (13:33 -0400)
RDMA ULPs must not call DMA mapping APIs directly but instead use the
ib_dma_* wrappers.

Fixes: 0c16d9635e3a ("RDMA/umem: Move to allocate SG table from pages")
Link: https://lore.kernel.org/r/20201106181941.1878556-3-hch@lst.de
Reported-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index f1fc7e3..7ca4112 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -229,10 +229,10 @@ struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr,
 
                cur_base += ret * PAGE_SIZE;
                npages -= ret;
-               sg = __sg_alloc_table_from_pages(
-                       &umem->sg_head, page_list, ret, 0, ret << PAGE_SHIFT,
-                       dma_get_max_seg_size(device->dma_device), sg, npages,
-                       GFP_KERNEL);
+               sg = __sg_alloc_table_from_pages(&umem->sg_head, page_list, ret,
+                               0, ret << PAGE_SHIFT,
+                               ib_dma_max_seg_size(device), sg, npages,
+                               GFP_KERNEL);
                umem->sg_nents = umem->sg_head.nents;
                if (IS_ERR(sg)) {
                        unpin_user_pages_dirty_lock(page_list, ret, 0);
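
For reference, a sketch of the wrapper this patch switches to, as defined in
include/rdma/ib_verbs.h. Note this shows the current form: the
ib_uses_virt_dma() check was added by a later patch in the same series, so at
the time of this commit the wrapper likely just forwarded to
dma_get_max_seg_size():

static inline unsigned int ib_dma_max_seg_size(struct ib_device *dev)
{
	/* Software devices (e.g. rxe, siw) do no real DMA, so there is
	 * no segment size limit to honor.
	 */
	if (ib_uses_virt_dma(dev))
		return UINT_MAX;
	return dma_get_max_seg_size(dev->dma_device);
}

Going through the wrapper keeps ib_umem_get() from reaching into
device->dma_device directly, matching the rule stated in the commit message.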